Everything else has a default constructor that does the straightforward
thing of initializing most members to a default value, except for the
size.
We explicitly initialize the size (and the other members, for
consistency) to prevent potential uninitialized reads, particularly
given the large surface area over which this struct is used.
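As a rough illustration (the struct and member names are hypothetical, not the actual declaration), default member initializers guarantee that a default-constructed instance has no indeterminate members:
```
#include <cstdint>

// Hypothetical example: every member gets an explicit default, so reads of
// a default-constructed instance (including the size) are never uninitialized.
struct ImageInfo {
    std::uint32_t width = 1;
    std::uint32_t height = 1;
    std::uint32_t depth = 1;
    std::uint32_t num_samples = 1;
};
```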
In the texture cache we handle multisampled images by keeping their real
size in samples (e.g. 1920x1080 with 4 samples is stored as 3840x2160).
This works nicely with size matches and other comparisons, but the guest
size calculation did not take this into account, so the size was
multiplied (again) by the number of samples per dimension.
For example, a 3840x2160 texture cache image had its width and height
multiplied by 2, resulting in a much larger texture.
Fix this issue.
- Fixes a performance regression on cooking-related titles that appeared
  when an unrelated bug was fixed.
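A minimal sketch of the miscalculation, using made-up constants rather than the real texture cache code:
```
// The cached extent already includes the samples: 1920x1080 at 4 samples
// (2x2 per dimension) is stored as 3840x2160.
constexpr unsigned cached_width = 3840; // already width * 2
constexpr unsigned samples_x = 2;

// Buggy guest-size calculation: multiplied by the samples a second time.
constexpr unsigned wrong_width = cached_width * samples_x; // 7680

// Fixed: the stored extent is already in samples, so it is used as-is.
constexpr unsigned fixed_width = cached_width; // 3840
```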
Images used as render targets were not being "prepared", causing
desynchronizations in the texture cache. Needs #6669 to avoid
performance regressions on certain cooking titles.
- Fixes black shadows on Age of Calamity.
Removes common_sizes.h in favor of `_KiB`, `_MiB`, `_GiB`, etc.
user-defined literals within literals.h.
To keep the global namespace clean, users will have to use:
```
using namespace Common::Literals;
```
to access these literals.
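Typical usage then looks like the following; the include path and constant names are only examples:
```
#include "common/literals.h"

using namespace Common::Literals;

// Example constants only; any byte-size value can be written this way.
constexpr auto StreamBufferSize = 64_MiB;
constexpr auto PageSize = 4_KiB;
```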
Users may want to fall back to the CPU ASTC texture decoder due to hangs
and crashes that may be caused by keeping the GPU under compute-heavy
loads for extended periods of time. This is especially the case in games
such as Astral Chain, which make extensive use of ASTC textures.
* Wrong alignment in a u64 LOG_DEBUG read -> fixed with memcpy.
* Huge shift exponent in the stride calculation for linear buffers; the result was unused -> calculation skipped.
* Large shift in the buffer cache when word == 0 -> skip checking for set bits.
None of these were critical, so this should not change any behavior,
at least under the assumption that the last case relied on masking behavior, which always yields continuous_bits = 0.
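A minimal sketch of the first and third patterns (function names and signatures are hypothetical, not the actual call sites; the second fix simply removes an unused calculation):
```
#include <bit>
#include <cstddef>
#include <cstdint>
#include <cstring>

// 1) Misaligned u64 read for logging: memcpy is well-defined for any
//    alignment, unlike dereferencing a cast pointer.
std::uint64_t ReadU64(const unsigned char* data) {
    std::uint64_t value;
    std::memcpy(&value, data, sizeof(value));
    return value;
}

// 3) Buffer-cache-style bit scan: when word == 0, the trailing-zero count is
//    64 and the following shift would be undefined behavior, so skip the scan
//    entirely; continuous_bits is 0 in that case anyway.
std::size_t CountContinuousBits(std::uint64_t word) {
    if (word == 0) {
        return 0;
    }
    const int first_set = std::countr_zero(word); // guaranteed < 64 here
    return static_cast<std::size_t>(std::countr_one(word >> first_set));
}
```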