SMDH is a metadata format used in some executable formats for the
Nintendo 3DS. Switch executables don't utilize this metadata format, so
this is just a holdover from Citra and can be corrected.
Allows the loading screen code to compile with implicit string
conversions disabled.
While we're at it, remove unnecessary const usages and add const to
nearby variables where appropriate.
Gets rid of the need to special-case brace handling depending on the
overload used, and makes it consistent across the board with how fmt
handles them.
Plain strings (those without formatting arguments) are forwarded
directly to std::string's constructor, so we don't need to worry about
the performance difference here, as it'll be identical.
In a lot of places throughout the decompiler, string concatenation via
operator+ is used quite heavily. This is usually fine when used
sparingly, but it becomes a problem when used extensively: operator+
creates an entirely new heap-allocated temporary string, and given we
perform expressions like:
std::string thing = a + b + c + d;
this ends up with a lot of unnecessary temporary strings being created
and discarded, which kind of thrashes the heap more than we need to.
Given we utilize fmt in some AddLine calls, we can make this a part of
the ShaderWriter's API. We can make an overload that simply acts as a
passthrough to fmt.
This way, whenever things need to be appended to a string, the operation
can be done via a single string formatting operation instead of
discarding numerous temporary strings. This also has the benefit of
making the strings themselves look nicer and makes it easier to spot
errors in them.
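As a rough sketch of the shape of that API (the class layout and member
names here are assumed, as is a recent fmt providing fmt::format_string):

#include <string>
#include <string_view>
#include <utility>

#include <fmt/format.h>

class ShaderWriter {
public:
    // Plain strings are appended directly.
    void AddLine(std::string_view text) {
        shader_source += text;
        shader_source += '\n';
    }

    // Formatted strings are passed straight through to fmt and appended
    // as a single string.
    template <typename... Args>
    void AddLine(fmt::format_string<Args...> text, Args&&... args) {
        AddLine(fmt::format(text, std::forward<Args>(args)...));
    }

private:
    std::string shader_source;
};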
Many of these constructors don't even need to be templated. The only
ones that need to be templated are the ones that actually make use of
the parameter pack.
Even then, since std::vector accepts an initializer list, we can supply
the parameter pack directly to it instead of creating our own copy of
the list, then copying it again into the std::vector.
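Roughly, with invented stand-in types:

#include <string>
#include <utility>
#include <vector>

class OperationNode {
public:
    // No parameter pack involved, so no reason for this to be a template.
    explicit OperationNode(std::string code) : code{std::move(code)} {}

    // Only this constructor needs to be templated; the pack expands
    // directly into the vector's braced initializer rather than into a
    // separate list that then gets copied into the std::vector.
    template <typename... Args>
    OperationNode(std::string code, Args&&... args)
        : code{std::move(code)}, operands{std::forward<Args>(args)...} {}

private:
    std::string code;
    std::vector<std::string> operands;
};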
Given the class contains quite a lot of non-trivial types, place the
constructor and destructor within the cpp file to avoid inlining
construction and destruction code everywhere the class is used.
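As a generic sketch of the pattern (names invented):

// widget.h
#include <map>
#include <string>
#include <vector>

class Widget {
public:
    Widget();
    ~Widget();   // declared here, defined out of line in the cpp file

private:
    std::vector<std::string> entries;
    std::map<std::string, std::string> lookup;
};

// widget.cpp
Widget::Widget() = default;
Widget::~Widget() = default;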
Avoids performing copies into the pair being returned. Instead, we can
just move the resources into the pair, avoiding the need to make copies
of both the std::string and ShaderEntries struct.
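In other words, something along these lines (simplified types):

#include <cstddef>
#include <string>
#include <utility>

struct ShaderEntries {
    std::size_t const_buffer_count = 0;   // stand-in member
};

std::pair<std::string, ShaderEntries> MakeResult() {
    std::string code = "/* generated GLSL */";
    ShaderEntries entries{};
    // Move, rather than copy, both locals into the returned pair.
    return {std::move(code), std::move(entries)};
}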
Given the offset is assigned a fixed value in the constructor, we can
just assign it directly and get rid of the need to write the name of the
variable again in the constructor initializer list.
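That is, something like the following (class and member names assumed):

#include <cstdint>

class ShaderWriter {
private:
    // The fixed value lives at the declaration, so no constructor has to
    // repeat the member's name in its initializer list.
    std::uint32_t offset = 0;
};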
Given the disk shader cache contains non-trivial types, we should
default it in the cpp file in order to prevent inlining of the
complex destruction logic.
The standard library expects hash specializations that don't throw
exceptions. Make this explicit in the type to allow selection of better
code paths if possible in implementations.
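A sketch of what this looks like, with a simplified stand-in type:

#include <cstddef>
#include <cstdint>
#include <functional>

struct ShaderDiskCacheUsage {
    std::uint64_t unique_identifier = 0;
};

namespace std {
template <>
struct hash<ShaderDiskCacheUsage> {
    // noexcept lets implementations pick better code paths when available.
    std::size_t operator()(const ShaderDiskCacheUsage& usage) const noexcept {
        return std::hash<std::uint64_t>{}(usage.unique_identifier);
    }
};
} // namespace std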
We don't need to load the code into a vector and then construct a string
over the data. We can just create a string with the necessary size ahead
of time, and read the data directly into it, getting rid of an
unnecessary heap allocation.
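The shape of the change, with std::ifstream standing in for the actual
file API used:

#include <cstddef>
#include <fstream>
#include <optional>
#include <string>

std::optional<std::string> ReadShaderCode(std::ifstream& file, std::size_t code_size) {
    // Size the string up front and read directly into it, instead of
    // reading into a std::vector and constructing a string from it.
    std::string code(code_size, '\0');
    if (!file.read(code.data(), static_cast<std::streamsize>(code.size()))) {
        return std::nullopt;
    }
    return code;
}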
std::move has no effect when applied to a const variable: resources
can't be moved out of an immutable object, so a copy is made instead.
With this change, we no longer make several unnecessary heap
allocations and copies.
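A minimal illustration of the pitfall:

#include <string>
#include <utility>
#include <vector>

void Example() {
    const std::string code = "some shader source";
    std::vector<std::string> storage;

    // 'code' is const, so the move constructor can't bind to it; despite
    // the std::move, push_back ends up making a full copy.
    storage.push_back(std::move(code));
}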
Booleans don't have a guaranteed size, but we still want to have them
integrate into the disk cache system without needing to actually use a
different type. We can do this by supplying non-template overloads for
the bool type.
Non-template overloads always have precedence during function
resolution, so this is safe to provide.
This gets rid of the need to scatter ternary conditionals around, as
well as the need to use u8 types to store the values in.
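A sketch of the overload pair (class and function names invented):

#include <cstddef>
#include <vector>

class ShaderDiskCacheFile {
public:
    template <typename T>
    void SaveObject(const T& object) {
        SaveRaw(&object, sizeof(T));
    }

    // Non-template overload: bool gets an explicit, fixed on-disk size,
    // and this overload always wins resolution over the template above.
    void SaveObject(bool object) {
        const auto value = static_cast<unsigned char>(object);
        SaveRaw(&value, sizeof(value));
    }

private:
    void SaveRaw(const void* data, std::size_t size) {
        const auto* begin = static_cast<const unsigned char*>(data);
        buffer.insert(buffer.end(), begin, begin + size);
    }

    std::vector<unsigned char> buffer;
};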
These are only used from within this translation unit, so they don't
need to have external linkage. They were intended to be marked this
way anyway, to be consistent with the other service functions.
Renames the members to more accurately indicate what they signify.
"OneShot" and "Sticky" are kind of ambiguous identifiers for the reset
types, and can be kind of misleading. Automatic and Manual communicate
the kind of reset type in a clearer manner. Either the event is
automatically reset, or it isn't and must be manually cleared.
The "OneShot" and "Sticky" terminology is just a hold-over from Citra
where the kernel had a third type of event reset type known as "Pulse".
Given the Switch kernel only has two forms of event reset types, we
don't need to keep the old terminology around anymore.
This reduces the boilerplate of services having to write out the
current thread explicitly. Using the current thread instead of the
client thread is also semantically incorrect, and will be a problem
when we implement multicore (at which point there will be multiple
current threads).
Nvidia's proprietary driver creates a real OpenGL compatibility profile
without this option, whereas Intel (and probably AMD, though I haven't
tested it) requires that
QSurfaceFormat::FormatOption::DeprecatedFunctions be explicitly
enabled.
This was reduced because it triggers in most games, and at such a
constant rate that it heavily affected performance for the end user. In
general, we are well aware of the assert, and an implementation is
already planned.
Avoids inlining destruction logic where applicable, and also makes
forward declarations not cause unexpected compilation errors depending
on where the State class is used.
Lessens the amount of code that needs to be read, and gets rid of the
need to introduce an indexing variable. Instead, we just operate on the
objects directly.
std::memset is used to clear the entire register structure, which
requires that the Regs struct be trivially copyable (otherwise undefined
behavior is invoked). This prevents the case where a non-trivial type is
potentially added to the struct.
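A static_assert makes the requirement explicit (Regs shown here as a
placeholder):

#include <cstring>
#include <type_traits>

struct Regs {
    unsigned int raw[0x100];   // placeholder register storage
};

// If a non-trivial member is ever added to Regs, this fails to compile
// instead of letting std::memset silently invoke undefined behavior.
static_assert(std::is_trivially_copyable_v<Regs>, "Regs must be trivially copyable");

void ResetRegs(Regs& regs) {
    std::memset(&regs, 0, sizeof(regs));
}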
std::move within a copy constructor (on a data member that isn't
mutable) will always result in a copy. Because of that, the behavior of
this copy constructor is identical to the one that would be generated
automatically by the compiler, so we can remove it.
This corrects cases where it was possible to write more entries into the
write buffer than were requested. Now, we check the size of the buffer
before actually writing into it.
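A generic sketch of that guard (the real service code operates on its
own buffer types, so the names here are illustrative):

#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

std::size_t WriteLanguageCodes(const std::vector<std::uint64_t>& codes,
                               std::size_t requested,
                               std::vector<std::uint64_t>& out) {
    // Never write more entries than the output buffer can actually hold,
    // even if more were requested.
    const std::size_t count = std::min({requested, codes.size(), out.size()});
    std::copy_n(codes.begin(), count, out.begin());
    return count;
}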
We were also returning the wrong value for
GetAvailableLanguageCodeCount2(). This was previously returning 64, but
only 17 should have been returned. 64 entries is the size of the static
array used in MakeLanguageCode() within the service binary itself, but
isn't the actual total number of language codes present.
Makes the class less surprising when it comes to forward declaring the
type, and also prevents inlining the destruction code of the class,
given it contains non-trivial types.
These are able to be omitted from the declaration of functions, since
they don't do anything at the type system level. The definitions of the
functions can retain the use of const though, since they make the
variables immutable in the implementation of the function where they're
used.
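For example:

// Declaration: top-level const on a parameter has no effect on the
// function's signature, so it can be omitted.
void LogSetting(int value);

// Definition: const can stay here, keeping the parameter immutable
// within the function body.
void LogSetting(const int value) {
    // use 'value' without being able to reassign it
}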
Instead of retrieving the data from the std::variant instance, we can
just check if the variant contains that type of data.
This is essentially the same behavior, only it returns a bool indicating
whether or not the type in the variant is currently active, instead of
actually retrieving the data.
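That is (with stand-in alternatives):

#include <variant>

using Tracked = std::variant<int, float>;

bool IsFloat(const Tracked& value) {
    // Reports whether the alternative is active without retrieving the
    // data, as std::get/std::get_if would.
    return std::holds_alternative<float>(value);
}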
By default, MSVC doesn't use standards-compliant volatile semantics.
This makes it behave in a standards-compliant manner, making
expectations more uniform across compilers.
For similar reasons to the previous change, we move this to a single
function, so we don't need to duplicate the conversion logic in several
places within main.cpp.
Specifies the conversions explicitly to avoid implicit conversions from
const char* to QString. This makes it easier to disable implicit QString
conversions in the future.
In this case, the implicit conversion was technically wrong as well. The
implicit conversion treats the input strings as ASCII characters. This
would result in an incorrect conversion being performed in the rare case
a branch name was created with a non-ASCII Unicode character, likely
resulting in junk being displayed.
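A sketch of the explicit form (function and variable names invented):

#include <QString>

QString BuildWindowTitle(const char* branch_name, const char* description) {
    // Spell out the UTF-8 conversion instead of relying on the implicit
    // const char* -> QString cast.
    return QStringLiteral("yuzu | %1 - %2")
        .arg(QString::fromUtf8(branch_name), QString::fromUtf8(description));
}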
Over time, our config values have grown quite numerous. Unfortunately,
this also makes the single functions we have for loading and saving
values more error-prone.
For example, we were loading the core settings twice when they only
should have been loaded once. In another section, a variable was
shadowing another variable used to load settings from a completely
different section.
Finally, in one other case, there was an extraneous endGroup() call
that didn't need to be made. This was essentially dead code and also a
bug waiting to happen.
This separates the section loading code into its own separate functions.
This keeps variables visible only to the code that actually needs them,
and makes it much easier to visually see the end of each individual
configuration group. It also makes it much easier to visually catch bugs
during code review.
While we're at it, this also uses QStringLiteral instead of raw string
literals, which both avoids constructing a lot of QString instances, but
also makes it much easier to disable implicit ASCII to QString and
vice-versa in the future via setting QT_NO_CAST_FROM_ASCII and
QT_NO_CAST_TO_ASCII as compilation flags.
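A rough sketch of the shape of one of these readers (group and key
names assumed):

#include <QSettings>
#include <QString>

struct CoreSettings {
    bool use_multi_core = false;
};

// One reader per section: the beginGroup()/endGroup() pair is clearly
// scoped, and the keys are QStringLiterals rather than raw literals.
void ReadCoreValues(QSettings& qt_config, CoreSettings& settings) {
    qt_config.beginGroup(QStringLiteral("Core"));
    settings.use_multi_core =
        qt_config.value(QStringLiteral("use_multi_core"), false).toBool();
    qt_config.endGroup();
}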
The C++ standard allows constexpr variables declared with the extern
keyword to have external linkage. Previously MSVC wasn't abiding by
this. This just makes the compiler more standards compliant during
builds.
Given we currently don't make use of anything that would break by this,
this is safe to enable.
The backend is not used until we decide to submit the
testcase/telemetry, and creating it early prevents users from properly
updating their credentials while a game is running.
Also introduced in REV5 was a variable-size audio command buffer. This
also affects how the size of the work buffer should be determined, so we
can add handling for this as well.
Thankfully, no other alterations were made to how the work buffer size
is calculated in 7.0.0-8.0.0. There were indeed changes made to how
some of the actual audio commands are generated (particularly in REV7),
however they don't apply here.
Introduced in REV5. This is trivial to add support for, now that
everything isn't a mess of random magic constant values.
As far as this function is concerned, all this amounts to is a change
in data type sizes.
"Unmagics" quite a few magic constants within this code, making it much
easier to understand. Particularly given this factors out specific
sections into their own self-contained lambda functions.
Instead of asserting on already stored shader variants, silently skip
them. This shouldn't be happening, but when a shader is invalidated and
not stored in the shader cache, this assert would be hit, and the
shader would be saved anyway when asserts are disabled.
These are actually quite important indicators of thread lifetimes, so
they should be going into the debug log, rather than being treated as
misc info and delegated to the trace log.
Makes the code much nicer to follow in terms of behavior and control
flow. It also fixes a few bugs in the implementation.
Notably, the thread's owner process shouldn't be accessed in order to
retrieve the core mask or ideal core. This should be done through the
current running process. The only reason this bug wasn't encountered yet
is because we currently only support running one process, and thus every
owner process will be the current process.
We also weren't checking against the process' CPU core mask to see if an
allowed core is specified or not.
With this out of the way, it'll be less noisy to implement proper
handling of the affinity flags internally within the kernel thread
instances.
Provides serialization/deserialization of the database to system save
files, accessors for database state, and proper handling of both major
Mii formats (MiiInfo and MiiStoreData).
This option allows picking the compatibility profile, since a lot of
bugs are fixed in it. We devs will use this option to more easily debug
current problems in our Core implementation.
Flushing is now the responsibility of the child caches instead of the
cache object. This change will allow a specific cache to pass extra
parameters when flushing, and will allow more flexibility.
This is a holdover from Citra, where the 3DS has both
WaitSynchronization1 and WaitSynchronizationN. The Switch only has one
form of wait synchronization (literally named WaitSynchronization).
This allows us to throw out code that doesn't apply at all to the
Switch kernel.
Because of this unnecessary dichotomy within the wait synchronization
utilities, we were also neglecting to properly handle waiting on
multiple objects.
While we're at it, we can also scrub out any lingering references to
WaitSynchronization1/WaitSynchronizationN in comments, and change them
to WaitSynchronization (or remove them if the mention no longer
applies).
The actual behavior of this function is slightly more complex than what
we're currently doing within the supervisor call. To avoid dumping most
of this behavior in the supervisor call itself, we can migrate this to
another function.
The default constructor will always run, even when not specified, so
this is redundant.
However, the context member can indeed be initialized in the constructor
initializer list.
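As a generic illustration of the distinction (types invented):

#include <memory>
#include <string>

struct Frontend {
    std::string window_title;      // default-constructed automatically
    std::unique_ptr<int> context;

    // Listing 'window_title{}' here would be redundant; its default
    // constructor runs regardless. The context member, on the other
    // hand, genuinely belongs in the initializer list.
    Frontend() : context{std::make_unique<int>(0)} {}
};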
Previously we were building with MBCS, which is pretty undesirable. We
want the application to be Unicode-aware in general.
Currently, we make the command line variant of yuzu use ANSI variants of
the non-standard getopt functions that we link in for Windows, given we
only have an ANSI option-set.
We should really replace getopt with a library that we make all build
types of yuzu link in, but this will have to do for the time being.
Operations done before the main half float operation (like HAdd) were
managing a packed value instead of the unpacked one. Adding an unpacked
operation allows us to drop the per-operand MetaHalfArithmetic entry,
simplifying the code overall.
This is a compile definition introduced in Qt 4.8 for reducing the total
potential number of strings created when performing string
concatenation. This allows for less memory churn.
This can be read about here:
https://blog.qt.io/blog/2011/06/13/string-concatenation-with-qstringbuilder/
For a change that isn't source-compatible, we only had one occurrence
that actually needed to have its type clarified, which is pretty good
as far as transitioning goes.
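For illustration (names invented): with QT_USE_QSTRINGBUILDER defined,
chained operator+ between QStrings becomes a lazily evaluated
QStringBuilder expression, so the result below is built with a single
allocation instead of one per '+':

#include <QChar>
#include <QString>

QString JoinPath(const QString& directory, const QString& filename) {
    return directory + QLatin1Char('/') + filename;
}

The one source-compatibility catch is that auto would deduce the
builder type rather than QString, which is presumably the kind of
occurrence that needed its type clarified.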
This member variable is entirely unused. It was only set but never
actually utilized. Given that, we can remove it to get rid of noise in
the thread interface.
Essentially performs the inverse of svcMapProcessCodeMemory. This unmaps
the aliasing region first, then restores the general traits of the
aliased memory.
What this entails is:
- Restoring Read/Write permissions to the VMA.
- Restoring its memory state to reflect it as a general heap memory region.
- Clearing the memory attributes on the region.
Uses arithmetic that can be identified more trivially by compilers for
optimizations. e.g. Rather than shifting the halves of the value and
then swapping and combining them, we can swap them in place.
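For reference, a byte swap written in this mask-and-shift style (not
necessarily the exact code in question) looks like:

#include <cstdint>

constexpr std::uint32_t swap32(std::uint32_t value) {
    // Every byte is masked and moved to its mirrored position in one
    // expression, a pattern compilers readily collapse into bswap.
    return ((value & 0x000000FFU) << 24) | ((value & 0x0000FF00U) << 8) |
           ((value & 0x00FF0000U) >> 8) | ((value & 0xFF000000U) >> 24);
}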
e.g. for the original swap32 code on x86-64, clang 8.0 would generate:
mov ecx, edi
rol cx, 8
shl ecx, 16
shr edi, 16
rol di, 8
movzx eax, di
or eax, ecx
ret
while GCC 8.3 would generate the ideal:
mov eax, edi
bswap eax
ret
now both generate the same optimal output.
MSVC used to generate the following with the old code:
mov eax, ecx
rol cx, 8
shr eax, 16
rol ax, 8
movzx ecx, cx
movzx eax, ax
shl ecx, 16
or eax, ecx
ret 0
Now MSVC also generates a similar, but equally optimal result as clang/GCC:
bswap ecx
mov eax, ecx
ret 0
====
In the swap64 case, for the original code, clang 8.0 would generate:
mov eax, edi
bswap eax
shl rax, 32
shr rdi, 32
bswap edi
or rax, rdi
ret
(almost there, but still missing the mark)
while, again, GCC 8.3 would generate the more ideal:
mov rax, rdi
bswap rax
ret
now clang also generates the optimal sequence for this fallback as well.
This is a case where MSVC unfortunately falls short; despite the new
code, it still generates a doozy of an output.
mov r8, rcx
mov r9, rcx
mov rax, 71776119061217280
mov rdx, r8
and r9, rax
and edx, 65280
mov rax, rcx
shr rax, 16
or r9, rax
mov rax, rcx
shr r9, 16
mov rcx, 280375465082880
and rax, rcx
mov rcx, 1095216660480
or r9, rax
mov rax, r8
and rax, rcx
shr r9, 16
or r9, rax
mov rcx, r8
mov rax, r8
shr r9, 8
shl rax, 16
and ecx, 16711680
or rdx, rax
mov eax, -16777216
and rax, r8
shl rdx, 16
or rdx, rcx
shl rdx, 16
or rax, rdx
shl rax, 8
or rax, r9
ret 0
which is pretty unfortunate.
This gives us significantly more control over where in the
initialization process we start execution of the main process.
Previously we were running the main process before the CPU or GPU
threads were initialized (not good). This amends execution to start
after all of our threads are properly set up.
Initially required due to the split codepath with how the initial main
process instance was initialized. We used to initialize the process
like:
Init() {
    main_process = Process::Create(...);
    kernel.MakeCurrentProcess(main_process.get());
}

Load() {
    const auto load_result = loader.Load(*kernel.GetCurrentProcess());
    if (load_result != Loader::ResultStatus::Success) {
        // Handle error here.
    }
    ...
}
which presented a problem.
Setting a created process as the main process would set the page table
for that process as the main page table. This is fine... until we get to
the part where the page table can have its size changed in the Load()
function via NPDM metadata, which can dictate either a 32-bit, 36-bit,
or 39-bit usable address space.
Now that we have full control over the process' creation in load, we can
simply set the initial process as the main process after all the loading
is done, reflecting the potential page table changes without any
special-casing behavior.
We can also remove the cache flushing within LoadModule(), as execution
wouldn't have even begun yet during all usages of this function, now
that we have the initialization order cleaned up.
Now that we have dependencies on the initialization order, we can move
the creation of the main process to a more sensible area: where we
actually load in the executable data.
This allows localizing the creation and loading of the process in one
location, making the initialization of the process much nicer to trace.
Like with CPU emulation, we generally don't want to fire off the
threads immediately after the relevant classes are initialized; we want
to do this only after all necessary data is done loading.
This splits the thread creation into its own interface member function
to allow controlling when these threads in particular get created.
Our initialization process is a little wonkier than one would expect
when it comes to code flow. We initialize the CPU last, as opposed to
real hardware, where the CPU obviously needs to come up first or
nothing else would work, and we have code that adds checks to get
around this.
For example, in the page table setting code, we check to see if the
system is turned on before we even notify the CPU instances of a page
table switch. This results in dead code (at the moment), because the
only time a page table switch will occur is when the system is *not*
running, preventing the emulated CPU instances from being notified of a
page table switch in a convenient manner (technically the code path
could be taken, but we don't emulate the process creation svc handlers
yet).
This moves the threads creation into its own member function of the core
manager and restores a little order (and predictability) to our
initialization process.
Previously, in the multi-threaded cases, we'd kick off several threads
before even the main kernel process was created and ready to execute (gross!).
Now the initialization process is like so:
Initialization:
1. Timers
2. CPU
3. Kernel
4. Filesystem stuff (kind of gross, but can be amended trivially)
5. Applet stuff (ditto in terms of being kind of gross)
6. Main process (will be moved into the loading step in a following
change)
7. Telemetry (this should be initialized last in the future).
8. Services (4 and 5 should ideally be alongside this).
9. GDB (gross. Uses namespace scope state. Needs to be refactored into a
class or booted altogether).
10. Renderer
11. GPU (will also have its threads created in a separate step in a
following change).
Which... isn't *ideal* per se, but pulling the wonky intertwining of
CPU state initialization out of this mix gets rid of most of the
footguns in our initialization process.