Makes it more obvious that this function is intended to stand in for
the actual supervisor call itself, rather than act as a general heap
allocation function.
Also, the following change will merge the freeing behavior of HeapFree
into this function, so leaving it named HeapAllocate would be misleading.
Another holdover from Citra that can be tossed out is the notion of the
heap needing to be allocated at different addresses. On the Switch, the
base address of the heap is always managed by the kernel's memory
allocator, so it doesn't need to be specified in the function's
interface itself.
The heap on the Switch is always allocated with read/write permissions,
so we don't need to specify the memory permissions as part of the heap
allocation interface either.
This also corrects the error code returned from within the function: if
the requested heap size is larger than the entire heap region, the
kernel reports an out-of-memory condition.
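Roughly, the reduced interface could end up looking like the sketch below;
names such as SetHeapSize, GetHeapRegionSize, ReserveHeap, and
ERR_OUT_OF_MEMORY are illustrative assumptions rather than the exact
identifiers in the codebase:

ResultVal<VAddr> VMManager::SetHeapSize(u64 size) {
    // No address or permission parameters: the base address is chosen by the
    // kernel's allocator and the heap is always mapped read/write.
    if (size > GetHeapRegionSize()) {
        return ERR_OUT_OF_MEMORY;
    }

    // Growing and shrinking (the old HeapFree behavior) are both folded in here.
    ReserveHeap(size);
    return MakeResult<VAddr>(GetHeapRegionBaseAddress());
}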
Rather than make a global accessor for this sort of thing, we can make
it part of the thread interface itself. This allows getting rid of a
hidden global accessor in the kernel code.
Makes it an instantiable class like it is in the actual kernel. This
will also allow removing reliance on global accessors in a following
change, now that we can encapsulate a reference to the system instance
in the class.
Within the kernel, shared memory and transfer memory facilities exist as
completely different kernel objects. They also have different validity
checking. Therefore, we shouldn't be treating the two as the same kind
of memory.
They also differ in behavior: shared memory is intended for sharing
memory between processes, while transfer memory is intended for
transferring memory to other processes.
This breaks out the handling for transfer memory into its own class and
treats it as its own kernel object. This is also important when we
consider resource limits, particularly because transfer memory is
limited by the resource limit value set for it.
While we don't handle resource limit testing against objects yet (we do
allow setting them), this will make implementing that behavior much
easier in the future, as we won't need to distinguish between shared
memory and transfer memory allocations in the same place.
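A minimal sketch of the separation is below; the member functions and data
members are illustrative, not a transcription of the real class:

class TransferMemory final : public Object {
public:
    static SharedPtr<TransferMemory> Create(KernelCore& kernel, VAddr base_address,
                                            u64 size, MemoryPermission permissions);

    // Transfer memory gets its own validity checks and, eventually, its own
    // accounting against the transfer-memory resource limit.
    ResultCode MapMemory(VAddr address, u64 size, MemoryPermission permissions);
    ResultCode UnmapMemory(VAddr address, u64 size);

private:
    VAddr base_address = 0;
    u64 memory_size = 0;
    MemoryPermission owner_permissions{};
    Process* owner_process = nullptr;
};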
Now that we have the address arbiter extracted into its own class, we
can fix an inaccuracy in our kernel implementation: there isn't only one
address arbiter. Each process instance contains its own AddressArbiter
instance in the actual kernel.
This fixes that and gets rid of another long-standing issue that could
arise when attempting to create more than one process.
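Sketched out, the ownership now looks roughly like this (a trimmed-down,
illustrative declaration rather than the full Process interface):

class Process final : public WaitObject {
public:
    AddressArbiter& GetAddressArbiter() {
        return address_arbiter;
    }

private:
    // One arbiter per process, as in the real kernel, instead of a single
    // global instance shared by everything.
    AddressArbiter address_arbiter;
};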
Similar to how WaitForAddress was isolated to its own function, we can
also move the necessary conditional checking into the address arbiter
class itself, allowing us to hide its implementation details from public
use.
Rather than let the service call itself work out which function is the
proper one to call, we can make that a behavior of the arbiter itself,
so we don't need to directly expose those implementation details.
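For example, the selection could sit behind a single entry point on the
arbiter, so the SVC handler only forwards the raw type value. The enum and
function names below are illustrative:

ResultCode AddressArbiter::SignalToAddress(VAddr address, SignalType type, s32 value,
                                           s32 num_to_wake) {
    switch (type) {
    case SignalType::Signal:
        return SignalToAddressOnly(address, num_to_wake);
    case SignalType::IncrementAndSignalIfEqual:
        return IncrementAndSignalToAddressIfEqual(address, value, num_to_wake);
    case SignalType::ModifyByWaitingCountAndSignalIfEqual:
        return ModifyByWaitingCountAndSignalToAddressIfEqual(address, value, num_to_wake);
    default:
        return ERR_INVALID_ENUM_VALUE;
    }
}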
Places all of the functions for address arbiter operation into a class.
This will be necessary for future deglobalizing efforts related to both
memory and the system itself.
Removes a few inclusion dependencies from the headers or replaces
existing ones with ones that don't indirectly include the required
headers.
This allows removing an inclusion of core/memory.h, meaning that if the
memory header is ever changed in the future, it won't result in
rebuilding the entirety of the HLE services (as the IPC headers are used
quite ubiquitously throughout the HLE service implementations).
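As a generic illustration of the technique (the type names here are made
up, not the actual IPC helper types): a forward declaration is enough when
the header only deals in references or pointers, and the full definition is
included only in the .cpp files that need it.

namespace Memory {
class Accessor;  // hypothetical type, forward declared instead of included
}

class ResponseBuilder {
public:
    explicit ResponseBuilder(Memory::Accessor& memory) : memory{&memory} {}

private:
    Memory::Accessor* memory;
};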
Gets rid of the largest set of mutable global state within the core.
This also paves a way for eliminating usages of GetInstance() on the
System class as a follow-up.
Note that no behavioral changes have been made, and this simply extracts
the functionality into a class. This also has the benefit of making
dependencies on the core timing functionality explicit within the
relevant interfaces.
Places all of the timing-related functionality under the existing Core
namespace to keep things consistent, rather than having the timing
utilities sit in their own completely separate namespace.
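Roughly, the former free functions and file-scope state become members of
an instantiable class; the member names below are an illustrative sketch,
not the exact interface:

namespace Core::Timing {

class CoreTiming {
public:
    void Initialize();
    void Shutdown();

    EventType* RegisterEvent(const std::string& name, TimedCallback callback);
    void ScheduleEvent(s64 cycles_into_future, const EventType* event_type, u64 userdata = 0);
    u64 GetTicks() const;

private:
    // Formerly file-scope globals, now per-instance state.
    s64 global_timer = 0;
    std::vector<Event> event_queue;
};

} // namespace Core::Timing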
Looking into the implementation of the C++ standard facilities that seem
to be within all modules, it appears that they use 7 as a break reason
to indicate an uncaught C++ exception.
This was primarily found via the third-to-last function called within
Horizon's equivalent of libcxxabi's demangling_terminate_handler(),
which passes the value 0x80000007 to svcBreak.
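In other words, svcBreak can treat reason 7 like the other known break
reasons. A hedged sketch follows; treating the top bit as a flag OR'd onto
the base reason is an assumption based on the 0x80000007 value above, and
the handler name and log macro/class are illustrative:

void Break(u32 reason, u64 info1, u64 info2) {
    constexpr u32 flag_mask = 0x80000000;
    const u32 break_reason = reason & ~flag_mask;

    if (break_reason == 7) {
        LOG_CRITICAL(Debug_Emulated,
                     "Emulated program broke due to an uncaught C++ exception");
    }
}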
This is a bounds check to ensure that the thread priority is within the
valid range of 0-64. If it exceeds 64, that doesn't necessarily mean
that an actual priority of 64 was expected (it actually means whoever
called the function screwed up their math).
Instead clarify the message to indicate the allowed range of thread
priorities.
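In code terms, something along these lines; the constant, error code, and
log class are illustrative assumptions:

if (priority > THREADPRIO_LOWEST) {
    LOG_ERROR(Kernel_SVC,
              "Invalid thread priority specified, priority={} (must be within the range 0-64)",
              priority);
    return ERR_INVALID_THREAD_PRIORITY;
}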
Now that we handle the kernel capability descriptors, we can correct
CreateThread to properly check against the core and priority masks
like the actual kernel does.
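Sketched out, the checks look roughly like this (the accessor names on the
capability object are assumptions):

const auto& capabilities = current_process->GetCapabilities();

if (((1ULL << processor_id) & capabilities.GetCoreMask()) == 0) {
    return ERR_INVALID_PROCESSOR_ID;
}

if (((1ULL << priority) & capabilities.GetPriorityMask()) == 0) {
    return ERR_INVALID_THREAD_PRIORITY;
}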
This makes the naming more closely match its meaning. It's just a
preferred core, not a required default core. This also makes the usages
of this term consistent across the thread and process implementations.
In all cases where these functions are needed, the VMManager can just be
retrieved and used instead of providing the same functions in Process'
interface.
This also makes it a little nicer dependency-wise, since it gets rid of
cases where code would use the VMManager interface and then switch over
to the Process interface. Instead, all accesses are uniform and use the
VMManager instance for all necessary tasks.
All the basic memory mapping functions did was forward to the Process'
VMManager instance anyway.
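As a hypothetical illustration of the kind of forwarding that goes away
(the function name is an example, not necessarily one of the removed
members):

// Before: Process simply forwarded to its VMManager.
ResultCode Process::MirrorMemory(VAddr dst, VAddr src, u64 size) {
    return vm_manager.MirrorMemory(dst, src, size);
}

// After: callers retrieve the VMManager and use it directly.
auto& vm_manager = process.VMManager();
vm_manager.MirrorMemory(dst, src, size);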
If a thread handle is passed to svcGetProcessId, the kernel attempts to
access the process ID via the thread instance's owning process.
Technically, this function should also handle kernel debug objects;
however, we don't handle those kernel objects yet, so I've left a note
via a comment to remind myself when implementing them in the future.
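A trimmed-down sketch of the lookup order; the identifiers are illustrative
rather than exact:

static ResultCode GetProcessId(u64* out_process_id, Handle handle) {
    const auto& handle_table = Core::CurrentProcess()->GetHandleTable();

    // A process handle yields its own ID directly.
    if (const SharedPtr<Process> process = handle_table.Get<Process>(handle)) {
        *out_process_id = process->GetProcessID();
        return RESULT_SUCCESS;
    }

    // A thread handle yields the ID of the thread's owning process.
    if (const SharedPtr<Thread> thread = handle_table.Get<Thread>(handle)) {
        *out_process_id = thread->GetOwnerProcess()->GetProcessID();
        return RESULT_SUCCESS;
    }

    // TODO: kernel debug objects should also be handled here once they exist.
    return ERR_INVALID_HANDLE;
}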
In the previous change, the memory writing was moved into the service
function itself; however, it still had a problem: the entire MemoryInfo
structure wasn't being written out, only the first 32 bytes of it. We
still need to write out the trailing two reference count members and
zero out the padding bits.
Not doing this can result in wrong behavior in userland code in the following
scenario:
MemoryInfo info; // Put on the stack, not guaranteed to be zeroed out.
svcQueryMemory(&info, ...);
if (info.device_refcount == ...) // Whoops, uninitialized read.
This can also cause the wrong thing to happen if the user code uses
std::memcmp to compare the struct with another one (questionable, but
allowed), as the padding bits are not guaranteed to hold deterministic
values. Note that the kernel itself also fully zeroes out the structure,
including the padding bits, before writing it out.
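A sketch of the full write-out; the field names, layout, and source
variables are illustrative, with the key point being an explicit padding
member and value-initialization so every byte written out is deterministic:

struct MemoryInfo {
    u64 base_address;
    u64 size;
    u32 state;
    u32 attributes;
    u32 permission;
    u32 ipc_ref_count;
    u32 device_ref_count;
    u32 padding;  // written out as zero, just like the real kernel does
};
static_assert(sizeof(MemoryInfo) == 0x28, "MemoryInfo has incorrect size");

MemoryInfo memory_info{};  // value-initialized: refcounts and padding start at zero
memory_info.base_address = vma.base;
memory_info.size = vma.size;
memory_info.state = static_cast<u32>(vma.state);
memory_info.permission = static_cast<u32>(vma.permissions);
Memory::WriteBlock(memory_info_address, &memory_info, sizeof(MemoryInfo));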
Moves the memory writes directly into QueryProcessMemory instead of
letting the wrapper function do it. It would be inaccurate to let the
handler do it, because there are cases where memory shouldn't even be
written to, for example, if the given process handle is invalid.
However, if the memory writing is within the wrapper, then we have no
control over whether these memory writes occur, meaning that in an error
case, 68 bytes of memory get trashed with zeroes: 64 of those written to
wherever the memory info address points, and the remaining 4 written to
wherever the page info address points.
One solution in this case would be to just conditionally check within
the handler itself, but this is kind of smelly, given that the handler
shouldn't be performing conditional behavior itself; that's a behavior
of the managed function. In other words, if you remove the handler from
the equation entirely, does the function still retain its proper
behavior? In this case, no.
Now, we don't potentially trash memory from this function if an invalid
query is performed.
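The shape this ends up with, roughly (the identifiers and the error code
are illustrative):

static ResultCode QueryProcessMemory(VAddr memory_info_address, VAddr page_info_address,
                                     Handle process_handle, u64 address) {
    const auto& handle_table = Core::CurrentProcess()->GetHandleTable();
    const SharedPtr<Process> process = handle_table.Get<Process>(process_handle);

    // Bail out before touching guest memory: an invalid handle must not cause
    // 68 bytes to be zeroed at the caller-supplied addresses.
    if (!process) {
        return ERR_INVALID_HANDLE;
    }

    // Only after validation do we build the MemoryInfo/PageInfo from the
    // process' VMManager and write them to memory_info_address and
    // page_info_address.
    return RESULT_SUCCESS;
}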
Amends the MemoryState enum to use the same values as the actual kernel.
Also provides the necessary operators to operate on them.
This will be necessary in the future for implementing
svcSetMemoryAttribute, as memory block state is checked before applying
the attribute.
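The operator part is the usual bitwise-operator boilerplate for an enum
class. A self-contained sketch follows; the enumerator values here are
placeholders, not the real kernel constants:

#include <cstdint>

enum class MemoryState : std::uint32_t {
    Unmapped = 0x00,
    Code = 0x01,
    FlagMapped = 1u << 8,  // placeholder flag bit for illustration
};

constexpr MemoryState operator|(MemoryState lhs, MemoryState rhs) {
    return static_cast<MemoryState>(static_cast<std::uint32_t>(lhs) |
                                    static_cast<std::uint32_t>(rhs));
}

constexpr MemoryState operator&(MemoryState lhs, MemoryState rhs) {
    return static_cast<MemoryState>(static_cast<std::uint32_t>(lhs) &
                                    static_cast<std::uint32_t>(rhs));
}

// Usage: checking a state's flag bits before applying a memory attribute.
constexpr bool IsMapped(MemoryState state) {
    return (state & MemoryState::FlagMapped) != MemoryState::Unmapped;
}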
The Process object kept itself alive indefinitely because its handle_table
contained a SharedMemory object which owned a reference to the same Process
object, creating a circular ownership scenario.
Break that cycle up by storing only a non-owning pointer in the SharedMemory object.
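The general pattern, reduced to standard C++ (the real code uses the
kernel's intrusive SharedPtr rather than std::shared_ptr, and the members
here are simplified):

#include <memory>

class Process;

class SharedMemory {
public:
    // Non-owning back-reference: SharedMemory no longer keeps its owner alive.
    explicit SharedMemory(Process* owner) : owner_process{owner} {}

private:
    Process* owner_process;  // previously an owning reference to the Process
};

class Process {
public:
    void CreateSharedMemory() {
        // The Process owns the SharedMemory; the SharedMemory only points back.
        shared_memory = std::make_shared<SharedMemory>(this);
    }

private:
    std::shared_ptr<SharedMemory> shared_memory;  // stand-in for the handle table entry
};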