only slightly worse than GAE, because it focuses on translat-
ing system calls frequently used by games [55].
For efficiency, we conduct a pairwise comparison between
Trinity and each of the emulators in terms of the FPS of the
apps that both Trinity and the compared emulator can suc-
cessfully execute. On the high-end PC, Trinity outperforms
DAOW, Bluestacks, GAE, WSA, VMware, and QEMU-KVM
on their commonly compatible apps by an average of 6.1%, 9.8%,
164.8%, 34.1%, 8.6%, and 132.2%, respectively. We observe a
significant visual difference between Trinity and GAE, WSA,
and QEMU-KVM across all apps. We observe less visual dif-
ference between Trinity and DAOW, Bluestacks, and VMware
for many apps. However, the visual difference is very notice-
able especially on apps where Trinity performs more than 15
FPS better, for which there were 9, 12, and 5 apps for DAOW,
Bluestacks, and VMware, respectively. Regarding the average
FPS values of individual apps, we find that Trinity shows the
best efficiency on 76 of the apps. For the 24 apps on which
Trinity shows worse efficiency, the differences in the
apps' average FPS values are all less than 6 FPS, and 12
of them are in fact less than 1 FPS. On these apps, we find
no notable smoothness difference between
Trinity and the emulators that yield the best FPS.
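The pairwise methodology above can be sketched as follows. This is a minimal illustration, not the paper's measurement code; the app names and FPS values are hypothetical, and the only assumption carried over from the text is that each per-emulator improvement is averaged over just the apps both emulators can execute:

```python
# Hypothetical sketch of the pairwise FPS comparison: for each baseline
# emulator, average Trinity's per-app relative FPS gain over only the
# apps that both emulators can successfully execute.
def pairwise_improvement(trinity_fps, other_fps):
    """trinity_fps/other_fps: dicts mapping app name -> average FPS."""
    common = set(trinity_fps) & set(other_fps)  # co-compatible apps only
    gains = [(trinity_fps[a] - other_fps[a]) / other_fps[a] for a in common]
    return 100.0 * sum(gains) / len(gains)  # mean improvement in percent

# Illustrative (made-up) numbers; "app3" fails on the baseline, so it is
# excluded from the comparison rather than counted against either side.
trinity = {"app1": 60.0, "app2": 45.0, "app3": 30.0}
baseline = {"app1": 25.0, "app2": 15.0}
print(round(pairwise_improvement(trinity, baseline), 1))  # 170.0
```

Restricting the average to co-compatible apps keeps compatibility failures from distorting the efficiency comparison, which is why the per-emulator percentages in the text are computed over different app subsets.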
Similar trends can be observed on the middle-end
PC (as demonstrated in Figure 10b): Trinity outperforms
DAOW, Bluestacks, GAE, WSA, VMware, and QEMU-KVM
on the middle-end PC's commonly compatible apps by an
average of 4.9%, 16.1%, 168.7%, 84.6%, 17%, and 137.7%,
respectively. Also, although there are more (42) apps where
Trinity does not yield the best efficiency, the FPS differences
are still mostly insignificant, with 36 of them being less than
5 FPS. For the remaining 6 apps, DAOW has the best FPS
and outperforms Trinity by 6 to 9 FPS, though we could not
perceive any visual difference between the two. Careful ex-
amination of these apps' runtime behavior shows that they tend
to heavily stress the CPU, as their graphics scenes involve many
physics effects such as collisions and reflections, which re-
quire the CPU to perform heavy computations such as matrix
transformations. Thus, DAOW's direct interfacing with the
hardware CPU, without a virtualization layer, allows it to
perform better than Trinity (as well as the other emulators),
particularly given the middle-end PC's rather weak CPU. In
comparison, Trinity performs better than DAOW for all 6 of
these apps on the high-end PC.
Compatibility with Random 10K Apps.
For the apps randomly selected from Google Play, we can successfully install
all of them and run 97.2% of them without incurring app
crashes. Among the apps we cannot run, we find that some (2.3%)
have also exhibited crashes on real devices; in addi-
tion, 0.43% require special hardware that Trinity currently
does not implement, e.g., GPS, NFC, and various sensors, which
is not hard to fix given the general device extensibility of
QEMU that Trinity is built on. Finally, the remaining 0.07%
seem to actively avoid being run in an emulator by closing
themselves when they notice that certain hardware configura-
tions (e.g., the CPU specification listed in /proc/cpuinfo)
are those of an emulator, as complained in their runtime logs.
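A check of this kind can be sketched as below. This is an illustrative guess at the detection logic, not code recovered from any of the 0.07% of apps; the marker strings are assumptions (board names commonly associated with Android emulators), and `looks_like_emulator` is a hypothetical helper:

```python
# Illustrative sketch of emulator detection via /proc/cpuinfo.
# The marker strings are assumptions (commonly reported emulator board
# names), not extracted from any specific app's detection logic.
EMULATOR_MARKERS = ("goldfish", "ranchu", "qemu")

def looks_like_emulator(cpuinfo_text: str) -> bool:
    """Return True if the cpuinfo text mentions a suspected emulator marker."""
    text = cpuinfo_text.lower()
    return any(marker in text for marker in EMULATOR_MARKERS)

# On a device, such an app would read the real file and exit on a match:
# with open("/proc/cpuinfo") as f:
#     if looks_like_emulator(f.read()):
#         sys.exit(0)
synthetic = "processor\t: 0\nHardware\t: Goldfish\n"
print(looks_like_emulator(synthetic))  # True for this synthetic sample
```

Since the check only inspects guest-visible strings, an emulator that faithfully reproduces a real device's `/proc/cpuinfo` contents would evade it, which is consistent with only a small fraction of apps relying on this tactic.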
8.3 Performance Breakdown
To quantitatively understand the contributions of the proposed
mechanisms to Trinity's efficiency, we remove each of the
three major mechanisms of Trinity (i.e., projection space,
flow control, and data teleporting) in turn, and measure the re-
sulting efficiency degradation when running the top-100 3D
apps on the high-end PC. In detail, removing projection space
degrades Trinity to API remoting, whose guest-host control
and data exchanges are still backed by our data teleporting
mechanism. Removing data teleporting disables all the static
timing analysis logic apart from data aggregation, which al-
lows us to retain at least the data-transfer performance of
GAE, since GAE also adopts a moderate buffer to batch void API
calls. For data persistence and arrival notification, we adopt
control-flow blocking and VM Exits, following GAE's design.
Further, to fully demonstrate the efficiency impacts of the
three mechanisms, we also measure the performance break-
down when the maximum framerate restriction (which is 60
FPS) of the apps is removed. Note that we do not remove
this restriction when evaluating the top-100 3D apps in §8.2
since this requires source code modifications to the emulators,
while many of the emulators are proprietary (e.g., DAOW
and Bluestacks). Figure 11 depicts the average FPS values
of the top-100 3D apps in the breakdown experiments with
the 60-FPS framerate restriction, while Figure 12 shows the
results without the framerate restriction.
Projection Space.
After the projection space is removed, the average FPS drops
by 6.1× (8.6×) with (without) the framerate restriction;
projection space thus provides the most significant efficiency
benefit. This is not surprising, as our in-depth analysis of the API
call characteristics (by instrumenting our system graphics li-
brary as discussed in §2.2 during the breakdown experiments)
shows that with the projection space, 99.93% of graphics API
calls do not require synchronous host-side execution. The
remaining 0.07% are Type-1 calls related to the
context information we do not maintain in shadow contexts,
including the rendered pixels and the execution status of the GPU,
as discussed in §4.1.
Among these asynchronously-executed calls, 26% are di-
rectly resolved at the projection space (with our maintained
context and resource information), fundamentally avoiding
their needs for any host-side executions. Such calls are mostly
related to context manipulation and context/resource informa-
tion querying. The remaining 74% involve APIs
for resource allocation and population, as well as drawing
calls. We also measure the memory consumption of the added
projection space when running the top-100 3D apps by mon-
itoring the maximum memory consumed by our provided
296 16th USENIX Symposium on Operating Systems Design and Implementation USENIX Association