Commit Graph

79 Commits

Author SHA1 Message Date
0c512f91a1 Fix #1643
This adds an extra op to the translator so that the block hook can sync the PC at the very beginning
2025-01-18 15:07:22 +08:00
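A minimal sketch (hypothetical addresses and code bytes) of what the fix affects: a UC_HOOK_BLOCK callback that reads the PC, which after this change is already synced when the block hook runs.

```c
#include <unicorn/unicorn.h>
#include <inttypes.h>
#include <stdio.h>

// Block hook: with the PC synced at block entry, reading RIP here should
// match the `address` argument passed to the callback.
static void hook_block(uc_engine *uc, uint64_t address, uint32_t size,
                       void *user_data)
{
    uint64_t rip = 0;
    uc_reg_read(uc, UC_X86_REG_RIP, &rip);
    printf("block at 0x%" PRIx64 ", RIP=0x%" PRIx64 "\n", address, rip);
}

int main(void)
{
    uc_engine *uc;
    uc_hook hh;
    const char code[] = "\x90\x90"; // two nops (hypothetical test code)

    uc_open(UC_ARCH_X86, UC_MODE_64, &uc);
    uc_mem_map(uc, 0x1000, 0x1000, UC_PROT_ALL);
    uc_mem_write(uc, 0x1000, code, sizeof(code) - 1);
    uc_hook_add(uc, &hh, UC_HOOK_BLOCK, hook_block, NULL, 1, 0);
    uc_emu_start(uc, 0x1000, 0x1000 + sizeof(code) - 1, 0, 0);
    uc_close(uc);
    return 0;
}
```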
mio
6974b53588 Fix #2078
We should only go through the else branch for code_read
2025-01-04 15:57:02 +08:00
Michael-c0de
4f417c3f11 patch multiple UC_HOOK_MEM callbacks for unaligned access (#2063)
* patch multiple UC_HOOK_MEM callbacks for unaligned access

* update test_x86.c for #2063

* update test_x86.c for build on win

---------

Co-authored-by: yaojiale2024@iscas.ac.cn <yaojiale2024@iscas.ac.cn>
Co-authored-by: lazymio <mio@lazym.io>
2024-12-29 23:24:32 +08:00
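A hedged sketch of the scenario #2063 addresses: two UC_HOOK_MEM_WRITE hooks on adjacent ranges and an emulated store that crosses the boundary between them. The addresses, code bytes, and the expectation that both hooks fire for the spanning access are assumptions based on the patch description.

```c
#include <unicorn/unicorn.h>
#include <inttypes.h>
#include <stdio.h>

// Fires for emulated writes; with the #2063 fix, hooks registered on either
// side of an unaligned, boundary-crossing store should both be invoked.
static void hook_write(uc_engine *uc, uc_mem_type type, uint64_t address,
                       int size, int64_t value, void *user_data)
{
    printf("%s: write of %d bytes at 0x%" PRIx64 "\n",
           (const char *)user_data, size, address);
}

int main(void)
{
    uc_engine *uc;
    uc_hook h1, h2;
    const char code[] = "\x89\x08";             // mov [rax], ecx
    uint64_t rax = 0x1ffe, rcx = 0x11223344;    // store crosses the 0x2000 boundary

    uc_open(UC_ARCH_X86, UC_MODE_64, &uc);
    uc_mem_map(uc, 0x0, 0x4000, UC_PROT_ALL);
    uc_mem_write(uc, 0x100, code, sizeof(code) - 1);
    uc_reg_write(uc, UC_X86_REG_RAX, &rax);
    uc_reg_write(uc, UC_X86_REG_RCX, &rcx);

    // Two hooks on adjacent ranges; the unaligned store touches both.
    uc_hook_add(uc, &h1, UC_HOOK_MEM_WRITE, hook_write, "low range", 0x1000, 0x1fff);
    uc_hook_add(uc, &h2, UC_HOOK_MEM_WRITE, hook_write, "high range", 0x2000, 0x2fff);

    uc_emu_start(uc, 0x100, 0x100 + sizeof(code) - 1, 0, 0);
    uc_close(uc);
    return 0;
}
```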
958ed09153 No longer need SPRR and probe it at runtime 2024-12-07 23:33:34 +08:00
69200d4f00 Fix regression: If invalid instruction is handled, allow emulation to continue 2024-12-07 17:30:45 +08:00
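A sketch of the behaviour this regression fix restores, under the assumption that a ud2 opcode reaches the invalid-instruction path and that returning true from the hook resumes emulation from the (user-adjusted) PC; addresses and bytes are hypothetical.

```c
#include <unicorn/unicorn.h>
#include <stdbool.h>

// Returning true tells Unicorn the invalid instruction was handled, so
// emulation continues; returning false stops with UC_ERR_INSN_INVALID.
static bool hook_insn_invalid(uc_engine *uc, void *user_data)
{
    uint64_t rip = 0;
    uc_reg_read(uc, UC_X86_REG_RIP, &rip);
    rip += 2; // skip the 2-byte opcode planted below
    uc_reg_write(uc, UC_X86_REG_RIP, &rip);
    return true;
}

int main(void)
{
    uc_engine *uc;
    uc_hook hh;
    // ud2 followed by a nop; ud2 is assumed to trigger the invalid-insn hook
    const char code[] = "\x0f\x0b\x90";

    uc_open(UC_ARCH_X86, UC_MODE_64, &uc);
    uc_mem_map(uc, 0x1000, 0x1000, UC_PROT_ALL);
    uc_mem_write(uc, 0x1000, code, sizeof(code) - 1);
    uc_hook_add(uc, &hh, UC_HOOK_INSN_INVALID, hook_insn_invalid, NULL, 1, 0);

    // With the regression fixed, emulation continues past the invalid opcode.
    uc_emu_start(uc, 0x1000, 0x1000 + sizeof(code) - 1, 0, 0);
    uc_close(uc);
    return 0;
}
```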
3b2f54fc61 Fix regression: We should triage MIPS internal exceptions to Unicorn exceptions 2024-12-07 17:09:59 +08:00
tbodt
f71bc1a115 Several bugfixes (#2049)
* Remove global variable from aarch64 tcg target

This obviously breaks trying to run two unicorn instances at once on
aarch64. It appears a similar variable had already been moved to the
state struct for the i386 tcg target.

* Reenable writing to jit region while calling tb_add_jump

On arm macs, every place that writes to jit code needs to have
tb_exec_unlock called first. This is already in most necessary places,
but not this one.

* Don't forget to call restore_jit_state in uc_context_restore

Every time UC_INIT is used, restore_jit_state must be used on the return
path, or occasional assertion failures will pop up on arm macs.

* Restore pc before calling into tlb fill hook

In my application it is important to have correct pc values available
from this hook.
2024-11-04 12:53:26 +08:00
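A minimal sketch of the uc_context_save/uc_context_restore round trip that the restore_jit_state fix concerns; the register values are hypothetical, the calls are the public Unicorn API.

```c
#include <unicorn/unicorn.h>

int main(void)
{
    uc_engine *uc;
    uc_context *ctx;
    uint64_t rax = 0x1111, rax_back = 0;

    uc_open(UC_ARCH_X86, UC_MODE_64, &uc);
    uc_reg_write(uc, UC_X86_REG_RAX, &rax);

    // Snapshot the CPU state, clobber it, then restore it.
    uc_context_alloc(uc, &ctx);
    uc_context_save(uc, ctx);

    rax = 0x2222;
    uc_reg_write(uc, UC_X86_REG_RAX, &rax);

    uc_context_restore(uc, ctx);                // must leave JIT state consistent (see fix above)
    uc_reg_read(uc, UC_X86_REG_RAX, &rax_back); // rax_back == 0x1111 again

    uc_context_free(ctx);
    uc_close(uc);
    return 0;
}
```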
PhilippTakacs
ab23d4ceb0 Optimize Notdirty write (#2031)
* enable notdirty_write for snapshots when possible

Snapshots only happen when the priority of the memory region is smaller
than the snapshot_level. After a snapshot, notdirty can be set.

* disable notdirty_write for self modifying code

When SMC accesses the memory region more than once, the
tb must be rebuilt multiple times.

fixes #2029

* notdirty_write better hook check

Check all relevant memory hooks before enabling notdirty write.
This also checks whether the memory hook is registered for the affected
region, so it is possible to use notdirty write while still having hooks
on other addresses.

* notdirty_write check for addr_write in snapshot case

* self modifying code clear recursive mem access

When self-modifying code performs an unaligned memory access, uc->size_recur_mem
is sometimes changed but not changed back for notdirty write.
This causes memory hooks to be missed. To fix this, uc->size_recur_mem is
set to 0 before each cpu_exec() call.
2024-11-01 00:02:11 +08:00
851914c8d0 Fix segfault if tlb is flushed in the hooks 2024-10-06 23:31:46 +08:00
mio
920d076e51 Remove page-collection-locs 2024-09-21 22:03:44 +08:00
mio
6cc7e1d431 Also only reset if hooks are installed 2024-09-21 21:52:38 +08:00
mio
8816883bb3 Fix TLB for snapshots 2024-09-21 21:49:01 +08:00
mio
2cd227f804 Update symbols for tlb_reset_dirty_by_vaddr 2024-09-21 20:54:24 +08:00
Andrei Warkentin
d01035767e notdirty_write: fix store-related performance problems
Every store would always cause the tb_invalidate_phys_page_fast path to be invoked,
amounting to a 40x slowdown of stores compared to loads.

Change this code to only worry about TB invalidation for regions marked as
executable (i.e. emulated executable).

Even without uc_set_native_thunks, this change fixes most of the performance
issues seen with thunking to native calls.

Signed-off-by: Andrei Warkentin <andrei.warkentin@intel.com>
2024-09-21 20:50:43 +08:00
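If this reading of the change is right, the practical upshot is that data-only regions are best mapped without UC_PROT_EXEC, so stores into them no longer go through TB invalidation; a hypothetical mapping layout:

```c
#include <unicorn/unicorn.h>

int main(void)
{
    uc_engine *uc;
    uc_open(UC_ARCH_X86, UC_MODE_64, &uc);

    // Code region: executable, so writes here still invalidate TBs
    // (self-modifying code stays correct).
    uc_mem_map(uc, 0x100000, 0x10000, UC_PROT_READ | UC_PROT_EXEC);

    // Data/heap/stack regions: no UC_PROT_EXEC, so stores should skip the
    // tb_invalidate_phys_page_fast path after this change.
    uc_mem_map(uc, 0x200000, 0x100000, UC_PROT_READ | UC_PROT_WRITE);

    uc_close(uc);
    return 0;
}
```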
mio
e03109d8c9 Respect users' decision for UC_ERR_INSN_INVALID 2024-03-08 17:31:27 +08:00
71c729a9d7 Define HAVE_SPRR 2024-02-13 19:09:35 +08:00
b31081a105 Remove unused var 2024-02-13 14:38:48 +08:00
78ea3c8301 Fix m1 defines 2024-02-13 11:52:10 +08:00
a6fb2a6870 Save jit state before/after callback 2024-02-13 11:13:01 +08:00
822bb527f3 M1 W^X fully supported 2024-02-12 00:10:44 +08:00
Mario Haustein
9a2583e967 fix deprecated storage-class declarations 2023-10-08 13:40:23 +02:00
Mark Giraud
e189e1fb8b fix: Use correct addresses during memory cow 2023-08-23 10:18:42 +02:00
6e97e59f54 Fix building on Apple Silicon 2023-08-03 13:17:26 +08:00
Takacs, Philipp
80bd825420 implement simple memory snapshot mechanism
Uses copy-on-write to make it possible to restore the memory state after a snapshot
was made. On restore, all MemoryRegions created after the snapshot are removed.
2023-07-11 11:51:40 +02:00
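A sketch of how the copy-on-write snapshots might be driven from the public API. The uc_ctl_context_mode control and the UC_CTL_CONTEXT_MEMORY / UC_CTL_CONTEXT_CPU flags are assumptions based on the feature this commit introduces, as is the expectation that uc_context_save/restore then cover memory.

```c
#include <unicorn/unicorn.h>

int main(void)
{
    uc_engine *uc;
    uc_context *snap;
    char byte = 0;

    uc_open(UC_ARCH_X86, UC_MODE_64, &uc);

    // Assumption: memory snapshots are enabled via the context-mode control.
    uc_ctl_context_mode(uc, UC_CTL_CONTEXT_MEMORY | UC_CTL_CONTEXT_CPU);

    uc_mem_map(uc, 0x1000, 0x1000, UC_PROT_ALL);
    uc_mem_write(uc, 0x1000, "\x41", 1);

    uc_context_alloc(uc, &snap);
    uc_context_save(uc, snap);           // take the snapshot (memory is CoW from here)

    uc_mem_write(uc, 0x1000, "\x42", 1); // modify after the snapshot

    uc_context_restore(uc, snap);        // roll back memory changed after the snapshot
    uc_mem_read(uc, 0x1000, &byte, 1);   // byte == 0x41 again

    uc_context_free(snap);
    uc_close(uc);
    return 0;
}
```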
Takacs, Philipp
065af19dc5 use address_space_translate to find memory mapping
first version has bugs
2023-07-11 11:47:50 +02:00
mio
49ccbde2d0 Leave out essential files
Co-authored-by: ζeh Matt <5415177+ZehMatt@users.noreply.github.com>
2023-06-10 23:44:05 +02:00
mio
8dffbc159c Add uc_ctl_get/set_tcg_buffer_size
We still need this API because the virtual memory address space of a
32-bit OS is only 4GB, and by default we need 1G per instance

Credits to @ZehMatt for original idea

Co-authored-by: ζeh Matt <5415177+ZehMatt@users.noreply.github.com>
2023-06-10 23:36:02 +02:00
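A sketch of the API named in this commit, assuming the uc_ctl_set_tcg_buffer_size / uc_ctl_get_tcg_buffer_size convenience macros take and return the buffer size in bytes as a uint32_t.

```c
#include <unicorn/unicorn.h>
#include <stdio.h>

int main(void)
{
    uc_engine *uc;
    uint32_t size = 0;

    uc_open(UC_ARCH_X86, UC_MODE_64, &uc);

    // Shrink the translation buffer to 16 MB so several instances fit into a
    // 32-bit address space (the default per-instance buffer is much larger).
    uc_ctl_set_tcg_buffer_size(uc, 16 * 1024 * 1024);
    uc_ctl_get_tcg_buffer_size(uc, &size);
    printf("tcg buffer size: %u bytes\n", size);

    uc_close(uc);
    return 0;
}
```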
mio
f8c7969d65 Revert "Add uc_ctl_get/set_tcg_buffer_size"
This reverts commit 3145e3c426 because it was not
properly co-authored.
2023-06-10 23:29:56 +02:00
mio
3145e3c426 Add uc_ctl_get/set_tcg_buffer_size 2023-06-10 16:08:29 +02:00
mio
5057f9925b Fix typo 2023-06-10 15:26:29 +02:00
mio
9de80cb625 Correct calling convention 2023-06-10 15:03:59 +02:00
mio
3d5b2643f0 Support demand paging via closures and seh
Reverts 12a79192ee, which exploited the normal tcg mechanism

This uses a trampoline to pass extra data to SEH handlers
2023-06-10 14:04:56 +02:00
Choongwoo Han
cfaa5be912 Comment out more unused page lock functions 2023-05-26 12:52:25 -07:00
Choongwoo Han
75d26b7707 Ignore page_collection_lock 2023-05-23 13:11:36 -07:00
Takacs, Philipp
fa457a3a97 fix UC_MEM_WRITE_PROT callback
callbacks work on the physical address.
2023-05-22 15:38:37 +02:00
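A sketch of the case this fix concerns: a UC_HOOK_MEM_WRITE_PROT callback that reacts to a store into read-only memory; addresses and code bytes are hypothetical, and calling uc_mem_protect from inside the hook is one plausible way to resolve the fault.

```c
#include <unicorn/unicorn.h>
#include <inttypes.h>
#include <stdbool.h>
#include <stdio.h>

// Called when emulated code writes to read-only memory. Returning true after
// fixing the protection lets the access be retried; returning false stops
// emulation with a write-protection error.
static bool hook_write_prot(uc_engine *uc, uc_mem_type type, uint64_t address,
                            int size, int64_t value, void *user_data)
{
    printf("write-protect fault at 0x%" PRIx64 "\n", address);
    uc_mem_protect(uc, address & ~0xfffULL, 0x1000, UC_PROT_READ | UC_PROT_WRITE);
    return true;
}

int main(void)
{
    uc_engine *uc;
    uc_hook hh;
    const char code[] = "\x89\x08"; // mov [rax], ecx (hypothetical test store)
    uint64_t rax = 0x2000;

    uc_open(UC_ARCH_X86, UC_MODE_64, &uc);
    uc_mem_map(uc, 0x1000, 0x1000, UC_PROT_ALL);
    uc_mem_map(uc, 0x2000, 0x1000, UC_PROT_READ); // read-only data page
    uc_mem_write(uc, 0x1000, code, sizeof(code) - 1);
    uc_reg_write(uc, UC_X86_REG_RAX, &rax);

    uc_hook_add(uc, &hh, UC_HOOK_MEM_WRITE_PROT, hook_write_prot, NULL, 1, 0);
    uc_emu_start(uc, 0x1000, 0x1000 + sizeof(code) - 1, 0, 0);
    uc_close(uc);
    return 0;
}
```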
mio
994813a0e5 Also check cpu->stopped 2023-05-19 23:24:42 +02:00
Takacs, Philipp
4a7b3b7a3a fixup! load_helper only call cpu_loop_exit() when emulation is running 2023-05-12 12:36:16 +02:00
Takacs, Philipp
073c4b74ca load_helper only call cpu_loop_exit() when emulation is running
The load_helper is sometimes called from register writes. When the load
fails, check whether emulation is running before jumping out of the emulated code.
2023-05-09 14:58:40 +02:00
Mio
bbbc7856ac Invalidate tb cache once mapping is removed 2023-04-12 20:56:54 +08:00
Takacs, Philipp
4b327baaf7 make unicorn use the physical addresses
This allows emulating code which fully uses the MMU. This is necessary
for full system emulation.
2023-03-28 13:50:11 +02:00
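A heavily hedged sketch of how MMU-style emulation built on this physical-address plumbing is driven from the API in later Unicorn 2 releases. The uc_ctl_tlb_mode control, the UC_TLB_VIRTUAL mode, the UC_HOOK_TLB_FILL hook, and the uc_tlb_entry field names (paddr, perms) are all assumptions; the fixed-offset "page table" is made up.

```c
#include <unicorn/unicorn.h>
#include <stdbool.h>

// Translate guest-virtual to guest-physical addresses for the emulated MMU.
static bool tlb_fill(uc_engine *uc, uint64_t vaddr, uc_mem_type type,
                     uc_tlb_entry *entry, void *user_data)
{
    entry->paddr = vaddr - 0x400000; // hypothetical fixed-offset translation
    entry->perms = UC_PROT_ALL;
    return true;                     // translation succeeded
}

int main(void)
{
    uc_engine *uc;
    uc_hook hh;

    uc_open(UC_ARCH_X86, UC_MODE_64, &uc);

    // Physical backing at 0x0, code executed at virtual 0x400000.
    uc_mem_map(uc, 0x0, 0x1000, UC_PROT_ALL);
    uc_mem_write(uc, 0x0, "\x90\x90", 2); // two nops

    uc_ctl_tlb_mode(uc, UC_TLB_VIRTUAL);  // let the hook do address translation
    uc_hook_add(uc, &hh, UC_HOOK_TLB_FILL, tlb_fill, NULL, 1, 0);

    uc_emu_start(uc, 0x400000, 0x400002, 0, 0);
    uc_close(uc);
    return 0;
}
```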
Nguyen Anh Quynh
eb118528b1 rename memory_mapping() to find_memory_region() and simplify mem_map() 2023-02-06 17:59:16 +08:00
mio
a25adf84f0 Rename flags to avoid confusion 2023-01-28 22:18:39 +01:00
mio
12a79192ee Demand paging on Windows 2023-01-28 22:04:43 +01:00
mio
3ea7857be3 Exit early when invalid read happens
In this way, the target register won't be overwritten
2022-10-20 21:57:28 +02:00
Mio
092014a6cc Don't sync pc if user requests a restart 2022-08-31 23:27:05 +08:00
mio
2c00546c6e Merge rhelmot's fix 2022-08-14 13:35:54 +02:00
mio
8303328aa8 Obtain memory mapping after hooks are called 2022-08-14 12:42:53 +02:00
fdd129fd30 Remember the regions a hook has instrumented and clear cache on deletion 2022-06-02 14:46:02 +02:00
289034538d Cleaner implementation for uc_mem_prot on mmio regions 2022-05-28 23:46:06 +02:00
2a6529348c Support uc_mem_protect on mmio regions
Also make mmio ranges return the correct errors on wrong protection
2022-05-28 23:33:43 +02:00
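A sketch of the case this commit enables: an MMIO region whose protection is later tightened with uc_mem_protect, so that emulated writes fail with the expected protection error; callbacks and addresses are hypothetical.

```c
#include <unicorn/unicorn.h>
#include <inttypes.h>
#include <stdio.h>

// MMIO read callback: offset is relative to the start of the MMIO region.
static uint64_t mmio_read(uc_engine *uc, uint64_t offset, unsigned size,
                          void *user_data)
{
    printf("mmio read %u bytes at +0x%" PRIx64 "\n", size, offset);
    return 0x12345678;
}

// MMIO write callback.
static void mmio_write(uc_engine *uc, uint64_t offset, unsigned size,
                       uint64_t value, void *user_data)
{
    printf("mmio write 0x%" PRIx64 " at +0x%" PRIx64 "\n", value, offset);
}

int main(void)
{
    uc_engine *uc;

    uc_open(UC_ARCH_X86, UC_MODE_64, &uc);

    // Map a 4K MMIO range with both callbacks...
    uc_mmio_map(uc, 0x3000, 0x1000, mmio_read, NULL, mmio_write, NULL);

    // ...then drop write access; per this commit, emulated writes should now
    // report the protection error instead of reaching the write callback.
    uc_mem_protect(uc, 0x3000, 0x1000, UC_PROT_READ);

    uc_close(uc);
    return 0;
}
```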