Age | Commit message | Author |
|
Notes:
Merged-By: maximecb <[email protected]>
|
|
With a well-timed OOM around a page switch in the backend, the backend
can return RetryOnNextPage twice and crash due to the assert. (More
places can signal OOM now since VirtualMem tracks the Rust malloc heap
size for --yjit-mem-size.)
Return an error in these cases instead of crashing.
Fixes: https://github.com/Shopify/ruby/issues/566
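A minimal sketch of the failure mode and the fix, with invented names
(`WriteResult`, `emit`) standing in for the real backend types:
```rust
// Hypothetical sketch; the real backend types and control flow differ.
enum WriteResult {
    Ok,
    RetryOnNextPage, // current page is full: switch pages and try again
}

struct OutOfMemory;

// Simulates one attempt to write code; fails when the page has no room.
fn emit(page_has_room: bool) -> WriteResult {
    if page_has_room { WriteResult::Ok } else { WriteResult::RetryOnNextPage }
}

// Before the fix, a second RetryOnNextPage tripped an assert. Now the
// OOM surfaces as an error that the caller can handle.
fn emit_with_retry(room_after_switch: bool) -> Result<(), OutOfMemory> {
    match emit(false) {
        WriteResult::Ok => Ok(()),
        WriteResult::RetryOnNextPage => {
            // ...switch to the next page, then retry exactly once...
            match emit(room_after_switch) {
                WriteResult::Ok => Ok(()),
                WriteResult::RetryOnNextPage => Err(OutOfMemory), // was: assert!
            }
        }
    }
}

fn main() {
    assert!(emit_with_retry(true).is_ok());
    assert!(emit_with_retry(false).is_err()); // OOM is reported, not a crash
}
```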
Notes:
Merged: https://github.com/ruby/ruby/pull/12668
|
|
* YJIT: Spill/load argument registers to reuse blocks
* Mention the immediate function name
* Explain the context behind spill/load operations
Notes:
Merged-By: k0kubun <[email protected]>
|
|
* YJIT: Pass method arguments using registers
* s/at_current_insn/at_compile_target/
* Implement register shuffle
Notes:
Merged-By: k0kubun <[email protected]>
|
|
* YJIT: Allow dev_nodebug to disasm release-mode code
* Revert "YJIT: Squash canary before falling back"
This reverts commit f05ad373d84909da7541bd6d6ace38b48eaf24a1.
The stray canary issue should have been solved by
def7023ee4a3fc6eeba9d3a34c31a5bcff315fac, alleviating this codegen
accommodation.
* s/runtime_assertions/runtime_checks/
---------
Co-authored-by: Alan Wu <[email protected]>
Notes:
Merged-By: k0kubun <[email protected]>
|
|
* YJIT: Local variable register allocation
* locals are not stack temps
* Rename RegTemps to RegMappings
* Rename RegMapping to RegOpnd
* Rename local_size to num_locals
* s/stack value/operand/
* Rename spill_temps() to spill_regs()
* Clarify when num_locals becomes None
* Mention that InsnOut uses different registers
* Rename get_reg_mapping to get_reg_opnd
* Resurrect --yjit-temp-regs capability
* Use MAX_CTX_TEMPS and MAX_CTX_LOCALS
|
|
This change implements a fallback mode for the `--yjit-dump-disasm`
development command-line option to make it usable in release builds.
Previously, using the option with release builds of YJIT yielded only
a warning asking the user to build with `--enable-yjit=dev`.
While builds that use the `disasm` feature still give the best output,
just having the comments is useful enough for many kinds of debugging.
Having it usable in release builds is nice for new hackers, too, since
this allows for tinkering without having to learn how to build YJIT in
development mode.
Sample output on A64:
```
# regenerate_branch
# Insn: 0001 opt_send_without_block (stack_size: 1)
# guard known object with singleton class
0x11f7e0034: 4b 00 00 58 03 00 00 14 08 ce 9c 04 01 00 00
0x11f7e0043: 00 3f 00 0b eb 81 06 01 54 1f 20 03 d5
# RUBY_VM_CHECK_INTS(ec)
0x11f7e0050: 8b 02 42 b8 cb 07 01 35
# stack overflow check
0x11f7e0058: ab 62 02 91 7f 02 0b eb 69 07 01 54
# save PC to CFP
0x11f7e0064: 0b 3b 9a d2 2b 2f a0 f2 0b 00 cc f2 6b 02 00
0x11f7e0073: f8 ab 82 00 91
```
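As a rough illustration of what the fallback produces (hypothetical code,
not the actual YJIT implementation), the release-mode path only needs the
recorded comments and a hex dump of the raw bytes:
```rust
// Hypothetical sketch: print comments interleaved with hex bytes, as in
// the sample above. `comments` holds (offset, text) pairs sorted by offset.
fn dump_block(code: &[u8], comments: &[(usize, &str)]) {
    let mut idx = 0;
    let mut next_comment = comments.iter().peekable();
    while idx < code.len() {
        // Print every comment recorded at the current offset first.
        while next_comment.peek().map_or(false, |(o, _)| *o == idx) {
            println!("# {}", next_comment.next().unwrap().1);
        }
        // Then hex-dump up to 16 bytes, stopping at the next comment.
        let stop = next_comment.peek().map_or(code.len(), |(o, _)| *o);
        let end = stop.min(idx + 16).min(code.len());
        let bytes: Vec<String> = code[idx..end].iter().map(|b| format!("{:02x}", b)).collect();
        println!("0x{:x}: {}", idx, bytes.join(" "));
        idx = end;
    }
}

fn main() {
    let code = [0x4b, 0x00, 0x00, 0x58, 0x8b, 0x02, 0x42, 0xb8];
    let comments = [(0usize, "guard known object"), (4, "RUBY_VM_CHECK_INTS(ec)")];
    dump_block(&code, &comments);
}
```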
To ensure this feature doesn't incur too much cost when running without
the `--yjit-dump-disasm` option, I checked that there is no significant
impact on compile time and memory usage with the `compile_time_ns` and
`yjit_alloc_size` entries in `RubyVM::YJIT.runtime_stats`. For each
sample, I ran 3 iterations of the `lobsters` YJIT benchmark. The
statistics summary was done with the `summary` function in R.
Compile time, sample size of 60, lower is better:
```
Before After
Min. :2.054e+09 Min. :2.028e+09
1st Qu.:2.069e+09 1st Qu.:2.044e+09
Median :2.081e+09 Median :2.060e+09
Mean :2.089e+09 Mean :2.066e+09
3rd Qu.:2.109e+09 3rd Qu.:2.085e+09
Max. :2.146e+09 Max. :2.144e+09
```
Allocation size, sample size of 20, lower is better:
```
Before After
Min. :21804742 Min. :21794082
1st Qu.:21826682 1st Qu.:21816282
Median :21844042 Median :21826814
Mean :21960664 Mean :22026291
3rd Qu.:21861228 3rd Qu.:22040439
Max. :22587426 Max. :22930614
```
The `yjit_alloc_size` samples are noisy, but since the average increased
by only 0.3%, and the median is lower, I feel safe saying that there is
no significant change.
|
|
* YJIT: A64: Add CBZ and CBNZ encoding functions
* YJIT: A64: Use CBZ/CBNZ to check for zero
Instead of emitting `cmp x0, #0` plus `b.z #target`, A64 offers Compare
and Branch on Zero for us to just do `cbz x0, #target`. This commit
utilizes that and the related CBNZ instruction when appropriate.
We check for zero most commonly in interrupt checks:
```diff
# Insn: 0003 leave (stack_size: 1)
# RUBY_VM_CHECK_INTS(ec)
ldur w11, [x20, #0x20]
-tst w11, w11
-b.ne #0x109002164
+cbnz w11, #0x1049021d0
```
* fix copy paste error
Co-authored-by: Randy Stauner <[email protected]>
---------
Co-authored-by: Randy Stauner <[email protected]>
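The lowering decision described above could be sketched like this
(illustrative names only, not the real backend IR):
```rust
// Hypothetical sketch: a compare-against-zero feeding an eq/ne branch
// folds into a single compare-and-branch instruction.
enum Cond { Eq, Ne }

enum Lowered {
    CmpThenB { imm: i64, cond: Cond }, // cmp reg, #imm ; b.cond target
    Cbz,                               // cbz reg, target
    Cbnz,                              // cbnz reg, target
}

fn lower_branch(imm: i64, cond: Cond) -> Lowered {
    match (imm, cond) {
        (0, Cond::Eq) => Lowered::Cbz,
        (0, Cond::Ne) => Lowered::Cbnz,
        (imm, cond) => Lowered::CmpThenB { imm, cond },
    }
}

fn main() {
    // The interrupt check compares the flag register against zero:
    assert!(matches!(lower_branch(0, Cond::Ne), Lowered::Cbnz));
    assert!(matches!(lower_branch(7, Cond::Eq), Lowered::CmpThenB { .. }));
}
```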
|
|
Same idea as the x64 equivalent in c2622b52536c5, removing the register
shuffle that comes from the pop-two, push-one stack motion these VM
instructions perform.
```
# Insn: 0004 opt_or (stack_size: 2)
- orr x11, x1, x9
- mov x1, x11
+ orr x1, x1, x9
```
|
|
This is best understood by looking at the change to the output:
```diff
# Insn: 0002 opt_and (stack_size: 2)
- mov rax, rsi
- and rax, rdi
- mov rsi, rax
+ and rsi, rdi
```
It's a bit awkward to match against due to how stack operands are
lowered, but hey, it's nice to save the 2 unnecessary MOVs.
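A toy sketch of the idea, with made-up helper names: when the destination
stack slot aliases the first input, the ALU op can target it directly
instead of bouncing through a scratch register.
```rust
// Hypothetical sketch, not the real lowering code. opt_and pops two
// operands and pushes one, and the result slot aliases the first input.
fn lower_and(dst: &str, lhs: &str, rhs: &str) -> Vec<String> {
    if dst == lhs {
        vec![format!("and {}, {}", dst, rhs)] // and rsi, rdi
    } else {
        vec![
            format!("mov rax, {}", lhs),
            format!("and rax, {}", rhs),
            format!("mov {}, rax", dst),
        ]
    }
}

fn main() {
    assert_eq!(lower_and("rsi", "rsi", "rdi"), ["and rsi, rdi"]);
}
```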
|
|
* YJIT: A64: Use ADDS/SUBS/CMP (immediate) when possible
We were loading 1 into a register and then doing ADDS/SUBS previously.
That was particularly bad since those come up in fixnum operations.
```diff
# integer left shift with rhs=1
- mov x11, #1
- subs x11, x1, x11
+ subs x11, x1, #1
lsl x12, x11, #1
asr x13, x12, #1
cmp x13, x11
- b.ne #0x106ab60f8
- mov x11, #1
- adds x12, x12, x11
+ b.ne #0x10903a0f8
+ adds x12, x12, #1
mov x1, x12
```
Note that it's fine to cast between i64 and u64 since the bit pattern is
preserved, and the add/sub themselves don't care about the signedness of
the operands.
CMP is just another mnemonic for SUBS.
* YJIT: A64: Split asm.mul() with immediates properly
There is in fact no MUL on A64 that takes an immediate, so this
instruction was using the wrong split method. No current usages of this
form in YJIT.
---------
Co-authored-by: Maxime Chevalier-Boisvert <[email protected]>
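The immediate check behind the first change can be sketched as follows.
This mirrors the A64 encoding rule (a 12-bit unsigned immediate,
optionally shifted left by 12), though the real splitting code differs:
```rust
// Hypothetical sketch: only immediates that don't fit the ADDS/SUBS/CMP
// immediate form need to be materialized into a scratch register first.
fn fits_add_sub_imm(imm: u64) -> bool {
    imm < (1 << 12) || ((imm & 0xfff) == 0 && imm < (1 << 24))
}

fn main() {
    assert!(fits_add_sub_imm(1));         // subs x11, x1, #1
    assert!(fits_add_sub_imm(0x7ff000));  // shifted (LSL #12) form
    assert!(!fits_add_sub_imm(0x123456)); // needs a mov into a scratch reg
}
```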
|
|
* YJIT: Allow non-leaf calls on opt_* insns
* s/on_send_insn/is_sendish/
* Repeat known_cfunc_codegen
|
|
* YJIT: Allow tracing a counted exit
* Avoid clobbering caller-saved registers
|
|
I ran into this while trying to implement setbyte, and was surprised
to find out we hadn't implemented it yet.
|
|
Small PR to add a comment when we clear local variable types,
so we can be aware that it's happening when looking at the disasm.
|
|
YJIT: Avoid doubly splitting Opnd::Value
|
|
Previously, PosMarker callbacks ran even when the assembler failed to
assemble its contents due to insufficient space. This was problematic
because when Assembler::compile() failed, the callbacks were given
positions that have no valid code, contrary to general expectation.
For example, we use a PosMarker callback to record VM instruction
boundaries and patch in jumps to exits in case the guest program starts
tracing. However, previously, we could record a location near the end of
the code block, where there is no space to patch in jumps. I suspect
this is the cause of the recent occurrences of rare random failures on
GitHub Actions with the invariants.rs:529 "can rewrite existing code"
message. `--yjit-perf` also uses PosMarker and had a similar issue.
Buffer the list of callbacks to fire, and only fire them when all code
in the assembler is written out successfully. It's more intuitive this
way.
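A simplified sketch of the buffering approach, with invented types:
```rust
// Hypothetical sketch; the real Assembler and PosMarker API differ.
type PosMarkerFn = Box<dyn Fn(usize)>;

struct Assembler {
    pending_markers: Vec<(PosMarkerFn, usize)>,
}

impl Assembler {
    fn compile(&mut self, out_of_space: bool) -> Result<(), ()> {
        // ...write instructions, recording marker positions as we go...
        self.pending_markers.push((Box::new(|pos| println!("marker at {}", pos)), 0));
        if out_of_space {
            // Failed: drop the markers instead of firing them with
            // positions that point at invalid code.
            self.pending_markers.clear();
            return Err(());
        }
        // Success: fire all buffered callbacks.
        for (callback, pos) in self.pending_markers.drain(..) {
            callback(pos);
        }
        Ok(())
    }
}

fn main() {
    let mut asm = Assembler { pending_markers: Vec::new() };
    assert!(asm.compile(false).is_ok()); // marker fires
    assert!(asm.compile(true).is_err()); // marker never fires
}
```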
|
|
We've long had a size restriction on the code memory region such that a
u32 could refer to everything. This commit capitalizes on that
restriction by shrinking `CodePtr` from 8 bytes to 4.
To derive a full raw pointer from a `CodePtr`, one needs a base pointer.
Both `CodeBlock` and `VirtualMemory` can be used for this purpose. The
base pointer is readily available everywhere, except for in the case of
the `jit_return` "branch". Generalize lea_label() to lea_jump_target()
in the IR to delay deriving the `jit_return` address until `compile()`,
when the base pointer is available.
On railsbench, this yields roughly a 1% reduction to `yjit_alloc_size`
(58,397,765 to 57,742,248).
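A sketch of the representation (the method name and details here are
illustrative): the pointer is stored as a 32-bit offset, and a raw
pointer is derived on demand from a base address.
```rust
// Hypothetical sketch of a 4-byte code pointer.
#[derive(Clone, Copy)]
struct CodePtr(u32); // 4 bytes instead of a full 8-byte pointer

impl CodePtr {
    fn raw_ptr(self, base: *const u8) -> *const u8 {
        // Valid as long as the offset stays within the mapped region.
        base.wrapping_add(self.0 as usize)
    }
}

fn main() {
    let region = [0u8; 16];
    let ptr = CodePtr(8);
    assert_eq!(ptr.raw_ptr(region.as_ptr()), region[8..].as_ptr());
    assert_eq!(std::mem::size_of::<CodePtr>(), 4);
}
```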
|
|
So that we get a reminder to check CodeBlock::has_dropped_bytes().
Internally, asm.compile() already checks it, and this patch just
propagates it out to the caller with a `#[must_use]`.
The code GC logic moved out one level in entry_stub_hit(), so the body
can freely use `?`.
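Roughly, the signature change looks like this (simplified, hypothetical
types):
```rust
// Hypothetical sketch: with #[must_use], a caller that ignores the
// result gets a compiler warning reminding it to check for dropped bytes.
struct CodeBlock { dropped_bytes: bool }

struct CompileError;

#[must_use]
fn compile(cb: &mut CodeBlock) -> Result<u32, CompileError> {
    if cb.dropped_bytes { Err(CompileError) } else { Ok(0) }
}

fn main() {
    let mut cb = CodeBlock { dropped_bytes: false };
    // A bare `compile(&mut cb);` would warn, so the check is forced:
    if compile(&mut cb).is_err() {
        // bail out / schedule code GC
    }
}
```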
|
|
Co-authored-by: Alan Wu <[email protected]>
|
|
> note: `#[deny(clippy::redundant_locals)]` on by default
On Rust 1.73.0.
|
|
Previously, at the end of `leave` we did
`*caller_cfp->sp = return_value`, like the interpreter.
With future changes that leave the SP field uninitialized for C frames,
this will become problematic. For cases like returning from
`rb_funcall()`, the return value was written above the stack and
never read anyway (callers use the copy in the return register).
Leave the return value in a register at the end of `leave` and have the
code at `cfp->jit_return` decide what to do with it. This avoids the
unnecessary memory write mentioned above. For JIT-to-JIT returns, it goes
through `asm.stack_push()` and benefits from register allocation for
stack temporaries.
Mostly flat on benchmarks, with maybe some marginal speed improvements.
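A conceptual sketch of the new contract, with invented names; the real
code is generated assembly, not a Rust function:
```rust
// Hypothetical sketch: `leave` finishes with the return value in the
// return register, and the code at cfp->jit_return decides its fate.
fn jit_return_landing(ret_val: u64, returning_to_jit: bool, virt_stack: &mut Vec<u64>) -> u64 {
    if returning_to_jit {
        // JIT-to-JIT return: goes through asm.stack_push(), so the value
        // can live in a register-allocated stack temporary.
        virt_stack.push(ret_val);
    }
    // Returning to C (e.g. rb_funcall()): the caller reads the return
    // register directly, so no write above the stack is needed.
    ret_val
}

fn main() {
    let mut stack = Vec::new();
    assert_eq!(jit_return_landing(42, true, &mut stack), 42);
    assert_eq!(stack, [42]);
}
```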
Co-authored-by: Takashi Kokubun <[email protected]>
|
|
* YJIT: Chain-guard opt_mult overflow
* YJIT: Support regenerating Jo after Mul
|
|
* YJIT: Avoid creating a vector in get_temp_regs()
Co-authored-by: Alan Wu <[email protected]>
* Remove unused import
---------
Co-authored-by: Alan Wu <[email protected]>
Co-authored-by: Alan Wu <[email protected]>
|
|
* YJIT: Skip Insn::Comment and format! if disasm is disabled
Co-authored-by: Alan Wu <[email protected]>
* YJIT: Get rid of asm.comment
---------
Co-authored-by: Alan Wu <[email protected]>
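The pattern could look roughly like this (hypothetical sketch; the real
macro and assembler differ): the enabled check happens before format!()
runs, so disabled runs pay neither for string formatting nor for a
comment instruction.
```rust
// Hypothetical sketch of a comment macro that is cheap when disabled.
macro_rules! asm_comment {
    ($asm:expr, $($fmt:tt)*) => {
        if $asm.comments_enabled {
            $asm.comments.push(format!($($fmt)*));
        }
    };
}

struct Assembler {
    comments_enabled: bool,
    comments: Vec<String>,
}

fn main() {
    let mut asm = Assembler { comments_enabled: false, comments: Vec::new() };
    // format!() never runs here because the flag is checked first.
    asm_comment!(asm, "stack_size: {}", 1);
    assert!(asm.comments.is_empty());
}
```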
Notes:
Merged-By: k0kubun <[email protected]>
|
|
Notes:
Merged-By: k0kubun <[email protected]>
|
|
Notes:
Merged-By: maximecb <[email protected]>
|
|
The ARM backend allows for this, so let's make x64 consistent.
Notes:
Merged: https://github.com/ruby/ruby/pull/8263
Merged-By: XrXr
|
|
* YJIT: implement fast path for integer multiplication in opt_mult
* Update yjit/src/codegen.rs
Co-authored-by: Alan Wu <[email protected]>
* Implement mul with overflow checking on arm64
* Fix missing semicolon
* Add arm splitting for lshift, rshift, urshift
---------
Co-authored-by: Alan Wu <[email protected]>
Notes:
Merged-By: maximecb <[email protected]>
|
|
* YJIT: implement codegen for rb_int_lshift
* Update yjit/src/asm/x86_64/mod.rs
Co-authored-by: Takashi Kokubun <[email protected]>
---------
Co-authored-by: Takashi Kokubun <[email protected]>
Notes:
Merged-By: maximecb <[email protected]>
|
|
Avoid generating long dispatch chains for all array lengths seen.
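Conceptually, the guard chain is capped at a fixed depth, something like
this hedged sketch (the names and the limit are invented):
```rust
// Hypothetical sketch: after a fixed number of specialized versions for
// different array lengths, fall back to a side exit instead of growing
// the chain indefinitely.
const CHAIN_DEPTH_LIMIT: u8 = 5;

enum Branch { GuardLen(usize), SideExit }

fn next_guard(chain_depth: u8, seen_len: usize) -> Branch {
    if chain_depth < CHAIN_DEPTH_LIMIT {
        Branch::GuardLen(seen_len) // specialize for this length too
    } else {
        Branch::SideExit // too polymorphic: exit to the interpreter
    }
}

fn main() {
    assert!(matches!(next_guard(0, 3), Branch::GuardLen(3)));
    assert!(matches!(next_guard(5, 9), Branch::SideExit));
}
```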
Notes:
Merged-By: maximecb <[email protected]>
|
|
Notes:
Merged-By: maximecb <[email protected]>
|
|
* YJIT: handle expandarray_rhs_too_small case
* YJIT: fix csel bug in x86 backend, add test
* Remove commented out lines
* Refactor expandarray to use chain guards
* Propagate Type::Nil when known
* Update yjit/src/codegen.rs
Co-authored-by: Takashi Kokubun <[email protected]>
* Add missing counter, use get_array_ptr() in expandarray
* Make change suggested by Kokubun to reuse loop
---------
Co-authored-by: Takashi Kokubun <[email protected]>
Notes:
Merged-By: maximecb <[email protected]>
|
|
This reverts commit 3b88a0bee841aee77bee306d9d34e587561515cf.
That commit broke the aarch64 platform and Apple Silicon.
|
|
* YJIT: handle expandarray_rhs_too_small case
* YJIT: fix csel bug in x86 backend, add test
* Remove commented out lines
Notes:
Merged-By: maximecb <[email protected]>
|
|
Notes:
Merged-By: maximecb <[email protected]>
|
|
YJIT: implement missing jg instruction in backend
While trying to implement a specialized integer left shift, I ran
into a problem where we have no way to do a greater-than comparison
at the moment. Surprising that we went this far without ever needing it.
Notes:
Merged-By: maximecb <[email protected]>
|
|
* YJIT: Use registers to pass stack temps to C calls
* YJIT: Update comments in ccall
|