For 32-bit variants of logical instructions such as AND, there is a
constraint that the N bit of the bitmask immediate must be 0. We
weren't respecting this condition previously and were silently
emitting undefined instructions.
Check for this condition in the assembler and tweak the backend to
correctly detect whether a number can be encoded as an immediate in a
32-bit logical instruction. Due to the nature of the immediate
encoding, the same numeric value encodes differently depending on the
size of the register the instruction works on; a sketch of the 32-bit
check follows below.
We currently don't have cases where we use 32-bit immediates, but we
ran into this encoding issue during development.
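A minimal sketch of the check, assuming this shape for the helpers;
yjit's actual functions live in the A64 assembler and may be
structured differently:

```rust
/// Illustrative helper (not yjit's exact code): returns true if `val`
/// is encodable as an A64 bitmask immediate, i.e. a repeating element
/// whose set bits form one contiguous run under rotation. All-zeros
/// and all-ones are famously unencodable.
fn is_bitmask_immediate(val: u64) -> bool {
    if val == 0 || val == u64::MAX {
        return false;
    }
    for size in [2u32, 4, 8, 16, 32, 64] {
        let mask = if size == 64 { u64::MAX } else { (1u64 << size) - 1 };
        let elem = val & mask;

        // The element must repeat across the whole 64-bit register.
        let mut repeated = elem;
        let mut width = size;
        while width < 64 {
            repeated |= repeated << width;
            width *= 2;
        }
        if repeated != val {
            continue;
        }

        // Within the element, the ones must be contiguous under some
        // rotation. A run anchored at bit 0 satisfies x & (x + 1) == 0.
        for rot in 0..size {
            let rotated = if rot == 0 {
                elem
            } else {
                ((elem >> rot) | (elem << (size - rot))) & mask
            };
            if rotated != 0 && rotated & (rotated + 1) == 0 {
                return true;
            }
        }
    }
    false
}

/// For a 32-bit logical instruction the N field must be 0, which is
/// equivalent to the pattern repeating within 32 bits. Replicating the
/// value across 64 bits captures exactly that: any encoding of the
/// doubled value has an element size of at most 32, hence N == 0.
fn is_32_bit_bitmask_imm(value: u32) -> bool {
    let doubled = ((value as u64) << 32) | value as u64;
    is_bitmask_immediate(doubled)
}
```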
Notes:
Merged-By: maximecb <[email protected]>
* Change IncrCounter lowering on AArch64
Previously we were using LDADDAL, which is not available on
Graviton 1 chips. Instead, we now emit an exclusive load/store
loop using the LDAXR/STLXR instructions (sketched after this list).
* Update yjit/src/backend/arm64/mod.rs
Co-authored-by: Maxime Chevalier-Boisvert <[email protected]>
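A minimal illustration of the sequence, not the backend's literal
output, written as inline assembly so the instructions are visible:

```rust
/// Atomically increment a 64-bit counter with an exclusive
/// load/store pair instead of LDADDAL. Compiles on aarch64 only.
#[cfg(target_arch = "aarch64")]
unsafe fn incr_counter(counter: *mut u64) {
    use std::arch::asm;
    asm!(
        "2:",
        "ldaxr {tmp}, [{ptr}]",         // exclusive load-acquire
        "add {tmp}, {tmp}, #1",         // increment the counter
        "stlxr {ok:w}, {tmp}, [{ptr}]", // exclusive store-release
        "cbnz {ok:w}, 2b",              // reservation lost: retry
        ptr = in(reg) counter,
        tmp = out(reg) _,
        ok = out(reg) _,
    );
}
```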
Notes:
Merged-By: maximecb <[email protected]>
* YJIT: Add Opnd#sub_opnd to use only 8 bits
* Add with_num_bits and let arm64_split use it
* Add another assertion to with_num_bits
* Use only with_num_bits (see the sketch after this list)
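A hypothetical sketch of the with_num_bits idea; yjit's real Opnd has
many more variants and its assertions may differ. The point is that
the same operand can be reinterpreted at a narrower width:

```rust
#[derive(Copy, Clone, Debug)]
enum Opnd {
    Reg { reg_no: u8, num_bits: u8 },
}

impl Opnd {
    fn with_num_bits(self, num_bits: u8) -> Opnd {
        // Only power-of-two widths up to the native width make sense.
        assert!(matches!(num_bits, 8 | 16 | 32 | 64));
        match self {
            Opnd::Reg { reg_no, num_bits: old } => {
                // Narrowing only: an 8-bit view of a 64-bit register
                // is fine, but widening would invent bits.
                assert!(num_bits <= old);
                Opnd::Reg { reg_no, num_bits }
            }
        }
    }
}
```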
Notes:
Merged-By: maximecb <[email protected]>
* Introduce InstructionOffset for AArch64
There are a lot of instructions on AArch64 that take an offset
from PC measured in instructions, either for loading a value
relative to PC or for jumping.
We had been accepting either an A64Opnd or an i32, which got
confusing and inconsistent: sometimes you divide by 4 to get the
number of instructions, sometimes you multiply by 4 to get the
number of bytes.
This commit adds a struct that wraps an i32 to keep all of that
logic in one place. It makes it much easier to read and reason
about how these offsets are used (see the sketch after this list).
* Use b instruction when the offset fits on AArch64
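A minimal sketch of the wrapper, assuming this shape; the real struct
in yjit/src/backend/arm64/ may differ. It always stores the distance
in instructions, so byte/instruction conversions live in one place
instead of being scattered as *4 and /4 at call sites:

```rust
#[derive(Copy, Clone, Debug, PartialEq)]
struct InstructionOffset(i32);

impl InstructionOffset {
    /// Build from a count of instructions.
    fn from_insns(insns: i32) -> Self {
        InstructionOffset(insns)
    }

    /// Build from a byte distance; A64 instructions are 4 bytes wide.
    fn from_bytes(bytes: i32) -> Self {
        assert_eq!(bytes % 4, 0, "byte offset must be instruction-aligned");
        InstructionOffset(bytes / 4)
    }

    fn to_bytes(self) -> i32 {
        self.0 * 4
    }
}
```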
Notes:
Merged-By: maximecb <[email protected]>
* Better b.cond usage on AArch64
When lowering a conditional jump, we previously had a somewhat
complicated setup: we would emit a conditional jump that skipped
over an unconditional jump placed immediately after it, then write
out the destination and branch through a register.
Now we use the b.cond instruction directly when the offset fits
(not common, but not unused either); when it doesn't, we emit the
inverse condition to jump past loading the destination and
branching through a register.
* Added an inverse fn for Condition (#443)
Prevents the need to pass two params and potentially reduces
errors (see the sketch after this list).
Co-authored-by: Jimmy Miller <[email protected]>
Co-authored-by: Maxime Chevalier-Boisvert <[email protected]>
Co-authored-by: Jimmy Miller <[email protected]>
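A minimal sketch of the idea using the architectural condition codes;
yjit's actual Condition type may be structured differently:

```rust
#[derive(Copy, Clone, Debug, PartialEq)]
enum Condition {
    EQ, NE, HS, LO, MI, PL, VS, VC, HI, LS, GE, LT, GT, LE,
}

impl Condition {
    /// Each A64 condition pairs with its complement by flipping the
    /// low bit of the 4-bit encoding (EQ<->NE, GE<->LT, ...). AL has
    /// no useful inverse, so it is omitted here.
    fn inverse(self) -> Condition {
        use Condition::*;
        match self {
            EQ => NE, NE => EQ,
            HS => LO, LO => HS,
            MI => PL, PL => MI,
            VS => VC, VC => VS,
            HI => LS, LS => HI,
            GE => LT, LT => GE,
            GT => LE, LE => GT,
        }
    }
}
```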
Notes:
Merged-By: maximecb <[email protected]>
There are a lot of times when encoding AArch64 instructions that we
need to represent an integer value with a custom fixed width. For
example, the offset for a B instruction is 26 bits, so we store an
i32 on the instruction struct and then mask it when we encode.
We've been doing this masking everywhere, which has worked, but
it's getting a bit copy-pasty all over the place. This commit
centralizes that logic to make sure we stay consistent.
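A minimal sketch of the centralized helper, assuming this shape; the
real one in yjit's A64 assembler may differ:

```rust
/// Mask a signed immediate down to WIDTH bits for splicing into an
/// encoding, debug-checking that no significant bits were lost.
fn truncate_imm<const WIDTH: u32>(imm: i32) -> u32 {
    assert!(WIDTH > 0 && WIDTH < 32);
    let masked = (imm as u32) & ((1u32 << WIDTH) - 1);
    // The value must survive a round-trip through WIDTH-bit sign
    // extension (this variant assumes signed immediates, like branch
    // offsets).
    debug_assert_eq!(((masked << (32 - WIDTH)) as i32) >> (32 - WIDTH), imm);
    masked
}

// Example: a B instruction stores its offset (in instructions) in
// 26 bits, so the encoder would use truncate_imm::<26>(offset_insns).
```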
Notes:
Merged: https://github.com/ruby/ruby/pull/6289
* When storing an immediate 0 value at a memory address, we can use
STUR XZR, [Xd] instead of loading 0 into a register and then
storing that register (see the sketch after this list).
* When moving 0 into an argument register, we can use
MOV Xd, XZR instead of loading the value into a register first.
* In the newarray instruction, we can skip looking at the stack
entirely when the number of values is 0.
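A hypothetical sketch of the first rewrite, with illustrative types
rather than yjit's real Opnd enum: when the stored value is the
immediate 0, substitute the zero register so no scratch load is
needed.

```rust
#[derive(Copy, Clone, Debug, PartialEq)]
enum Opnd {
    Imm(i64),
    Reg(u8),
}

// Register number 31 reads as zero when used as a store source.
const XZR: Opnd = Opnd::Reg(31);

fn split_store_value(value: Opnd) -> Opnd {
    match value {
        // STUR XZR, [Xd] instead of MOV scratch, #0 + STUR scratch, [Xd].
        Opnd::Imm(0) => XZR,
        other => other,
    }
}
```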
Notes:
Merged: https://github.com/ruby/ruby/pull/6289
(https://github.com/Shopify/ruby/pull/382)
* LDR instruction for AArch64
* Split loads in arm64_split when memory address displacements do not fit
* Left and right shift for IR
* Update yjit/src/backend/x86_64/mod.rs
Co-authored-by: Alan Wu <[email protected]>
Co-authored-by: Maxime Chevalier-Boisvert <[email protected]>
(https://github.com/Shopify/ruby/pull/344)
* A64: Fix off-by-one in offset encoding for BL
It's relative to the address of the instruction, not the end of it
(see the sketch after this list).
* A64: Fix off-by-one when encoding B
It's relative to the start of the instruction, not the end.
* A64: Add some tests for boundary offsets
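A minimal sketch of the convention these fixes enforce: B and BL
offsets are measured from the address of the branch itself, so the
distance is (target - branch_addr), not (target - branch_addr - 4).

```rust
fn branch_offset_insns(branch_addr: u64, target_addr: u64) -> i32 {
    // Signed distance from the branch instruction's own address.
    let byte_offset = target_addr.wrapping_sub(branch_addr) as i64;
    assert_eq!(byte_offset % 4, 0, "A64 branch targets are 4-byte aligned");
    (byte_offset / 4) as i32
}
```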
* Fix conditional jumps to label
* Bitmask immediates cannot be u64::MAX (the encoding has no
all-ones pattern)
* Better splitting for Op::Add, Op::Sub, and Op::Cmp
* Split stores when the displacement is too large
* Use shifted immediate arguments (see the sketch after this list)
* Split all places where shifted immediates are used
* Add more tests to the Cirrus workflow
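A minimal sketch of the fits-check behind the splitting, assuming the
standard A64 rule: ADD/SUB (immediate) take a 12-bit unsigned value,
optionally shifted left by 12; anything else must go through a
register.

```rust
fn shifted_imm_fits(value: u64) -> bool {
    value < (1u64 << 12) || ((value & 0xfff) == 0 && value < (1u64 << 24))
}

// shifted_imm_fits(4095)     == true  (plain imm12)
// shifted_imm_fits(0x201000) == true  (0x201 << 12)
// shifted_imm_fits(0x201001) == false (split into a register load)
```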
* Fix bitmask encoding to u32
* Fix splitting for Op::And to account for bitmask immediate
(https://github.com/Shopify/ruby/pull/335)
(https://github.com/Shopify/ruby/pull/329)
* Move to/from SP on AArch64
* Consolidate loads and stores
* Implement LDR post-index and LDR pre-index for AArch64
* Implement STR post-index and STR pre-index for AArch64
* Module entrypoints for LDR pre/post-index and STR pre/post-index
* Use STR (pre-index) and LDR (post-index) to implement push/pop
(illustrated after this list)
* Go back to using MOV for to/from SP
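A minimal illustration of the push/pop lowering, not the backend's
literal output: push is a pre-index store that decrements SP before
writing, pop is a post-index load that increments SP after reading.
SP must stay 16-byte aligned, hence the 16-byte stride.

```rust
#[cfg(target_arch = "aarch64")]
unsafe fn push_pop_roundtrip(value: u64) -> u64 {
    use std::arch::asm;
    let out: u64;
    asm!(
        "str {v}, [sp, #-16]!", // push: SP -= 16, then store
        "ldr {o}, [sp], #16",   // pop: load, then SP += 16
        v = in(reg) value,
        o = out(reg) out,
    );
    out
}
```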
* CSEL on AArch64
* Implement various Op::CSel* instructions
* Port print_int to the new backend
* Tests for print_int and print_str
* ADR and ADRP for AArch64
* Implement Op::Jbe on X86
* Lera instruction
* Op::BakeString
* LeaPC -> LeaLabel
* Port print_str to the new backend
* Port print_value to the new backend
* Port print_ptr to the new backend
* Write null-terminators in Op::BakeString
* Fix up rebase issues on print-str port
* Add back in panic for X86 backend for unsupported instructions being lowered
* Fix target architecture
Instructions for pushing all caller-save registers and the flags so that
we can implement dump_insns.
Previously we were using a `Box<dyn FnOnce>` to support patching the
code when jumping to labels. We needed to do this because some of the
closures that were being used to patch needed to capture local variables
(on both X86 and ARM it was the type of condition for the conditional
jumps).
To get around that, we can instead use const generics, since the
condition codes are always known at compile time. The patch
functions then go from capturing closures to monomorphic functions,
so they can be stored as plain `fn` pointers instead of
`Box<dyn FnOnce>`. This simplifies the storage of the `LabelRef`
structs and should hopefully be a better default going forward; a
sketch of the pattern follows below.
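A minimal sketch with stand-in types: parameterizing the patch
function by the condition code via const generics makes each
instantiation monomorphic, so it coerces to a plain `fn` pointer and
nothing needs to be captured or boxed.

```rust
struct CodeBlock; // stand-in for the real code buffer type

fn patch_bcond<const COND: u8>(_cb: &mut CodeBlock, src_addr: i64, dst_addr: i64) {
    let _offset = dst_addr - src_addr;
    let _cond_bits = COND;
    // ... encode b.cond with COND and the computed offset ...
}

// The LabelRef can now hold a function pointer, no Box<dyn FnOnce>:
type PatchFn = fn(&mut CodeBlock, i64, i64);
const PATCH_EQ: PatchFn = patch_bcond::<0b0000>; // EQ condition code
```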
* More Arm64 lowering/backend work
* We now have encoding support for the LDR instruction for loading a PC-relative memory location
* You can now call add/adds/sub/subs with signed immediates, which switch appropriately based on sign
* We can now load immediates into registers appropriately, attempting to keep the number of instructions minimal (sketched after this list):
* If it fits into 16 bits, we use just a single movz.
* Else, if it can be encoded as a bitmask immediate, we use a single mov.
* Otherwise we use a movz, a movk, and then optionally another one or two movks.
* Fixed a bunch of code to do with the Op::Load opcode.
* We now handle GC offsets properly for Op::Load by skipping over them with a jump instruction. (This will be made better by constant pools in the future.)
* Op::Lea is doing what it's supposed to do now.
* Fixed a bug in the backend tests to do with not using the result of an Op::Add.
* Fix the remaining tests for Arm64
* Move split loads logic into each backend
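A minimal sketch of the immediate-loading strategy described above.
The emit_* helpers and is_bitmask_immediate are stand-ins for the
real assembler entrypoints, stubbed here so the flow is
self-contained (a fuller bitmask check is sketched earlier in this
log):

```rust
fn emit_movz(chunk: u16, shift: usize) { println!("movz #{chunk:#x}, lsl #{shift}"); }
fn emit_movk(chunk: u16, shift: usize) { println!("movk #{chunk:#x}, lsl #{shift}"); }
fn emit_mov_bitmask(value: u64) { println!("mov (orr-immediate alias) #{value:#x}"); }
fn is_bitmask_immediate(_value: u64) -> bool { false } // stub

fn load_immediate(value: u64) {
    if value < (1 << 16) {
        // Fits in 16 bits: a single MOVZ.
        emit_movz(value as u16, 0);
    } else if is_bitmask_immediate(value) {
        // A single MOV (the bitmask-immediate alias of ORR).
        emit_mov_bitmask(value);
    } else {
        // MOVZ for the first nonzero 16-bit chunk, MOVK for the rest.
        let mut first = true;
        for shift in (0..64).step_by(16) {
            let chunk = ((value >> shift) & 0xffff) as u16;
            if chunk != 0 {
                if first {
                    emit_movz(chunk, shift);
                    first = false;
                } else {
                    emit_movk(chunk, shift);
                }
            }
        }
    }
}
```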
* Get initial wiring up
* Split IncrCounter instruction
* Breakpoints in Arm64
* Support for ORR
* MOV instruction encodings
* Implement JmpOpnd and CRet
* Add ORN
* Add MVN
* PUSH, POP, CCALL for Arm64
* Some formatting and implement Op::Not for Arm64
* Consistent constants when working with the Arm64 SP
* Allow OR-ing values into the memory buffer
* Test lowering Arm64 ADD
* Emit unconditional jumps consistently in Arm64
* Begin emitting conditional jumps for A64
* Back out some labelref changes
* Remove label API that no longer exists
* Use a trait for the label encoders
* Encode nop
* Add in nops so jumps are the same width no matter what on Arm64
* Op::Jbe for CodePtr
* Pass src_addr and dst_addr instead of calculated offset to label refs
* Even more jump work for Arm64
* Fix up jumps to use consistent assertions
* Handle splitting Add, Sub, and Not insns for Arm64
* More Arm64 splits and various fixes
* PR feedback for Arm64 support
* Split up jumps and conditional jump logic
* LSL and LSR
* B.cond
* Move A64 files around to make more sense
* offset -> byte_offset for bcond
* Add TST instruction and AND/ANDS entrypoints for immediates
* TST/AND/ANDS for registers
* CMP instruction
* LDADDAL instruction
* STUR
* BL instruction
* Remove num_bits from imm and uimm
* Tests for imm_fits_bits and uimm_fits_bits
* Reorder arguments to LDADDAL
* MOVK instruction
* More tests for the A64 entrypoints
* Finish testing entrypoints
* MOVZ
* BR instruction
* LDUR
* Fix up immediate masking
* Consume operands directly
* Consistency and cleanup
* More consistency and entrypoints
* Cleaner syntax for masks
* Cleaner shifting for encodings
* Initial setup for aarch64
* ADDS and SUBS
* ADD and SUB for immediates
* Revert moved code
* Documentation
* Rename Arm64* to A64*
* Comments on shift types
* Share sig_imm_size and unsig_imm_size