author    | Aaron Patterson <[email protected]> | 2024-04-15 10:48:53 -0700
committer | Aaron Patterson <[email protected]> | 2024-06-18 09:28:25 -0700
commit    | cdf33ed5f37f9649c482c3ba1d245f0d80ac01ce
tree      | 1f8ef823ad788ee2c57d2a438f04e2ba7b1800e7 /vm_insnhelper.c
parent    | 921f22e563d6372d9f853f87d48b11c479126da9
Optimized forwarding callers and callees
This patch optimizes forwarding callers and callees. It only optimizes methods whose sole parameter is `...`, and which then pass `...` on to other calls.
The calls it optimizes look like this:
```ruby
def bar(a) = a
def foo(...) = bar(...) # optimized
foo(123)
```
```ruby
def bar(a) = a
def foo(...) = bar(1, 2, ...) # optimized
foo(123)
```
```ruby
def bar(*a) = a
def foo(...)
list = [1, 2]
bar(*list, ...) # optimized
end
foo(123)
```
All variants of the above using `super` are also optimized, including a bare `super` like this:
```ruby
def foo(...)
super
end
```
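To make the `super` variants concrete, here is a small sketch (the class and method names are hypothetical, not from the patch); it assumes a Ruby new enough to allow leading arguments before `...` at a call site:

```ruby
class Parent
  def greet(greeting, name)
    "#{greeting}, #{name}!"
  end
end

class Child < Parent
  # Bare super: forwards all of greet's arguments implicitly.
  def greet(...)
    super
  end
end

class LoudChild < Parent
  # A leading argument combined with the forwarded `...`.
  def greet(...)
    super("HELLO", ...)
  end
end

Child.new.greet("hi", "ruby")  # => "hi, ruby!"
LoudChild.new.greet("ruby")    # => "HELLO, ruby!"
```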
This patch eliminates intermediate allocations made when calling methods that accept `...`.
We can observe allocation elimination like this:
```ruby
def m
x = GC.stat(:total_allocated_objects)
yield
GC.stat(:total_allocated_objects) - x
end
def bar(a) = a
def foo(...) = bar(...)
def test
m { foo(123) }
end
test
p test # allocates 1 object on master, but 0 objects with this patch
```
```ruby
def bar(a, b:) = a + b
def foo(...) = bar(...)
def test
m { foo(1, b: 2) }
end
test
p test # allocates 2 objects on master, but 0 objects with this patch
```
How does it work?
-----------------
This patch works by using a dynamic stack size when passing forwarded parameters to callees.
The caller's info object (known as the "CI") contains the stack size of the
parameters, so we pass the CI object itself as a parameter to the callee.
When forwarding parameters, the forwarding ISeq uses the caller's CI to determine how much stack to copy, then copies the caller's stack before calling the callee.
The CI at the forwarded call site is adjusted using information from the caller's CI.
I think this description is kind of confusing, so let's walk through an example with code.
```ruby
def delegatee(a, b) = a + b
def delegator(...)
delegatee(...) # CI2 (FORWARDING)
end
def caller
delegator(1, 2) # CI1 (argc: 2)
end
```
Before we call the delegator method, the stack looks like this:
```
Executing Line | Code | Stack
---------------+---------------------------------------+--------
1| def delegatee(a, b) = a + b | self
2| | 1
3| def delegator(...) | 2
4| # |
5| delegatee(...) # CI2 (FORWARDING) |
6| end |
7| |
8| def caller |
-> 9| delegator(1, 2) # CI1 (argc: 2) |
10| end |
```
The ISeq for `delegator` is tagged as "forwardable", so when `caller` calls in
to `delegator`, it writes `CI1` on to the stack as a local variable for the
`delegator` method. The `delegator` method has a special local called `...`
that holds the caller's CI object.
Here is the ISeq disasm for `delegator`:
```
== disasm: #<ISeq:delegator@-e:1 (1,0)-(1,39)>
local table (size: 1, argc: 0 [opts: 0, rest: -1, post: 0, block: -1, kw: -1@-1, kwrest: -1])
[ 1] "..."@0
0000 putself ( 1)[LiCa]
0001 getlocal_WC_0 "..."@0
0003 send <calldata!mid:delegatee, argc:0, FCALL|FORWARDING>, nil
0006 leave [Re]
```
The local called `...` will contain the caller's CI: CI1.
Here is the stack when we enter `delegator`:
```
Executing Line | Code | Stack
---------------+---------------------------------------+--------
1| def delegatee(a, b) = a + b | self
2| | 1
3| def delegator(...) | 2
-> 4| # | CI1 (argc: 2)
5| delegatee(...) # CI2 (FORWARDING) | cref_or_me
6| end | specval
7| | type
8| def caller |
9| delegator(1, 2) # CI1 (argc: 2) |
10| end |
```
The CI at `delegatee` on line 5 is tagged as "FORWARDING", so it knows to
memcpy the caller's stack before calling `delegatee`. In this case, it will
memcpy self, 1, and 2 to the stack before calling `delegatee`. It knows how much
memory to copy from the caller because `CI1` contains stack size information
(argc: 2).
Before executing the `send` instruction, we push `...` on the stack. The
`send` instruction pops `...`, and because it is tagged with `FORWARDING`, it
knows to memcpy (using the information in the CI it just popped):
```
== disasm: #<ISeq:delegator@-e:1 (1,0)-(1,39)>
local table (size: 1, argc: 0 [opts: 0, rest: -1, post: 0, block: -1, kw: -1@-1, kwrest: -1])
[ 1] "..."@0
0000 putself ( 1)[LiCa]
0001 getlocal_WC_0 "..."@0
0003 send <calldata!mid:delegatee, argc:0, FCALL|FORWARDING>, nil
0006 leave [Re]
```
Instruction 0001 puts the caller's CI on the stack. `send` is tagged with
FORWARDING, so it reads the CI and _copies_ the caller's stack to this stack:
```
Executing Line | Code | Stack
---------------+---------------------------------------+--------
1| def delegatee(a, b) = a + b | self
2| | 1
3| def delegator(...) | 2
4| # | CI1 (argc: 2)
-> 5| delegatee(...) # CI2 (FORWARDING) | cref_or_me
6| end | specval
7| | type
8| def caller | self
9| delegator(1, 2) # CI1 (argc: 2) | 1
10| end | 2
```
The "FORWARDING" call site combines information from CI1 with CI2 in order
to support passing other values in addition to the `...` value, as well as
to perfectly forward splat args, kwargs, etc.
Since we can copy the stack from `caller` into `delegator`'s stack, we avoid
allocating objects.
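As an illustration of combining CI1 and CI2 (method names here are hypothetical, not from the patch), a forwarding call site can supply its own leading arguments while the `...` portion fills in the caller's positionals, kwargs, and block:

```ruby
def sink(a, b, c, k:, &blk)
  [a, b, c, k, blk.call]
end

def pass_through(...)
  # Leading args 1 and 2 come from this call site's CI (CI2);
  # the forwarded 3, k:, and block come from the caller's CI (CI1).
  sink(1, 2, ...)
end

pass_through(3, k: :kw) { :blk }  # => [1, 2, 3, :kw, :blk]
```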
I want to do this to eliminate object allocations for delegate methods.
My long-term goal is to implement `Class#new` in Ruby, and its implementation
uses `...`. I was able to implement `Class#new` in Ruby
[here](https://github.com/ruby/ruby/pull/9289).
If we adopt the technique in this patch, then we can optimize object
allocation for classes whose `initialize` takes keyword parameters.
For example, this code will allocate 2 objects: one for `SomeObject`, and one
for the kwargs:
```ruby
SomeObject.new(foo: 1)
```
If we combine this technique with a Ruby implementation of `Class#new`, we
can reduce allocations for this common operation.
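As a rough sketch of the idea (defined on a single example class here rather than patching `Class` itself, which is what the linked PR does), a Ruby-level `new` built on `...` looks like this:

```ruby
class SomeObject
  attr_reader :foo

  # A Ruby-level `new`: allocate, then forward everything (including
  # kwargs) to initialize via `...`. With this patch, the forwarding
  # would not allocate an intermediate kwargs hash.
  def self.new(...)
    obj = allocate
    obj.send(:initialize, ...)  # send bypasses initialize's privacy
    obj
  end

  def initialize(foo:)
    @foo = foo
  end
end

SomeObject.new(foo: 1).foo  # => 1
```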
Co-Authored-By: John Hawthorn <[email protected]>
Co-Authored-By: Alan Wu <[email protected]>
Diffstat (limited to 'vm_insnhelper.c')
-rw-r--r-- | vm_insnhelper.c | 191 |
1 file changed, 177 insertions, 14 deletions
```diff
diff --git a/vm_insnhelper.c b/vm_insnhelper.c
index 46737145ca..a74f5096f6 100644
--- a/vm_insnhelper.c
+++ b/vm_insnhelper.c
@@ -2526,6 +2526,15 @@ vm_base_ptr(const rb_control_frame_t *cfp)
     if (cfp->iseq && VM_FRAME_RUBYFRAME_P(cfp)) {
         VALUE *bp = prev_cfp->sp + ISEQ_BODY(cfp->iseq)->local_table_size + VM_ENV_DATA_SIZE;
+
+        if (ISEQ_BODY(cfp->iseq)->param.flags.forwardable && VM_ENV_LOCAL_P(cfp->ep)) {
+            int lts = ISEQ_BODY(cfp->iseq)->local_table_size;
+            int params = ISEQ_BODY(cfp->iseq)->param.size;
+
+            CALL_INFO ci = (CALL_INFO)cfp->ep[-(VM_ENV_DATA_SIZE + (lts - params))]; // skip EP stuff, CI should be last local
+            bp += vm_ci_argc(ci);
+        }
+
         if (ISEQ_BODY(cfp->iseq)->type == ISEQ_TYPE_METHOD || VM_FRAME_BMETHOD_P(cfp)) {
             /* adjust `self' */
             bp += 1;
@@ -2594,6 +2603,7 @@ rb_simple_iseq_p(const rb_iseq_t *iseq)
            ISEQ_BODY(iseq)->param.flags.has_kw == FALSE &&
            ISEQ_BODY(iseq)->param.flags.has_kwrest == FALSE &&
            ISEQ_BODY(iseq)->param.flags.accepts_no_kwarg == FALSE &&
+           ISEQ_BODY(iseq)->param.flags.forwardable == FALSE &&
            ISEQ_BODY(iseq)->param.flags.has_block == FALSE;
 }
 
@@ -2606,6 +2616,7 @@ rb_iseq_only_optparam_p(const rb_iseq_t *iseq)
            ISEQ_BODY(iseq)->param.flags.has_kw == FALSE &&
            ISEQ_BODY(iseq)->param.flags.has_kwrest == FALSE &&
            ISEQ_BODY(iseq)->param.flags.accepts_no_kwarg == FALSE &&
+           ISEQ_BODY(iseq)->param.flags.forwardable == FALSE &&
            ISEQ_BODY(iseq)->param.flags.has_block == FALSE;
 }
 
@@ -2617,6 +2628,7 @@ rb_iseq_only_kwparam_p(const rb_iseq_t *iseq)
            ISEQ_BODY(iseq)->param.flags.has_post == FALSE &&
            ISEQ_BODY(iseq)->param.flags.has_kw == TRUE &&
            ISEQ_BODY(iseq)->param.flags.has_kwrest == FALSE &&
+           ISEQ_BODY(iseq)->param.flags.forwardable == FALSE &&
            ISEQ_BODY(iseq)->param.flags.has_block == FALSE;
 }
 
@@ -3053,7 +3065,7 @@ vm_callee_setup_arg(rb_execution_context_t *ec, struct rb_calling_info *calling,
             argument_arity_error(ec, iseq, calling->argc, lead_num, lead_num);
         }
 
-        VM_ASSERT(ci == calling->cd->ci);
+        //VM_ASSERT(ci == calling->cd->ci);
         VM_ASSERT(cc == calling->cc);
 
         if (vm_call_iseq_optimizable_p(ci, cc)) {
@@ -3140,9 +3152,125 @@ vm_callee_setup_arg(rb_execution_context_t *ec, struct rb_calling_info *calling,
         }
     }
 
+    // Called iseq is using ... param
+    // def foo(...) # <- iseq for foo will have "forwardable"
+    //
+    // We want to set the `...` local to the caller's CI
+    // foo(1, 2) # <- the ci for this should end up as `...`
+    //
+    // So hopefully the stack looks like:
+    //
+    // => 1
+    // => 2
+    // => *
+    // => **
+    // => &
+    // => ... # <- points at `foo`s CI
+    // => cref_or_me
+    // => specval
+    // => type
+    //
+    if (ISEQ_BODY(iseq)->param.flags.forwardable) {
+        if ((vm_ci_flag(ci) & VM_CALL_FORWARDING)) {
+            struct rb_forwarding_call_data * forward_cd = (struct rb_forwarding_call_data *)calling->cd;
+            if (vm_ci_argc(ci) != vm_ci_argc(forward_cd->caller_ci)) {
+                ci = vm_ci_new_runtime(
+                        vm_ci_mid(ci),
+                        vm_ci_flag(ci),
+                        vm_ci_argc(ci),
+                        vm_ci_kwarg(ci));
+            }
+            else {
+                ci = forward_cd->caller_ci;
+            }
+        }
+        // C functions calling iseqs will stack allocate a CI,
+        // so we need to convert it to heap allocated
+        if (!vm_ci_markable(ci)) {
+            ci = vm_ci_new_runtime(
+                    vm_ci_mid(ci),
+                    vm_ci_flag(ci),
+                    vm_ci_argc(ci),
+                    vm_ci_kwarg(ci));
+        }
+        argv[param_size - 1] = (VALUE)ci;
+        return 0;
+    }
+
     return setup_parameters_complex(ec, iseq, calling, ci, argv, arg_setup_method);
 }
 
+static void
+vm_adjust_stack_forwarding(const struct rb_execution_context_struct *ec, struct rb_control_frame_struct *cfp, CALL_INFO callers_info, VALUE splat)
+{
+    // This case is when the caller is using a ... parameter.
+    // For example `bar(...)`. The call info will have VM_CALL_FORWARDING
+    // In this case the caller's caller's CI will be on the stack.
+    //
+    // For example:
+    //
+    // def bar(a, b); a + b; end
+    // def foo(...); bar(...); end
+    // foo(1, 2) # <- this CI will be on the stack when we call `bar(...)`
+    //
+    // Stack layout will be:
+    //
+    // > 1
+    // > 2
+    // > CI for foo(1, 2)
+    // > cref_or_me
+    // > specval
+    // > type
+    // > receiver
+    // > CI for foo(1, 2), via `getlocal ...`
+    // > ( SP points here )
+    const VALUE * lep = VM_CF_LEP(cfp);
+
+    // We'll need to copy argc args to this SP
+    int argc = vm_ci_argc(callers_info);
+
+    const rb_iseq_t *iseq;
+
+    // If we're in an escaped environment (lambda for example), get the iseq
+    // from the captured env.
+    if (VM_ENV_FLAGS(lep, VM_ENV_FLAG_ESCAPED)) {
+        rb_env_t * env = (rb_env_t *)lep[VM_ENV_DATA_INDEX_ENV];
+        iseq = env->iseq;
+    }
+    else { // Otherwise use the lep to find the caller
+        iseq = rb_vm_search_cf_from_ep(ec, cfp, lep)->iseq;
+    }
+
+    // Our local storage is below the args we need to copy
+    int local_size = ISEQ_BODY(iseq)->local_table_size + argc;
+
+    const VALUE * from = lep - (local_size + VM_ENV_DATA_SIZE - 1); // 2 for EP values
+    VALUE * to = cfp->sp - 1; // clobber the CI
+
+    if (RTEST(splat)) {
+        to -= 1; // clobber the splat array
+        CHECK_VM_STACK_OVERFLOW0(cfp, to, RARRAY_LEN(splat));
+        MEMCPY(to, RARRAY_CONST_PTR(splat), VALUE, RARRAY_LEN(splat));
+        to += RARRAY_LEN(splat);
+    }
+
+    CHECK_VM_STACK_OVERFLOW0(cfp, to, argc);
+    MEMCPY(to, from, VALUE, argc);
+    cfp->sp = to + argc;
+
+    // Stack layout should now be:
+    //
+    // > 1
+    // > 2
+    // > CI for foo(1, 2)
+    // > cref_or_me
+    // > specval
+    // > type
+    // > receiver
+    // > 1
+    // > 2
+    // > ( SP points here )
+}
+
 static VALUE
 vm_call_iseq_setup(rb_execution_context_t *ec, rb_control_frame_t *cfp, struct rb_calling_info *calling)
 {
@@ -3150,8 +3278,15 @@ vm_call_iseq_setup(rb_execution_context_t *ec, rb_control_frame_t *cfp, struct r
     const struct rb_callcache *cc = calling->cc;
     const rb_iseq_t *iseq = def_iseq_ptr(vm_cc_cme(cc)->def);
-    const int param_size = ISEQ_BODY(iseq)->param.size;
-    const int local_size = ISEQ_BODY(iseq)->local_table_size;
+    int param_size = ISEQ_BODY(iseq)->param.size;
+    int local_size = ISEQ_BODY(iseq)->local_table_size;
+
+    // Setting up local size and param size
+    if (ISEQ_BODY(iseq)->param.flags.forwardable) {
+        local_size = local_size + vm_ci_argc(calling->cd->ci);
+        param_size = param_size + vm_ci_argc(calling->cd->ci);
+    }
+
     const int opt_pc = vm_callee_setup_arg(ec, calling, iseq, cfp->sp - calling->argc, param_size, local_size);
     return vm_call_iseq_setup_2(ec, cfp, calling, opt_pc, param_size, local_size);
 }
@@ -3661,7 +3796,7 @@ vm_call_cfunc_other(rb_execution_context_t *ec, rb_control_frame_t *reg_cfp, str
         return vm_call_cfunc_with_frame_(ec, reg_cfp, calling, argc, argv, stack_bottom);
     }
     else {
-        CC_SET_FASTPATH(calling->cc, vm_call_cfunc_with_frame, !rb_splat_or_kwargs_p(ci) && !calling->kw_splat);
+        CC_SET_FASTPATH(calling->cc, vm_call_cfunc_with_frame, !rb_splat_or_kwargs_p(ci) && !calling->kw_splat && !(vm_ci_flag(ci) & VM_CALL_FORWARDING));
 
         return vm_call_cfunc_with_frame(ec, reg_cfp, calling);
     }
@@ -4543,7 +4678,7 @@ vm_call_method_each_type(rb_execution_context_t *ec, rb_control_frame_t *cfp, st
 
             rb_check_arity(calling->argc, 1, 1);
 
-            const unsigned int aset_mask = (VM_CALL_ARGS_SPLAT | VM_CALL_KW_SPLAT | VM_CALL_KWARG);
+            const unsigned int aset_mask = (VM_CALL_ARGS_SPLAT | VM_CALL_KW_SPLAT | VM_CALL_KWARG | VM_CALL_FORWARDING);
 
             if (vm_cc_markable(cc)) {
                 vm_cc_attr_index_initialize(cc, INVALID_SHAPE_ID);
@@ -4577,7 +4712,7 @@ vm_call_method_each_type(rb_execution_context_t *ec, rb_control_frame_t *cfp, st
             CALLER_SETUP_ARG(cfp, calling, ci, 0);
             rb_check_arity(calling->argc, 0, 0);
             vm_cc_attr_index_initialize(cc, INVALID_SHAPE_ID);
-            const unsigned int ivar_mask = (VM_CALL_ARGS_SPLAT | VM_CALL_KW_SPLAT);
+            const unsigned int ivar_mask = (VM_CALL_ARGS_SPLAT | VM_CALL_KW_SPLAT | VM_CALL_FORWARDING);
             VM_CALL_METHOD_ATTR(v,
                                 vm_call_ivar(ec, cfp, calling),
                                 CC_SET_FASTPATH(cc, vm_call_ivar, !(vm_ci_flag(ci) & ivar_mask)));
@@ -4796,13 +4931,18 @@ vm_search_super_method(const rb_control_frame_t *reg_cfp, struct rb_call_data *c
 
     ID mid = me->def->original_id;
 
-    // update iseq. really? (TODO)
-    cd->ci = vm_ci_new_runtime(mid,
-                               vm_ci_flag(cd->ci),
-                               vm_ci_argc(cd->ci),
-                               vm_ci_kwarg(cd->ci));
+    if (!vm_ci_markable(cd->ci)) {
+        VM_FORCE_WRITE((const VALUE *)&cd->ci->mid, (VALUE)mid);
+    }
+    else {
+        // update iseq. really? (TODO)
+        cd->ci = vm_ci_new_runtime(mid,
+                                   vm_ci_flag(cd->ci),
+                                   vm_ci_argc(cd->ci),
+                                   vm_ci_kwarg(cd->ci));
 
-    RB_OBJ_WRITTEN(reg_cfp->iseq, Qundef, cd->ci);
+        RB_OBJ_WRITTEN(reg_cfp->iseq, Qundef, cd->ci);
+    }
 
     const struct rb_callcache *cc;
@@ -5737,8 +5877,20 @@ VALUE
 rb_vm_send(rb_execution_context_t *ec, rb_control_frame_t *reg_cfp, CALL_DATA cd, ISEQ blockiseq)
 {
     stack_check(ec);
-    VALUE bh = vm_caller_setup_arg_block(ec, GET_CFP(), cd->ci, blockiseq, false);
+
+    struct rb_forwarding_call_data adjusted_cd;
+    struct rb_callinfo adjusted_ci;
+
+    CALL_DATA _cd = cd;
+
+    VALUE bh = vm_caller_setup_args(ec, GET_CFP(), &cd, blockiseq, false, &adjusted_cd, &adjusted_ci);
     VALUE val = vm_sendish(ec, GET_CFP(), cd, bh, mexp_search_method);
+
+    if (vm_ci_flag(_cd->ci) & VM_CALL_FORWARDING) {
+        if (_cd->cc != cd->cc && vm_cc_markable(cd->cc)) {
+            RB_OBJ_WRITE(GET_ISEQ(), &_cd->cc, cd->cc);
+        }
+    }
+
     VM_EXEC(ec, val);
     return val;
 }
@@ -5757,8 +5909,19 @@ VALUE
 rb_vm_invokesuper(rb_execution_context_t *ec, rb_control_frame_t *reg_cfp, CALL_DATA cd, ISEQ blockiseq)
 {
     stack_check(ec);
-    VALUE bh = vm_caller_setup_arg_block(ec, GET_CFP(), cd->ci, blockiseq, true);
+    struct rb_forwarding_call_data adjusted_cd;
+    struct rb_callinfo adjusted_ci;
+
+    CALL_DATA _cd = cd;
+
+    VALUE bh = vm_caller_setup_args(ec, GET_CFP(), &cd, blockiseq, true, &adjusted_cd, &adjusted_ci);
     VALUE val = vm_sendish(ec, GET_CFP(), cd, bh, mexp_search_super);
+
+    if (vm_ci_flag(_cd->ci) & VM_CALL_FORWARDING) {
+        if (_cd->cc != cd->cc && vm_cc_markable(cd->cc)) {
+            RB_OBJ_WRITE(GET_ISEQ(), &_cd->cc, cd->cc);
+        }
+    }
+
     VM_EXEC(ec, val);
     return val;
 }
```