********************************************************************************
********************************************************************************
********************************************************************************

To: spin-comp@thistle.cs.washington.edu
Subject: Treating functions -- memoizing, inlining, specializing
Date: Wed, 26 Mar 1997 09:58:40 PST
From: Brian Grant

On the bus this morning, I thought a little bit about the idea of applying
laziness to recursive calls.  Here are some observations and questions.

-- Calls, if they are to be specialized, are specialized lazily.

-- We are only in danger of not terminating when inlining (at run time) or
   memoizing (calling "constant" functions at specialization time).

-- For a function to be labelled "constant", all of the functions it calls
   must also be constant.  We can either require this of the user, or
   perform interprocedural analysis (really just traversing the call graph)
   to verify that it is true.

-- What happens if we decide to memoize a constant function with all static
   arguments that then makes a call to another constant function, not all of
   whose arguments are static?  We should probably prevent this.  Perhaps a
   function shouldn't be labelled constant if it can call a function with
   dynamic arguments?

-- It appears that we need interprocedural analysis (again, traversing the
   call graph) to detect recursion.

-- How do we detect that a recursive call is guarded by a dynamic
   conditional?  Do we need interprocedural reachability analysis for
   dynamic conditionals?
--Brian

********************************************************************************
********************************************************************************
********************************************************************************

To: chambers@thistle.cs.washington.edu
cc: spin-comp@thistle.cs.washington.edu
Subject: Interprocedural eager specialization
Date: Wed, 26 Mar 1997 23:43:52 PST
From: Brian Grant

Supporting interprocedural specialization adds a number of details to the
specializer.  We may not want to include support for this in the pseudocode,
but only in the description.  Changes necessary:

-- At static compile time, new function names & signatures are generated for
   each possible division of specializable functions (I think this may
   already be mentioned)

-- The unit id must also include the function identity (implicitly or
   explicitly)

-- The unit structure must also include a list of specializable calls and
   information about their static parameters

-- ReduceAndResidualize must also return a list of calls to specialize,
   with the values of static arguments and the addresses of the call sites

-- Specialize must call itself on the first unit of each function to be
   specialized, with their static arguments passed in the context

-- The address of the first unit of the callee can be patched at the call
   site

If a call is a potential def, that is already taken into account by the
intraprocedural unit structure.  No additional changes are required for the
correct behavior (laziness where needed, etc.).  IMO, these extensions are
not that exciting and are probably best described in text.  Memoization is
hidden in ReduceAndResidualize.

Should we also mention extensions for inlining?  I think simple inlining
(except through function pointers) should be easy to talk about (not nec.
to implement efficiently) and makes things more interesting:

-- End a unit at a call to be inlined

-- Include the called unit in the list of edges/branches rather than in the
   list of calls

Does anyone see any holes in this?  I could work on adding these extensions
to my code for the specialization algorithm, if we think it's a good idea.

I just thought of some problems with function pointers.  If we generate new
function names & signatures at static compile time, how do we know which to
call when a call is made through a function pointer?  For example:

    void Foo(int x, int y, int z);
    void Blu(int x, int y, int z);

    specialize void Foo(x, y, z) on (x);
    specialize void Foo(x, y, z) on (x, y);

    void Goo()
    {
        Bar(Foo);
        Bar(Blu);
    }

    void Bar(void (*fp)())
    {
        make_static(s1, s2);
        fp(s1, s2, d3);
        fp(s1, d2, d3);
        fp(d1, d2, d3);
    }

Stubs could be created for calls to be specialized.  In cases when the
correct division could be determined, the appropriate signature would be
used and the stub would be skipped.  Otherwise, the stub would be called.
In this example, Foo_stub would be passed to Bar instead of Foo.  An
additional argument added to calls through function pointers would indicate
which arguments are static.  Stubs would check this argument to determine
which division to use in specializing the callee.  Non-stubs would ignore
the argument.  If the function pointer were additionally static, then the
call site could be backpatched.

I don't see a way, however, to permit eager specialization of calls through
static function pointers unless we can determine the callee at static
compile time.

Does this hold water?  It's more or less off the top of my head.  Hmm....

--Brian