HK1178622B - Resumable methods - Google Patents
- Publication number
- HK1178622B
- Authority
- HK
- Hong Kong
- Prior art keywords
- resumable
- compiler
- code
- control point
- source code
- Prior art date
Description
Background
A multi-core processor is a processing system that includes two or more separate processors (cores). A many-core processor is one in which the number of cores is large enough that conventional multi-processor programming techniques are no longer efficient.
Programmers developing software for many-core processors must adjust the way they write their programs. That is, in order to write efficient programs for these types of computing environments, a programmer must write asynchronous code: code that can be executed concurrently with other code without interfering with it. Writing non-blocking asynchronous code without language support is difficult because programmers must write code in continuation-passing style (CPS), for example by using callback-based code. What is implicit in traditional synchronous programming becomes explicit in CPS programming. For example, in conventional coding, when a function is called, it returns a value. In CPS, the function takes an explicit continuation argument: a function that receives the result of the computation performed within the original function. Similarly, when a subroutine is called within a CPS function, the calling function must provide a procedure to be called with the subroutine's return value.
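The patent's examples are written in a C#-like language; the contrast between direct style and CPS can be sketched in Python (a minimal illustration; all function names here are hypothetical):

```python
# Direct (synchronous) style: the result is returned implicitly to the caller.
def add_direct(a, b):
    return a + b

# Continuation-passing style (CPS): the caller supplies an explicit
# continuation that receives the result instead of a return value.
def add_cps(a, b, cont):
    cont(a + b)

# A subroutine call inside a CPS function must itself supply a procedure
# to be called with the subroutine's "return value".
def square_of_sum_cps(a, b, cont):
    add_cps(a, b, lambda s: cont(s * s))

results = []
square_of_sum_cps(2, 3, results.append)  # results becomes [25]
```

Note how the implicit "what happens next" of the direct version becomes the explicit `cont` parameter threaded through every call.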
Some languages, such as C#, do provide some form of compiler-supported continuation-passing rewriting by means of iterator constructs. This type of language support is not particularly optimized for recursion or for the other kinds of coding techniques required for asynchronous programming.
SUMMARY
While built-in language support for iterators may help solve some of the problems associated with lazily evaluated collections, and while built-in language support for asynchronous programming is largely absent in many languages, the subject matter disclosed herein is directed to a unified approach that abstracts the common characteristics of these domains and provides a universal external mechanism that can solve a number of problems associated with asynchronous programming, lazy generation of collections by iterators, writing symmetric co-routines, and so forth.
An API (program module) is provided which is external to the programming language but provides functionality that can be inserted into a language compiler. The provided API supplies the functionality associated with asynchronous programming, iterators, or writing symmetric co-routines using a generic pattern-based scheme. Several kinds of resumable methods are provided in the API that can be applied to method bodies written in conventional program code. Syntactically distinguishable control points in a method body written in conventional program code trigger transformations of the code by a compiler using the external APIs. The transformed code enables the pausing and resuming of the code between control points. That is, source code included in a body of code (e.g., a method) with control points inside is transformed such that the code within the method can be executed in discrete portions, each portion starting and ending at a control point in the transformed code.
Regardless of where a control point is located in the code, the code may be paused there, either directly or as part of a paused recursive call, and resumed from the point at which it was paused. Different kinds of resumable methods are distinguished by how and when the method resumes after a pause, and by the kinds of arguments and return values that flow back and forth when the code pauses, returns, and terminates. A pause control point may optionally return a value to the caller and may receive a value from the resume module, using the yield expression. A recursive-call control point may recursively apply a compatible resumable method, pausing as determined by that method, using the yield or yield foreach expression. A return control point signals the termination of the resumable method, with or without a result value.
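Python generators happen to expose close analogues of all three kinds of control points, which makes for a compact sketch of the distinctions above (an illustrative analogy, not the patent's mechanism): `yield` pauses and can both emit and receive a value, `yield from` plays the role of the recursive-call control point, and `return` terminates with an optional result.

```python
def inner():
    # Pause control point: emit a value, then receive one on resume.
    received = yield "paused in inner"
    # Return control point: terminate with a result value.
    return received * 2

def outer():
    yield "paused in outer"          # pause control point
    result = yield from inner()      # recursive-call control point
    return result                    # return control point, with a result

gen = outer()
next(gen)        # -> "paused in outer"
next(gen)        # -> "paused in inner" (paused inside the recursive call)
try:
    gen.send(21) # resume the innermost pause with a value
except StopIteration as stop:
    pass         # stop.value == 42: inner returned 21 * 2, outer passed it on
```

While `inner` is paused, `outer` is paused as well, exactly as described for recursive control points.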
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Drawings
In the drawings:
FIG. 1 illustrates an example of a system 100 that provides a mechanism for resumable methods that is external to a programming language, in accordance with aspects of the subject matter disclosed herein;
FIGS. 2a-2d are examples of driver classes according to aspects of the subject matter disclosed herein;
FIG. 2e is an example of source code in accordance with aspects of the subject matter disclosed herein;
FIG. 2f is an example of transformed code in accordance with aspects of the subject matter disclosed herein;
FIG. 2g is an example of a rewritten method in accordance with aspects of the subject matter disclosed herein;
FIGS. 2h-2j are examples of resumable methods implementing an iterator, according to aspects of the subject matter disclosed herein;
FIG. 2k is a flow diagram of an example of a method 201 for implementing a resumable method using an external API in accordance with aspects of the subject matter disclosed herein;
FIG. 3 is a block diagram illustrating an example of a computing environment in which aspects of the subject matter disclosed herein may be implemented; and
FIG. 4 is a block diagram of an example of an integrated development environment in accordance with aspects of the subject matter disclosed herein.
Detailed Description
Overview
The subject matter disclosed herein describes a contract between a language compiler feature and pattern-based Application Programming Interface (API) plug-ins inserted into the compiler. The API plug-in adapts this feature to asynchronous programming, iterators, symmetric co-routines, and so on, providing the compiler with the details of what to do for each particular kind of resumable method. This feature restructures the output code (e.g., intermediate code), removing the one-to-one correspondence between the source code and the output code. Source code comprising one or more control points is transformed such that the output code can be executed in discrete portions, each portion starting and ending at a control point in the transformed output code. For example, an end user may write traditional synchronous code until reaching a point at which the user wants the code to be able to pause to wait for something without having to stop all processing. At that point, the end user can insert control points anywhere in the code, including at points deeply nested within the control structure of the language. Recognition of a control point by the compiler triggers a compiler transformation using the API identified by the signature of the control-point expression in the source code. The called API determines the validity of the language syntax in the source code and can expose asynchronous-programming-specific versions of interactions with the language features of the compiler if a background compiler is used.
Extending a compiler to implement a resumable method using an external API
FIG. 1 illustrates an example of a system 100 that provides a mechanism external to a language compiler for implementing resumable methods in accordance with aspects of the subject matter disclosed herein. All or some portions of system 100 may reside on one or more computers, such as the computers described below with reference to FIG. 3. All or some portions of system 100 may reside on one or more software development computers (e.g., computer 102), such as the computers described below with reference to FIG. 4. The system 100, or portions thereof, may comprise a portion of an integrated development environment (e.g., IDE 104), such as those described and illustrated below with reference to FIG. 4. Alternatively, system 100 or portions thereof may be provided as a stand-alone system or as a plug-in or add-in.
The system 100 may include one or more of the following: a processor, such as processor 142, a memory 144, and a library 106 of APIs or modules that provide a mechanism to implement resumable methods. Other components known in the art may also be included but are not shown here. It can be appreciated that one or more modules of library 106 can be loaded into memory 144 to cause one or more processors, such as processor 142, to perform the actions attributed to an API that provides a mechanism, external to the programming language and insertable into a language compiler, for implementing resumable methods.
The system 100 may include one or more of the following: a compiler 114, such as a background compiler, a parallel compiler, or an incremental compiler; a parser, such as a background parser, a parallel parser, or an incremental parser; a plug-in, preprocessor, or add-in; or an extension to an IDE, parser, compiler, or preprocessor. The APIs described herein may be attached to, incorporated into, or associated with any such compiler, parser, plug-in, preprocessor, add-in, or extension. Compiler 114 may include one or more modules that interact with a specialized API.
Specific kinds of specialized resumable methods, such as but not limited to asynchronous or iterator methods, are provided in libraries that are external to the language compiler. An application programmer may cause these methods to be applied to a method body containing conventional programming code by adding one or more syntactically distinguishable control points to that code. Input source code comprising methods 108 that include such control points 112 may be transformed by compiler 114 using an API from library 106 to generate an expanded method, such as transformed method 110, that executes in discrete portions between control points, as will be described more fully below. A control point is a point at which the body of code in the method may pause, either directly or because the method is part of a paused recursive method call. The different kinds of resumable methods differ in how and when the method resumes after a pause, and in which arguments and return values flow back and forth on pause, return, and termination. The yield expression marks a pause control point; it may optionally return a value to the caller and receive a value from the resumer or resume module. The yield or yield foreach expression marks a recursive control point; it may recursively apply a compatible resumable method that pauses as determined by that method. The return statement marks a return control point; it signals the termination of the resumable method with or without a final result value.
An active resumable method may be represented by a frame object of some kind derived from a resumable-method class such as the Resumable class. The object represents the stack frame of the method when the method is suspended. When a method is suspended, the stack frame representing the suspended method may be copied to another data structure and removed from the machine stack; thus, the suspended method may not physically reside on the stack. Alternatively, the entire state of the stack may be saved and maintained on the heap regardless of whether the method is active or suspended.
In conventional, non-resumable control flow, the machine uses a single thread of execution to execute a contiguous stack of methods currently waiting to be executed. Each method on the stack waits for another method to return to it. Stacks are typically not directly accessible to compilers. In some environments where resumable methods are implemented, a resumable method pauses when a particular statement is encountered. At that point, according to aspects of the subject matter disclosed herein, the stack frame of the paused method may be saved and placed back onto the stack when the method resumes. Information about which resumable method called which resumable method is stored in a separate data structure, so that when a method returns, the return can be directed to the correct recipient specified by the responsible driver class. That is, each instance of a driver class (i.e., each specialized object) represents a method call in the saved representation of the machine stack. An object instantiated from a particular driver class represents a particular resumable method call in execution. The set of specialized objects represents the control chain previously maintained by the stack. An executing method call is any method call that is currently running or has been paused and has not yet ended. When a resumable method returns, this structure provides the result to the method that called it, and the calling method resumes.
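The bookkeeping described above, with frame objects linked to their callers replacing the machine stack and returns routed to the correct recipient, can be sketched as a small trampoline in Python (names and representation are hypothetical; paused calls are modeled as generators that yield their callees):

```python
class Frame:
    """Saved representation of one executing resumable method call."""
    def __init__(self, body, caller=None):
        self.body = body      # the (possibly paused) method body
        self.caller = caller  # the frame that invoked this one

def run(root):
    # The chain of Frame objects stands in for the machine stack.
    current, result = Frame(root), None
    while current is not None:
        try:
            callee = current.body.send(result)        # resume this frame
            current, result = Frame(callee, current), None  # push callee
        except StopIteration as stop:                 # frame terminated
            result, current = stop.value, current.caller    # route result
    return result

def double(n):
    return n * 2
    yield  # unreachable; makes this function a resumable body

def parent():
    a = yield double(3)   # recursive call: pauses parent until double ends
    b = yield double(4)
    return a + b

run(parent())  # evaluates to 14
```

Each `Frame` here plays the role of a driver-class instance: it records who called whom so a result can be delivered to the right caller when a body terminates.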
According to some aspects of the subject matter disclosed herein, the compiler transforms the input source code, generating code that causes control to be transferred to the resumed method. When a resumable method resumes, an Invoke method or other calling method on the frame object may be called, which puts a real activation record for the resumed method back onto the machine stack. If a suspended method (a first resumable method) is recursively waiting for another suspended resumable method (a second resumable method), the frame object of the second method may similarly be resumed, so that the machine stack represents the actual calling order of the resumable methods. Thus, exception propagation, debugging, and the like may naturally be built on corresponding built-in mechanisms, such as, but not limited to, CLR mechanisms for exception propagation and debugging. Other mechanisms for stack processing are also possible.
Different kinds of resumable methods may be defined by driver classes derived from an abstract resumable base class. FIG. 2a shows an example of such an abstract base class, Resumable 200. Specific resumable methods may be generated by the compiler as classes derived from these driver classes, implementing the pause logic as a state machine.
The state-machine rewriting of a resumable method can be considered to occur in two phases. The first phase places the method body into an override of the Invoke method in a compiler-generated class derived from the driver, where any occurrence of a control point is rewritten to one of:
a) a call to a "before" method in the driver class, to which any arguments of the control point are passed;
b) a specific dedicated command marked for further rewriting;
c) a call to an "after" method in the driver class, which passes any resulting values back to the context in which the control point appears.
In the second phase, the compiler-generated class and its Invoke method are augmented with state-machine logic, and the dedicated commands are rewritten into code for state transitions and suspension.
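A hand-written Python sketch of what the two phases might produce for a body equivalent to "yield 10; return 42" follows (class and member names are hypothetical; in the patent this code is emitted by the compiler):

```python
class MFrameSketch:
    """Phase 1 placed the body into invoke() with Before/After calls;
    phase 2 turned the dedicated pause command into a state transition."""
    def __init__(self):
        self.state = 0        # state-machine position between control points
        self.current = None   # value exposed at the pause control point
        self.done = False
        self.result = None

    def before_yield(self, value):   # driver hook: runs before pausing
        self.current = value

    def after_yield(self):           # driver hook: runs after resuming
        return self.current

    def invoke(self):
        if self.state == 0:
            self.before_yield(10)    # rewritten 'yield 10', "before" call
            self.state = 1           # the pause command became a transition...
            return                   # ...and a suspension
        if self.state == 1:
            self.after_yield()       # rewritten 'yield 10', "after" call
            self.result = 42         # rewritten 'return 42' ("before" return)
            self.done = True

frame = MFrameSketch()
frame.invoke()   # runs up to the pause; frame.current == 10
frame.invoke()   # resumes and terminates; frame.result == 42
```

Each call to `invoke` executes exactly one discrete portion of the body, bounded by control points, as the rewriting scheme prescribes.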
Examples of how language features and library APIs may interact using driver classes are described below. According to some aspects of the subject matter disclosed herein, the driver classes implement the specialized portions that are external to the programming language. A pause control point may be used to pause the execution of a method. Suppose a particular method expresses a body of work to be performed progressively over time. In order to make the method resumable, a pause control point may be inserted in the method body. For example, if elements of a collection are being generated, a pause control point may be inserted at the point where the computation generating the next element has completed. At this point, the element may be returned by a yield. When the next element of the collection is requested, the method resumes at the point from which the yield return was made.
A recursive control point suspends the method making the recursive call whenever the called method is suspended. For example, assuming that a first iterator may yield 3 elements (one at a time) and a second iterator may yield 2 elements (one at a time), the two iterators may be assembled into an iterator capable of sequentially yielding all 5 elements, by creating resumable methods that call other resumable methods. The assembled iterator method may first call the first iterator and have it yield its three elements one at a time. When the first iterator has yielded all three elements, the assembled method may call the second iterator, which yields the other two elements. When the first iterator is paused, the assembled iterator method is paused; similarly, when the second iterator is paused, the assembled iterator method is paused. The return control point signals the termination of the resumable method and is used to define what to do when the work is complete. Although several pauses and several resumes may occur along the way, the work eventually completes. The return may simply indicate that the work is complete or may include the result of the completed work.
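The 3-plus-2 assembly described above maps directly onto Python's `yield from`, which plays the role of the recursive yield foreach control point (an analogy, not the patent's implementation):

```python
def first():      # a resumable iterator that yields 3 elements, one at a time
    yield 1
    yield 2
    yield 3

def second():     # a resumable iterator that yields 2 elements
    yield 4
    yield 5

def assembled():
    # Whenever the inner iterator pauses, the assembled iterator pauses too.
    yield from first()
    yield from second()

list(assembled())  # [1, 2, 3, 4, 5]
```

The client sees a single iterator of five elements; the pausing of the inner iterators is completely transparent.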
Asynchronous methods may be used whenever latency is long, such as during input-output operations or communication over a network. For example, assume that a user wants to download some information from a web page, perform some computation on the downloaded information, and send the result of the computation to another web page. When the operation completes, a Boolean result is returned indicating success or failure. To perform these operations synchronously, a method is typically used that calls two helper methods. One helper method navigates to the web page and extracts the desired content; after the content is received, the computation is performed. The second helper method is invoked to send the computation result to the other web page. When complete, the method returns a Boolean result. The synchronous process in this case leads to waiting periods, especially while a connection to the web page is made and the downloaded content is retrieved, and again while the computation result is sent to the second web page. Depending on the connection rate, the network traffic, and the size of the downloaded information, the latency may be significant.
Performing this series of actions using an asynchronous approach frees up operating-system thread resources, which may result in a better user experience. For example, in many GUI-based applications, only a single thread services user-input events, so that failure to release that thread may result in a very poor user experience. However, implementing an asynchronous approach without language support is difficult because the normal control structures cannot be used. The programmer must manually write and debug nested callbacks, a difficult and error-prone task, resulting in code that is also difficult to read and maintain. Furthermore, the complexity of the code increases rapidly as the number of manually transformed control points within a method grows. According to aspects of the subject matter disclosed herein, control points can be placed within normal control structures at any nesting depth. Recursive calls may be made within asynchronous methods supported by an asynchronous driver class. The callbacks are generated by the compiler; they are methods, but they are transparent to the developer. In the web-page example, the asynchronous method first yields to the method that reads from the web, the computation is performed, then a recursive yield is made to the method that sends the computation result to the second web page, and finally the Boolean result is returned. Thus, the flow of control is the same as in the synchronous case, but the thread is never blocked. An iterator works similarly, except that the client triggers generation of the next element of the collection, rather than a callback from the web page triggering the next step. Different contexts drive the process differently, as described by the features of the driver classes.
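The download-compute-upload scenario can be sketched with Python's asyncio, whose `await` plays the role of the pause control point (the URLs and helper names are hypothetical, and `asyncio.sleep(0)` stands in for real network latency):

```python
import asyncio

async def download(url):
    await asyncio.sleep(0)           # pause here while "waiting" on the network
    return "content of " + url

async def upload(url, data):
    await asyncio.sleep(0)           # pause again while "sending"
    return True                      # success

async def process():
    content = await download("http://example.test/source")
    result = content.upper()         # the computation between the two waits
    return await upload("http://example.test/sink", result)  # Boolean result

# The servicing thread is free at every pause, yet the control flow reads
# exactly like the synchronous version.
asyncio.run(process())  # evaluates to True
```

No hand-written callbacks appear anywhere; the pauses and resumptions are generated from the straight-line source, which is the point of the patent's approach.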
Thus, the different kinds of control points described above enable resumable methods to be composed much like conventional methods, without the developer having to deal with the complexity of the resumable machinery. Resumable methods can be composed by calling other asynchronous or resumable methods, or by a method calling itself recursively. Although in general an iterator may call other iterators and an asynchronous method may call other asynchronous methods, according to aspects of the subject matter described herein, the driver class specifies which resumable methods may be called from a particular resumable method, so that a resumable method is not necessarily limited to calling only resumable methods of the same kind.
As described above, the specialized objects representing the execution control chain (e.g., the stack) may be generated by a compiler from specialized abstract base classes. A specialized abstract base class includes specific behaviors and methods. For example, the abstract class Async<T> (shown in FIG. 2d) may be used by a compiler to make derived classes that include a particular method body. The derived Async<T> class is instantiated into a frame object. A portion of the code in the frame object may be contributed by the dedicated base class from the library, and a portion of it may be contributed by the compiler, which places in it the method-specific logic. In a transformed or augmented version of a method, there will be calls to specialized methods inherited from the base class library. An alternative to the above-described scheme is to create two objects, one created by the compiler containing the core frame object and one provided by the library, where the objects communicate with each other through their methods.
According to aspects of the subject matter described herein, at each control point, compiler-generated code may invoke a method on the driver class before the method pauses and invoke another method on the driver class after the method resumes, giving the driver class an opportunity to perform its specialized behavior. For example, for a yield control point, there are a before-yield method and an after-yield method that the compiler-generated code calls on the base class. For a recursive call, there are a before-yield-return method and an after-yield-return method, and for a return there is a before-return method but no after-return method, since the method terminates when it returns. The before and after method calls enable the driver class to specify the behavior to be executed. If a driver class does not specify a before-yield or after-yield method, the compiler cannot generate calls to them, so using such a control point is illegal in resumable methods managed by that driver class. Thus, the driver class may specify which of these control points are available by specifying, or failing to specify, the before and after methods.
In addition to determining whether a particular control point is available, the driver class may determine the scenarios in which the control point is available. Since the interaction is pattern-based and the pattern determines which method the compiler-generated code calls, the driver class can specify which methods can be recursively called by specifying which arguments the before and after methods take. There may be multiple overloads of these methods, so that, for example, an asynchronous method may be allowed to call multiple types of asynchronous methods, which may enable interoperation between different models. Similarly, the specification of argument types can be used to allow invocation of methods represented by task classes currently in the library.
Finally, the driver class may determine what goes into the body of the method, that is, what the driver-specific behavior is before a yield. For example, some data may be transferred between frame classes, or processing may be performed to prepare the data representation for resumption. For an iterator, the before-yield-return method call may specify how the next element value of the collection is yielded and how that value is transmitted to the client. The before and after methods are open-ended and can be used to implement iterators, asynchronous methods, and variations thereof. For example, each kind of resumable method may be represented by a parameterized driver class that may be created by an end user for any imperative programming language.
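A driver class with before/after hooks, and a frame class playing the role of the compiler-generated derivative, might look as follows in Python (all names are hypothetical; the real driver classes of FIGS. 2a-2d are C#-like):

```python
class IteratorDriver:
    """Driver-class sketch: the hooks below are the only points at which
    the 'compiler-generated' code talks to the driver."""
    def __init__(self):
        self.current = None   # element exposed to the client
        self.done = False

    def before_yield_return(self, value):
        self.current = value  # transmit the next element to the client

    def after_yield_return(self):
        pass                  # nothing flows back on resume for an iterator

    def before_return(self):
        self.done = True      # termination control point

class OneShotFrame(IteratorDriver):
    """Sketch of a rewrite of 'yield return 1; return' as a state machine."""
    def __init__(self):
        super().__init__()
        self.state = 0

    def move_next(self):
        if self.state == 0:
            self.before_yield_return(1)
            self.state = 1
            return True       # paused; an element is available in .current
        self.after_yield_return()
        self.before_return()
        return False          # the iterator has terminated
```

A client loops calling `move_next()` and reading `current`, mirroring the MoveNext/Current protocol discussed with FIGS. 2h-2j; only the hook bodies would change to adapt the same frame shape to a different kind of resumable method.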
FIGS. 2a-2d illustrate non-limiting examples of driver classes that implement the aspects described above. It can be appreciated that while the examples provided use a particular syntactic form that identifies methods by name, other syntactic forms, not limited to naming methods, can be used. FIG. 2a illustrates an example of a system-wide base class, class Resumable 200, from which the service-specific driver classes for resumable methods may be derived. FIG. 2b shows an example of an abstract class AsyncResumable 210 that is derived from class Resumable 200 and is asynchrony-specific. Two variants of the abstract class AsyncResumable 210 are shown in FIGS. 2c and 2d: a class Async 220 derived from the abstract class AsyncResumable 210, and a class Async<T> 230 also derived from the abstract class AsyncResumable 210. Class Async 220 is used to create objects that represent asynchronous operations with no result. Class Async<T> 230 is a generic class used to create objects representing asynchronous operations with results of type T.
The body of AsyncResumable 210 includes BeforeYield method 212 and AfterYield method 214. The signatures of the before methods (e.g., BeforeYield method 212) and after methods (e.g., AfterYield method 214) describe the kinds of methods that can be recursively invoked and ensure that return values are correctly produced and consumed. It is illegal to call a kind of before method when that kind is missing from the class definition (e.g., the absence of a BeforeYieldReturn method in the definitions of class Async 220 and class Async<T> means that it is illegal to use yield return in these asynchronous methods). The presence of the BeforeYield and AfterYield methods in the class definition of AsyncResumable 210 indicates that yielding from an AsyncResumable method is possible. Due to the lack of BeforeYieldReturn and AfterYieldReturn methods, the yield return control point cannot be used in an AsyncResumable method, although other control points remain available.
The definitions of classes Async 220 and Async<T> 230 include return methods. The BeforeReturn method 222 of class Async 220 indicates that a return can only be made without a result value, so it takes no argument. The BeforeReturn method 232 of class Async<T> 230 indicates that the method returns a value of type T, so the BeforeReturn method 232 takes an argument of type T. Thus, if "return 7" is included in the body of a resumable method that returns Async<int>, a compiler-generated call to BeforeReturn with argument 7 will resolve to BeforeReturn method 232 without problems. However, if a method attempts to return a string from an Async<int> method, the compiler will find that the signatures do not match and will report a compile-time error.
FIG. 2e illustrates an example of a code segment that an end user may write, including code for an asynchronous resumable method 240. The method can be determined to be asynchronous from its signature, static Async<int> M() (statement 242), which returns Async<int> (one of the driver classes described above). Within the resumable method 240 is normal control-flow code that writes "before" to the console and then yields x (the int x = yield Wait(10); statement 244). The control point in this line of code is the yield expression, which triggers the transformation of method 240 into method 250 shown in FIG. 2f. Method 250 is rewritten to generate an instance of the MFrame class, as shown in method 260 of FIG. 2g. The compiler generates a class called MFrame derived from Async<int> and augments the occurrences of yield and return as shown in FIG. 2f. It can be appreciated that the augmented method 250 overrides the Invoke method (public override void Invoke() {, statement 251) and now contains both the user-written code and additional code generated by the compiler. The Console.WriteLine("before") and Console.WriteLine("after") statements remain, but between them, the int x = yield Wait(10); statement 244 of method 240 has been expanded into code that calls Wait(10) (statement 252) and then BeforeYield (statement 253).
As a result of invoking BeforeYield, execution of the method is paused, which is indicated by the line CALL(_tmp2); (statement 254). The occurrences of "CALL" and "return" in the body of the Invoke method signal to the compiler that state-machine control code must be inserted at these points. The calls to the before and after methods are generated from the syntax of the original source code. The compiler may use method-binding rules and techniques to check each call against the methods provided in Async<T>, raising an error if a method is not used correctly; thus, recursive calls are strongly typed. When the method resumes, the method AfterYield is called (in the line var _tmp3 = AfterYield(_tmp2); statement 255). The result of the invocation of AfterYield may be placed in the variable _tmp3. The contents of _tmp3 are assigned to the variable x by the user-written code x = _tmp3; (statement 256), "after" is written to the console, x is returned by calling BeforeReturn(x) (statement 258), and the return statement 259 is executed.
FIGS. 2h-2j show examples of resumable methods implementing an iterator. The driver class 270 shown in FIG. 2h declares an abstract class called Iterator, which is derived from the abstract class Resumable and implements the IEnumerator interface. The IEnumerator interface has a method called MoveNext that moves to the next element in the collection. The IEnumerator interface also has a Current property, in which the value of the current element in the collection (the element to which MoveNext moved) is stored. Whenever the MoveNext method is called on an Iterator, the next portion of the method, delimited by control points, may be executed. The driver class 270 provides "before" and "after" methods for YieldReturn, YieldForeach, and YieldBreak (return) that differ from the similar methods of the Async classes. The number of overloads for YieldForeach in this example demonstrates the robustness of the approach. For example, one overloaded YieldForeach gets an enumerator from an enumerable, transforms it into an iterator, and recursively calls that iterator. The "PAUSE" command may be used to rewrite the YieldReturn control point.
FIG. 2i shows a simple iterator, IteratorF 280. IteratorF 280 includes a control point, namely a yield return statement ("yield return 1;", statement 282), which directly produces an element of the collection and then pauses. IteratorF 280 also includes a control point "yield foreach F();" (statement 284), which recursively invokes IteratorF until F runs out of elements in the collection. Thus, multiple values can be yielded, with a pause between yields. The statement "yield return 2;" yields a further value. The "yield break;" statement executes when all elements of the collection have been yielded.
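A hedged Python analogue of IteratorF: Python's `yield` plays the role of `yield return` and `yield from` plays the role of the recursive `yield foreach`. The depth parameter `n` is our addition so that the recursion terminates:

```python
def iterator_f(n):
    """Illustrative Python analogue of IteratorF (the patent's example
    is C#); n bounds the recursion depth."""
    yield 1                           # yield return 1;
    if n > 0:
        yield from iterator_f(n - 1)  # yield foreach F(); (recursive)
    yield 2                           # yield return 2;
    # Falling off the end plays the role of 'yield break;'.


result = list(iterator_f(2))
```

Each recursion level contributes a 1 before descending and a 2 after returning, so the pauses interleave exactly as the recursive `yield foreach` description above suggests.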
IteratorF 280 may be translated by a compiler as shown in FIG. 2j. As can be appreciated from FIG. 2j, each control point in IteratorF() may be converted into calls to "before" and "after" methods (e.g., the statement "yield return 1;" 282 is transformed into the call to BeforeYieldReturn(1) shown in statement 292, the call to PAUSE shown in statement 294, and the call to AfterYieldReturn shown in statement 296). Similarly, the "yield foreach F();" statement 284 is translated into the BeforeYieldForeach, CALL, and AfterYieldForeach statements shown as the single statement 298 in FIG. 2j. When IteratorF 280 is executed, a yield return with a value is issued (statement 292), the PAUSE statement 294 causes the method to suspend execution, and when the method resumes, it calls the AfterYieldReturn method as shown in statement 296. As shown in statement 272 of FIG. 2h, BeforeYieldReturn, inherited from the Iterator class, sets the current value to the object o so that when the call to MoveNext completes, the current value will be returned.
The statements shown as the single statement 298 operate similarly on the YieldForeach control point of statement 284 for the recursive invocation, and three overloads are provided, namely, overload 274, overload 276, and overload 278. These three different overloads allow the different YieldForeach methods to take different arguments and to yield through different representations of the collection. The first overload, overload 274, recursively invokes another iterator. For the second overload, overload 276, the enumerable represents the collection of objects and has a method called GetEnumerator. When GetEnumerator is called, a new element of the collection is obtained. When the end of the collection is reached, no further elements can be obtained. The last overload, overload 278, enables a new instance of the collection to be obtained.
The final transformation of the calling method into a state machine is similar to the transformations of iterators and asynchronous methods described herein. Each PAUSE and CALL is assigned a state. Logic is added at the beginning of the method and at the beginning of each try block to branch to the point in the code associated with the current state. A PAUSE point pauses or suspends execution of the method, and advances the state to the point immediately after the PAUSE before returning. A subsequent resumption re-invokes the calling method, which branches to the point immediately after the PAUSE command. A CALL point does not suspend execution. Instead, it starts executing the called recoverable method. The called recoverable method may itself include a control point such as PAUSE, so that the called recoverable method can be paused or suspended, causing the entire stack (including the calling method) to be paused.
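The state-machine rewrite described above can be sketched by hand: each PAUSE point is assigned a state number, and dispatch logic at the top of the resumption method branches to the code associated with the current state (a Python sketch with illustrative names, not the patent's generated code):

```python
class ResumableMethod:
    """Hand-written sketch of the state-machine rewrite of a method
    containing two PAUSE points."""

    def __init__(self):
        self.state = 0
        self.current = None

    def move_next(self):
        if self.state == 0:           # initial entry
            self.current = "first"    # code before the first PAUSE
            self.state = 1            # advance state past the PAUSE...
            return True               # ...and suspend
        if self.state == 1:           # resumption branches here
            self.current = "second"   # code between the two PAUSEs
            self.state = 2
            return True
        return False                  # method has run to completion


m = ResumableMethod()
steps = []
while m.move_next():
    steps.append(m.current)
```

Each call to `move_next` runs exactly one segment between PAUSE points, which is how the driver class executes a recoverable method in discrete portions.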
By combining an iterator and an asynchronous method, in which the IEnumerator MoveNext method is asynchronous, a user-defined driver class such as IAsyncEnumerator can be created. Such a combined method may recursively invoke both asynchronous methods and synchronous iterators, implementing various additional control points. Symmetric co-routines can be implemented by creating cooperating methods that pass control to each other (rather than returning to each other) while preserving their execution state between resumptions. Unlike some embodiments of asynchronous methods, the cooperating methods may not increase the depth of the call stack. Instead, the leaf frame of the call stack may be swapped out.
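Symmetric co-routine control transfer can be sketched with Python generators and a small trampoline: control is handed back and forth between peers rather than through nested calls, so the call stack does not deepen (an illustrative sketch, not the patent's implementation):

```python
def ping(n):
    # One of two cooperating routines; each yield suspends it while
    # preserving its execution state.
    for i in range(n):
        yield ("ping", i)


def pong(n):
    for i in range(n):
        yield ("pong", i)


def trampoline(a, b):
    """Alternate control between two coroutines without nesting calls,
    so the leaf frame is swapped rather than the stack deepened."""
    trace = []
    current, other = a, b
    while True:
        try:
            trace.append(next(current))
        except StopIteration:
            break
        current, other = other, current  # hand control to the peer
    return trace


trace = trampoline(ping(2), pong(2))
```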
FIG. 2k is an example of a method 201 for extending a compiler using an external API to implement a recoverable method. At 203, source code may be received by a compiler. The compiler may be a compiler such as the compiler described with reference to FIG. 1. At 205, in response to identifying a syntactically distinguishable control point (such as the pause, recursive call, and return control points described above), the compiler may call an external API determined by the signature of the control point expression. As described above, the called API may be a dedicated method and may reside in a library external to the compiler. A specialized object representing a control chain (e.g., a stack) can be generated by the compiler from a specialized abstract base class in the library. Alternatively, instead of invoking the abstract base class, any code that conforms to the signature of the control point expression may be invoked, including but not limited to: non-abstract base classes, interfaces, static classes with extension methods, and the like. The specialized abstract base class or other code may include specialized behaviors as well as methods such as asynchronous and iterator methods. For example, the abstract class Async&lt;T&gt; (shown in FIG. 2b) may be used by the compiler to create derived classes that include the specific method bodies of the Async&lt;T&gt; class.
The derived recoverable class may be instantiated into an object such as, for example, a frame object. A portion of the code in the object may be contributed by the dedicated base class from the library. A portion of the code in the object may be contributed by the compiler, which inserts logic specific to the method. At 209, the received source code may be transformed into augmented code such that at each control point in the received source code, the compiler-generated code invokes a method on the driver class before the method is paused and invokes a method on the driver class after the method is resumed, giving the driver class an opportunity to perform its specialized behavior. For example, for a yield, the compiler-generated code may call a "before yield" method and an "after yield" method on the base class. For a yield return, the compiler-generated code may invoke a "before yield return" method and an "after yield return" method. Similarly, for a return, the compiler-generated code may call a "before return" method but no "after return" method, because the method is complete when it returns. The before and after method calls enable the driver class to specify the behavior to be performed, and thus which kinds of control points are legal in a particular recoverable method and in which situations the control points are available. Since the interaction is pattern-based and the pattern determines which methods the compiler-generated code calls, the driver class can specify which methods can be recursively called by specifying which arguments its methods take. There may be multiple overloads of a method so that, for example, an asynchronous method may be allowed to call multiple types of asynchronous methods, which may enable interoperation between different models. The driver class may determine the particular behavior associated with the object. At 211, executable code may be created.
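The pattern-based contract between compiler-generated code and the driver class can be sketched as follows: the generated code binds before/after methods by name, so a driver class makes a control point legal simply by providing the matching method (a Python sketch; all names are hypothetical, not the patent's API):

```python
class IteratorDriver:
    """Driver class sketch: a control point is legal only if the driver
    provides the matching before_/after_ methods."""

    def before_yield_return(self, value):
        self.current = value  # expose the value for the caller to read

    def after_yield_return(self):
        return None
    # No before_await method is defined, so an 'await'-style control
    # point would fail to bind against this driver.


def bind_control_point(driver, kind, *args):
    # Stand-in for the compiler's method-binding check against the
    # driver class; binding failure models a compile-time error.
    method = getattr(driver, f"before_{kind}", None)
    if method is None:
        raise TypeError(f"control point '{kind}' is not legal here")
    return method(*args)


d = IteratorDriver()
bind_control_point(d, "yield_return", 10)  # binds: a legal control point
```

Overload resolution on the real driver class works analogously: the set of methods (and the argument types they accept) determines which control points, and which recursive calls, the compiler will accept.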
Examples of suitable computing environments
In order to provide context for various aspects of the subject matter disclosed herein, FIG. 3 and the following discussion are intended to provide a brief, general description of a suitable computing environment 510 in which embodiments may be implemented. While the subject matter disclosed herein is described in the general context of computer-executable instructions, such as program modules, executed by one or more computers or other computing devices, those skilled in the art will recognize that portions of the subject matter disclosed herein also can be implemented in combination with other program modules and/or combinations of hardware and software. Generally, program modules include routines, programs, objects, physical artifacts, data structures, etc. that perform particular tasks or implement particular data types. Typically, the functionality of the program modules may be combined or distributed as desired in various embodiments. The computing environment 510 is only one example of a suitable operating environment and is not intended to suggest any limitation as to the scope of use or functionality of the subject matter disclosed herein.
Referring to FIG. 3, a computing device for efficient recovery of co-routines on a linear stack in the form of a computer 512 is depicted. The computer 512 may include a processing unit 514, a system memory 516, and a system bus 518. The processing unit 514 can be any of various available processors. Dual microprocessors and other multiprocessor architectures also can be employed as the processing unit 514. The system memory 516 may include volatile memory 520 and non-volatile memory 522. Non-volatile memory 522 may include Read Only Memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), or flash memory. Volatile memory 520 may include Random Access Memory (RAM), which may act as external cache memory. The system bus 518 couples system physical artifacts including the system memory 516 to the processing unit 514. The system bus 518 may be any of several types of bus structures including a memory bus, memory controller, peripheral bus, external bus, or local bus and may use any of a variety of available bus architectures.
Computer 512 typically includes a variety of computer readable media such as volatile and nonvolatile media, removable and non-removable media. Computer storage media may be implemented by any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, Digital Versatile Disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other transitory or non-transitory medium which can be used to store the desired information and which can be accessed by computer 512.
It will be appreciated that FIG. 3 describes software that can act as an intermediary between users and computer resources. The software may include an operating system 528, which can be stored on disk storage 524, and which can control and allocate resources of the computer system 512. Disk storage 524 may be a hard disk drive connected to the system bus 518 through a non-removable memory interface such as interface 526. System applications 530 take advantage of the management of resources by operating system 528 through program modules 532 and program data 534 stored either in system memory 516 or on disk storage 524. It is to be appreciated that a computer can be implemented with various operating systems or combinations of operating systems.
A user may enter commands or information into the computer 512 through input device(s) 536. Input devices 536 include, but are not limited to, a pointing device such as a mouse, trackball, stylus, touch pad, keyboard, microphone, and the like. These and other input devices connect to the processing unit 514 through the system bus 518 via interface port(s) 538. Interface port(s) 538 may represent a serial port, a parallel port, a Universal Serial Bus (USB), or the like. The output device 540 may use the same type of port as the input device. Output adapter 542 is provided to illustrate that there are some output devices 540 like monitors, speakers, and printers that require special adapters. Output adapters 542 include, but are not limited to, video and sound cards that provide a connection between the output device 540 and the system bus 518. Other devices and/or systems and/or devices, such as remote computer 544, may provide both input and output capabilities.
The computer 512 may operate in a networked environment using logical connections to one or more remote computers, such as a remote computer 544. The remote computer 544 can be a personal computer, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to the computer 512, although only a memory storage device 546 has been illustrated in FIG. 3. Remote computer(s) 544 may be logically connected via a communication connection 550. Network interface 548 encompasses communication networks such as local-area networks (LAN) and wide-area networks (WAN), but may also include other networks. Communication connection(s) 550 refers to the hardware/software employed to connect the network interface 548 to the bus 518. Connection 550 may be internal or external to computer 512 and include internal and external technologies such as modems (telephone, cable, DSL and wireless) and ISDN adapters, ethernet cards and so on.
It will be appreciated that the network connections shown are examples only, and other means of establishing a communications link between the computers may be used. One of ordinary skill in the art can appreciate that a computer 512 or other client device can be deployed as part of a computer network. In this regard, the subject matter disclosed herein relates to any computer system having any number of memory or storage units and any number of applications and processes occurring across any number of storage units or volumes. Aspects of the subject matter disclosed herein may apply to an environment with server computers and client computers deployed in a network environment, having remote or local storage. Aspects of the subject matter disclosed herein may also apply to a standalone computing device, having programming language functionality, interpretation and execution capabilities.
FIG. 4 illustrates an Integrated Development Environment (IDE) 600 and a common language runtime environment 602. The IDE 600 may allow a user (e.g., developer, programmer, designer, coder, etc.) to design, code, compile, test, run, edit, debug, or build programs, assemblies of programs, websites, web applications, and web services in a computer system. The software program may include source code (component 610) created in one or more source code languages (e.g., Visual Basic, Visual J#, C++, C#, J#, JavaScript, APL, COBOL, Pascal, Eiffel, Haskell, ML, Oberon, Perl, Python, Scheme, Smalltalk, etc.). The IDE 600 may provide a native code development environment, or may provide a managed code development environment running on a virtual machine, or may provide a combination thereof. The IDE 600 may provide a managed code development environment using the .NET framework. An intermediate language component 650 may be created from the source code component 610 using a language-specific source compiler 620, and the native code component 611 (e.g., machine-executable instructions) may be created from the intermediate language component 650 using an intermediate language compiler 660 (e.g., a just-in-time (JIT) compiler) when the application is executed. That is, when an IL application is executed, it is compiled, while being executed, into the appropriate machine language for the platform on which it is executed, thereby enabling code to be portable across several platforms. Alternatively, in other embodiments, programs may be compiled into a native code machine language (not shown) appropriate for the target platform.
A user may create and/or edit source code components via the user interface 640 and the source code editor 651 in the IDE 600 according to known software programming techniques and the specific logical and syntactic rules associated with a particular source language. Thereafter, the source code component 610 can be compiled via the source compiler 620, whereby an intermediate language representation of the program, such as assembly 630, can be created. The assembly 630 can include the intermediate language component 650 and metadata 642. It may be possible to verify the application design before deployment.
The various techniques described herein may be implemented in connection with hardware or software or, where appropriate, with a combination of both. Thus, the methods and apparatus disclosed herein, or certain aspects or portions thereof, may take the form of program code (i.e., instructions) embodied in tangible media, such as floppy diskettes, CD-ROMs, hard drives, or any other machine-readable storage medium, wherein, when the program code is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing aspects of the subject matter disclosed herein. In the case of program code execution on programmable computers, the computing device will generally include a processor, a storage medium readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. One or more programs that may utilize aspects of the creation and/or implementation of domain-specific programming models, e.g., through the use of a data processing API or the like, may be implemented in a high-level procedural or object-oriented programming language to communicate with a computer system. However, the program(s) can be implemented in assembly or machine language, if desired. In any case, the language may be a compiled or interpreted language, and combined with hardware implementations.
While the subject matter disclosed herein has been described in connection with the appended drawings, it is to be understood that modifications may be made to perform the same function in a different manner.
Claims (12)
1. A system for implementing a recoverable method, comprising:
a processor and a memory; and
a compiler configured to cause the processor to:
receiving source code including a recoverable method that includes a syntactically distinguishable control point expression having a signature, wherein the compiler invokes a program module of a plurality of program modules, the invoked program module being determined by the signature of the control point expression, wherein the compiler creates at least one specialized object from the invoked program module, wherein the compiler rewrites a control point of the control point expression in the received source code into a call to a method executed before the recoverable method is paused and a call to a method executed after the recoverable method is resumed; and
loading a library external to the compiler, the library including the plurality of program modules.
2. The system of claim 1, wherein the at least one specialized object created by the compiler represents an execution control chain.
3. The system of claim 1, wherein the invoked program module includes specialized behaviors and specialized methods, and wherein the specialized methods include a recoverable method comprising an asynchronous method or an iterator method or a symmetric co-routine method.
4. The system of claim 1, wherein derived classes are augmented with state machine logic and specialized commands are rewritten into code for state transitions and suspension in the derived classes.
5. A method for implementing a recoverable method, comprising:
receiving source code in a compiler on a software development computer, the source code including a recoverable method that includes syntactically distinguishable control point expressions, wherein the compiler invokes a program module of a plurality of program modules based on a determined signature of the control point expressions, the plurality of program modules being included in a library external to the compiler, the compiler deriving a specialized driver class from an abstract recoverable base class;
transforming the received source code into augmented output code, wherein the augmented output code includes callbacks to specialized methods inherited from a class conforming to the determined signature of the control point expressions, wherein the callbacks are inserted into the augmented output code by the compiler, the inserted callbacks including a call to a method executed before the recoverable method is paused and a call to a method executed after the recoverable method is resumed; and
instantiating an object from the derived specialized driver class, the object representing a recoverable method comprising an asynchronous method or an iterator or a symmetric co-routine.
6. The method of claim 5, wherein the derived specialized driver class includes specialized behaviors and specialized methods.
7. The method of claim 5, wherein the syntactically distinguishable control point expressions are nested within a control structure of a programming language in which the source code is written.
8. The method of claim 5, wherein the recoverable method is rewritten to a state machine, wherein a calling method of the recoverable method is augmented with state machine logic and a dedicated command associated with the recoverable method is rewritten to code for state transitions and suspension.
9. A method for implementing a recoverable method, the method comprising:
receiving source code, the source code including a recoverable method that includes syntactically distinguishable control point expressions with signatures, wherein a compiler calls code that conforms to the signatures of the control point expressions, the code being in a library external to the compiler, the compiler creating a specialized driver class from an abstract recoverable base class;
transforming the received source code into augmented source code, wherein the augmented source code includes callbacks to specialized methods inherited from the called code, wherein the callbacks are inserted into the augmented source code by the compiler, the inserted callbacks including a call to a method executed before the recoverable method is paused and a call to a method executed after the recoverable method is resumed; and
instantiating an object from the derived specialized driver class, the object representing a recoverable method comprising an asynchronous method or an iterator or a symmetric co-routine.
10. The method of claim 9, further comprising:
adding specialized behaviors and specialized methods from the invoked code to the derived specialized driver class.
11. The method of claim 9, further comprising:
creating control points for callbacks nested within a control structure of the programming language in which the received source code was written.
12. The method of claim 9, further comprising:
transforming the received source code including the recoverable method into augmented output code such that the augmented output code is executable in discrete portions, each discrete portion beginning and ending at a control point in the augmented output code.
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/767,811 US8549506B2 (en) | 2010-04-27 | 2010-04-27 | Resumable methods |
US12/767,811 | 2010-04-27 | ||
PCT/US2011/034006 WO2011139722A2 (en) | 2010-04-27 | 2011-04-26 | Resumable methods |
Publications (2)
Publication Number | Publication Date |
---|---|
HK1178622A1 HK1178622A1 (en) | 2013-09-13 |
HK1178622B true HK1178622B (en) | 2017-04-21 |