Advice on Writing a PEPM Research Paper
ACM SIGPLAN 2009 Workshop on Partial Evaluation and Program Manipulation
Savannah, Georgia, USA, January 19-20, 2009
The PEPM Symposium/Workshop series aims to bring together researchers and practitioners working in the areas of program transformation and generation. For this edition of PEPM, the program chairs have written up advice for authors of PEPM research paper submissions and tool demonstration paper submissions. Advice for research papers is contained in this document; advice for tool demo papers can be found in a separate document.
The scope of topics to be covered by PEPM is already discussed in the Call For Papers and will not be addressed further in this document. Please contact the program chairs if you have remaining questions about the scope of PEPM.
The primary goal of the PEPM Program Committee will be to assemble a program that presents well-grounded and relevant contributions and that generates forward-looking discussions -- discussions that serve to create and drive a dynamic research agenda that can significantly impact software engineering practice. The Committee will aim to select papers, tool presentations, and discussions that collectively address the following questions:
- What is currently hindering program transformation techniques and tools previously reported on at PEPM from being applied to solve real-world problems?
- What are the challenging open technical, engineering, integration, and methodological problems in the area?
- What new techniques are being developed that have the potential to address these problems?
- What are the existing and emerging application areas that can benefit from program transformation technology?
- How can program transformation tools be integrated with other software engineering tools, development environments, and broader software engineering methodologies?
Program Committee Expectations
Below we provide guidelines and suggestions that PEPM authors should take into account when preparing submissions. Although there is a fair amount of material below, we want to emphasize that we are mindful of the fact that PEPM is a workshop this year and we don't expect perfect papers! -- though we do hope each author will do their best to put together a solid submission. In other words, the advice below is meant to inspire you and give you ideas for improving your submission -- it is not meant to scare you.
All papers should be original work, and not have been previously published, nor have been submitted to, or be under consideration for, any journal, book, conference, or workshop. Note that PEPM Tool Demonstration Papers have less restrictive conditions on novelty.
- Clear contributions: The contributions of the paper should be stated clearly/explicitly in the introduction of the paper and elaborated on in the rest of the paper body. Moreover, limitations should be acknowledged and, if possible, future work that could overcome those limitations should be mentioned.
- Well-written: The presentation in the paper should proceed in a coherent fashion and be relatively free of spelling and grammatical mistakes (use a spell-checker, and even a grammar checker if possible!). Concepts should be illustrated with meaningful examples.
- Relevant to real-world problems/contexts: Authors should clearly indicate actual real-world contexts and applications in which they envision their work (when fully mature) being applied. Issues such as scalability, analysis precision, degree of automation, learning curve, etc. that might hinder the use of the technique in practice should be acknowledged and assessed.
- Well-supported claims: When appropriate (e.g., when proposing a new analysis/transformation technique designed to improve performance, reduce memory footprint, etc.), submissions should support claims by reporting on the results of case studies. This would typically involve reporting the computational costs of analyses and transformations, using multiple examples to illustrate the scalability of the approach, and reporting speed-up factors and other measurable benefits that result from the approach.
- Properly positioned with respect to related work: Submissions should include a related work section that summarizes and contrasts closely related work. Related work should not simply list previous papers with a brief summary of their contents, nor should it state only the strengths of the submission and the weaknesses of previous work. A proper related work section states the strengths and weaknesses of both the submission and previous work, and indicates situations in which one method might be preferred over another.
- Clear methodology and integration into a larger development context: Authors should clearly indicate the steps that users go through to apply a technique, including any manual preprocessing, tool configuration, assessment of output, etc. Moreover, authors should assess how the technique fits into the context of other development tools such as debuggers, testing frameworks, and integrated development environments. Here are some examples. If the proposed technique is a generative programming technique in which users are not expected to read generated code, what mechanisms might be provided to aid debugging of the generated code? How does the transformation interact with the use of library code? Is there any impact on how the system is tested?
- Supporting material: The community benefits significantly when tools, examples, data, and other artifacts presented in papers are made available on web-sites. While publicly available tools/artifacts are not required, shared tools and examples are extremely useful for comparing techniques, forging collaborative relationships, generating further research, and forming accurate assessments of the capabilities of a technique.
You might also find this advice on how to structure a research paper helpful. It is also a good idea to have your paper reviewed by colleagues before submitting.
When considering how to write a contribution that helps us meet the goals above, it is worthwhile to consider common flaws in unsuccessful submissions or papers that failed to have an impact on advancing the agenda of the community.
- Language contexts of limited relevance: Early work in partial evaluation (PE) was often carried out in the context of the lambda-calculus because it served as a clean core language in which basic techniques could be formalized and reasoned about in a rigorous way. However, now that basic partial evaluation techniques are well-understood, PEPM submissions should aim to present techniques in the context of commonly-used programming languages. While papers may present or formalize a technique in terms of a "clean core language", submissions that attempt to handle realistic language features, or that convincingly argue that the proposed techniques can be applied to more challenging language features, may be more highly valued since they provide a more effective foundation for transitioning techniques into actual software development practice. Exceptions to this general rule might occur if a submission is attempting to show, e.g., the benefits of transformation/optimization on representations used in theorem-proving tools that use extensions of the lambda-calculus as an underlying representation.
- Failure to provide examples/applications that speak to a broader community: Early work in partial evaluation often used the "power function", "dot product", or similar examples to illustrate a technique. Since PE concepts are now fairly well-understood in the PEPM community, such examples should be avoided in PEPM submissions and replaced with examples that convey the utility of PE and other program transformation techniques to a larger audience. Our aim is to grow the number of people from other areas who look to PEPM for solutions relevant to their problems. People from other application domains will likely find examples such as those listed above irrelevant and unconvincing. It's time for PE to move beyond the power function.
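For readers coming to PEPM from other areas, the "power function" example mentioned above works roughly as follows (a minimal hand-written Python sketch for illustration, not drawn from any particular paper): specializing the general function to a statically known exponent yields a residual function that is just a straight chain of multiplications.

```python
def power(x, n):
    """General power function: computes x**n by repeated multiplication."""
    result = 1
    for _ in range(n):
        result *= x
    return result

def specialize_power(n):
    """Hand-rolled 'partial evaluation' of power for a statically known n.

    Unrolls the loop into a chain of multiplications, producing a
    residual function of x alone -- the classic textbook illustration.
    """
    body = " * ".join(["x"] * n) if n > 0 else "1"
    namespace = {}
    exec(f"def power_n(x):\n    return {body}\n", namespace)
    return namespace["power_n"]

power_3 = specialize_power(3)   # residual code: return x * x * x
```

The residual function contains no loop and no reference to n, which is exactly the kind of toy result the paragraph above argues is no longer convincing on its own.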
- Limited or unclear scalability: Unsuccessful papers have presented techniques without providing (a) some form of experimental studies containing time/memory costs that indicate the scalability of the approach or (b) an assessment of the forms of human intervention that would be necessary to make the technique work in practice. A common example of the latter flaw is the failure of many papers on partial evaluation to address the need for, and costs of, "binding-time improvements" needed to make partial evaluation effective. There has long been a need for work that provides semi-automation of binding-time improvements, or that provides program development or refactoring environments that would aid in binding-time improvements. It is important to note that the program committee doesn't expect every paper to describe a completely automated technique that is easy to use, provides huge speed-ups, and scales to systems of 100K lines of code. However, the submission should acknowledge limitations, whenever possible give results of experiments that show computational costs on multiple non-trivial examples, and explain why/how the computational/human overhead may be overcome in the context of a realistic development environment.
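To make the binding-time-improvement point concrete, one classic improvement is "The Trick" (bounded static variation): a dynamic value known to range over a finite set is enumerated so that each branch passes a static value to the specializer. The sketch below is a hypothetical hand-written Python example, not taken from any specific PEPM paper:

```python
def eval_op(op, x, y):
    """Dispatch on op; while op is dynamic, a specializer cannot
    eliminate this test."""
    if op == "+":
        return x + y
    return x * y

def eval_op_bti(op, x, y):
    """After the binding-time improvement ('The Trick'): since op is
    known to range over the finite set {'+', '*'}, enumerate the cases
    so that each call site passes a static operator, letting a
    specializer residualize straight-line code (x + y or x * y)."""
    if op == "+":
        return eval_op("+", x, y)   # op is a static constant here
    else:
        return eval_op("*", x, y)   # and here
```

Spelling out such a rewriting step -- who performs it, how often, and at what cost -- is precisely the kind of methodological detail the committee is asking for.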
- Lack of clear correctness notions: Some papers prove unsatisfactory because they do not clearly state what properties a transformation aims to preserve. In general, a paper need not include detailed correctness proofs or theorems (though these are certainly welcome; they might be published in a separate technical report that is available at the time of review), but authors should clearly state the properties that they aim to preserve and any forms of unsoundness associated with the technique.
- Lack of clear methodology: Many papers present techniques without a description of a step-by-step process that they expect users will need to go through to apply the technique. Issues such as initial binding-time improvement or refactoring of source code, stubbing out or annotating library methods, assessing code for potential benefit for applying transformation techniques, etc. are often ignored. As a classic example, the fact that the need for manual binding-time improvements/annotations has been "swept under the rug" in many PEPM papers has significantly hindered non-experts in applying PE techniques.
- Interaction with a larger development context: Many papers present techniques without taking care to explain how the technique fits into a larger software development process (how are other specification, testing, and debugging tools impacted? what type of experience/training will be needed to apply the technique?).
- Poor related work comparisons: Papers often suffer from poor related work comparisons in which authors follow the pattern "our tool is better at TASK-X than tool T1 because ..., our tool is better at TASK-Y than tool T2 because ..." (all the while ignoring the fact that tool T1 has a much broader scope than the authors' tool, or that it is in fact better than the authors' tool in many respects not mentioned in the paper). Authors should strive to give a balanced and fair assessment listing both strengths and weaknesses of all work considered.
Example Paper Types
- New transformation technique: The most common type of PEPM paper will report on a new analysis/transformation technique. Such papers should include a detailed description of the technique, give some sort of formal or informal argument for why the technique is correct, illustrate the technique on interesting examples, and evaluate the effectiveness of the technique.
- Case studies: This type of paper might report on the application of an existing technique to one or more large examples. Such papers should take care to indicate what new and significant insights were gained in the case study (was the technique effective? was it easy to apply? could it be applied by the "average software engineer"? does it have particular flaws that need to be remedied? how does it fit within a larger development context?). If the technique is a performance-enhancing technique, significant attention should be paid to experimental studies.
- Description of a new/interesting domain and the potential for effective use of PEPM techniques: This type of paper might describe a particular application domain (e.g., middleware, avionics systems, active networks) or a particular development paradigm (e.g., model-driven development) and describe how techniques within the scope of PEPM might be applied effectively in that domain. Special attention should be given to enumerating the problems/challenges of the domain, characteristics that suggest it might benefit from PEPM techniques, initial attempts at applying those techniques, and questions/challenges to be addressed by future work. Such papers should aim to educate the PEPM community about interesting/fundamental issues of the domain, and give pointers to where readers may learn more about those issues.
What About Work In Progress?
Due to the workshop format adopted for PEPM this year, we do encourage submissions of "work in progress" in cases where the submission raises issues that will generate interesting discussions at the meeting, brings new knowledge of a particular application domain or technique to the community, or lays out challenging open problems of high relevance to software engineering practice. Depending on the quality and number of such submissions, we may collect work-in-progress papers into a single session with slightly shorter time slots for each presentation and a longer discussion time at the end of the session.