\chapter{Conclusion and Future Work}
\label{ch:conclusion}
The main aim of this work is to reduce the labour needed to create libraries of formal mathematics. We have introduced a $3$-phase interpreter of declarative definitions that uses combinators to define the theories and morphisms of a library. Starting with $227$ declarations of theories in the algebraic hierarchy, we generated $5{,}902$ library definitions spanning over $32{,}459$ lines of Tog code.
% 10 declarations, 5 of which have 4 variants.
This substantial saving of human effort demonstrates how useful and promising a generative approach to library building can be.
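As a brief illustration of the architecture, the interpreter can be viewed as a composition of its three stages. The following is a minimal Haskell sketch with hypothetical names and placeholder types, shown only to convey the shape of the pipeline; it is not the actual implementation.
\begin{verbatim}
-- Hypothetical sketch of the three-stage pipeline (names invented
-- for illustration; they are not the identifiers of our code base).
module PipelineSketch where

data TheoryExpr   = TheoryExpr    -- combinator expressions describing theories
data FlatTheory   = FlatTheory    -- flattened theory presentations
data Construction = Construction  -- generated definitions (homomorphisms, ...)
type TogSource    = String        -- exported Tog code

flatten  :: [TheoryExpr] -> [FlatTheory]    -- phase 1: flattener
flatten _ = []

generate :: FlatTheory -> [Construction]    -- phase 2: generator
generate _ = []

export   :: [Construction] -> TogSource     -- phase 3: exporter
export _ = ""

-- The whole interpreter: flatten, generate per theory, then export.
interpret :: [TheoryExpr] -> TogSource
interpret = export . concatMap generate . flatten
\end{verbatim}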
In Section~\ref{conc:summary} we summarize the contributions of this work, relating them to the research questions introduced in Section~\ref{sec:questions}. In Section~\ref{future_work} we discuss several possible extensions of this work.
\paragraph{A note on runtime.} Running the interpreter described in Chapter~\ref{ch:design} on a large library like the one we develop here incurs a noticeable wait time. It is worth noting that the main source of overhead is the type checker, not any of the operations we use to process the theory presentations. We performed a simple runtime experiment in which we measured the runtime of every stage of the interpreter. Table~\ref{table:runtimes} reports the average and standard deviation over $10$ runs.
By contrast, the type checker took an average of $1686.81$s (approximately $28$ minutes) over $3$ runs, with a standard deviation of $20.96$s.
\begin{table}
\begin{center}
\begin{tabular}{| c | c | c |} \hline
\textbf{Stage} & \textbf{Average} & \textbf{Standard Deviation} \\ \hline
\textbf{Flattener} & $5.17$s & $0.18$s \\ \hline
\textbf{Generator} & $2.7$s & $0.06$s \\ \hline
\textbf{Exporter} & $9.1\,\mu$s & $1.23\,\mu$s \\ \hline
\end{tabular}
\end{center}
\caption{The average and standard deviation of the runtime of different stages of the interpreter over $10$ runs.}
\label{table:runtimes}
\end{table}
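Per-stage timings of this kind can be collected with a small wrapper that fully forces each stage's result before reading the clock, so that laziness does not shift work between stages. The following Haskell sketch (using the \texttt{time} and \texttt{deepseq} packages) shows one way to do this; it is illustrative only and is not the measurement code behind Table~\ref{table:runtimes}.
\begin{verbatim}
import Control.DeepSeq   (NFData, force)
import Control.Exception (evaluate)
import Data.Time.Clock   (diffUTCTime, getCurrentTime)

-- Run one stage, fully force its result, and print the elapsed time.
timed :: NFData b => String -> (a -> b) -> a -> IO b
timed label stage input = do
  start  <- getCurrentTime
  result <- evaluate (force (stage input))
  end    <- getCurrentTime
  putStrLn (label ++ ": " ++ show (diffUTCTime end start))
  return result
\end{verbatim}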
%Universal algebra is a well-established abstraction over details of algebraic theories in equational first order logic.
\section{Summary of contributions}
\label{conc:summary}
Universal algebra is a well-established abstraction over the details of the axiomatic representation of algebraic structures. In Chapter~\ref{ch:generation} we present a framework that, given a theory presentation with the structure defined by universal algebra, generates many of its related constructions. Our framework generated $10$ constructions for each theory, but it can be extended to support more. Specifically, we believe that all the structures listed in Section~\ref{sec:toBeGenerated} can be generated within this framework. The development of this framework answers the first research question positively: universal algebra constructions can indeed be automatically generated.
This leads us to the second research question, about the preconditions for developing a generation platform. To generate the universal algebra constructions, one needs to introspect the contents of a theory in the object language and to generate definitions in the same language. The introspection capabilities should retrieve the name and type of every declaration in the theory, together with information about which ones are parameters. The presence of these features is sufficient for developing a generation platform like the one we present here.
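Concretely, the information that introspection must expose can be as small as the following Haskell-style sketch; the names and the simplified expression type are hypothetical, chosen only to illustrate what the generators rely on.
\begin{verbatim}
-- Hypothetical, simplified view of what introspection must expose.
data Expr = Var String | App Expr Expr | Pi String Expr Expr

data Decl = Decl
  { declName    :: String   -- e.g. "e", "op", "assoc"
  , declType    :: Expr     -- its type in the object language
  , isParameter :: Bool     -- whether it is a parameter of the theory
  }

data Presentation = Presentation
  { theoryName   :: String
  , declarations :: [Decl]
  }
\end{verbatim}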
%To the best of our knowledge, the reflection mechanisms provided by dependently-typed systems do not natively support meta-programming on the module level. Reflection techniques are usually applied to proof terms and the generated constructions are not reflected back to the object language. Even when the machinery to generate module definitions are in place, like in the case of
Generation needs to start from a theory presentation that can be manipulated. We have shown in Figure~\ref{fig:mon-diff-lang} how theory presentations look different in different formal systems and how strongly they reflect the design decisions of the library builders, leading to a usability problem for projects that do not make the same decisions. In this work, we abstracted over two design decisions. The first is the hierarchy used to develop the theory. To build our library we use the combinators of~\cite{carette2018building}, which are designed so that every theory can be flattened. We provide the flattened representation of every theory in our library, while the theories remain connected in the underlying graph structure. The second design decision we abstract over is the bundling of the declarations of a theory. We follow the approach presented in~\cite{alhassy2019}: by adding to the type representing theories a declaration recording how many of its components are parameters, one does not have to fix specific elements as parameters. In both cases, the information being abstracted over can be reintroduced, which answers the third research question.
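To make the second abstraction concrete, the bundling decision can be represented by a single number, the ``waist'' of~\cite{alhassy2019}. The following sketch, reusing the hypothetical \texttt{Decl} type from the sketch above, shows how a particular bundling can be reintroduced on demand.
\begin{verbatim}
-- Hypothetical sketch: all declarations live in one list, and a single
-- number records how many of them are exposed as parameters.
data BundledTheory = BundledTheory
  { waist :: Int      -- leading declarations treated as parameters
  , decls :: [Decl]   -- all declarations, parameters first
  }

-- Reintroducing a particular bundling is just a split at the waist.
unbundle :: BundledTheory -> ([Decl], [Decl])   -- (parameters, fields)
unbundle t = splitAt (waist t) (decls t)
\end{verbatim}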
Our approach saves much of the human effort needed to build libraries by generating the standard information that can be derived from given data. Writing these definitions by hand is tedious and error-prone. By using a generative approach to library development, we save the effort of writing thousands of definitions and make maintaining them easier, as changes then amount to writing meta-programs that process the data in a different way. We address research question 4 in more detail in the future work below.
\section{Future Work}
\label{future_work}
Our work can be extended in different ways. The most immediate is adding more definitions to be generated, as shown in the list in Section~\ref{sec:toBeGenerated}. Here we suggest further ways of extending this work.
\paragraph{Exporting to multiple front ends.} Theory presentations look different from one system to another. Even within the same system, they might differ between projects. We believe that developers and users of formal systems should not be writing the different presentations of the same information by hand. Instead, they should describe what the presentation that fits their purpose looks like, and a meta-program should produce it for them. This requires investigating how different language features interact and how they affect theory presentations, which can be captured using a feature model~\cite{czarnecki2000generative}. The information captured by the feature model can then be used to generate a staged, multiple-front-end exporter as in~\cite{stagedConfig}.
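As a speculative illustration, such an exporter might be driven by a configuration record derived from the feature model. The Haskell sketch below reuses the hypothetical \texttt{Presentation} type from the earlier sketch; the target systems, field names, and options are invented purely for illustration.
\begin{verbatim}
-- Hypothetical sketch of an exporter driven by feature choices.
data Target   = Agda | Lean | Coq          -- illustrative back ends
data Bundling = AllFields | Waist Int      -- how declarations are bundled

data ExportConfig = ExportConfig
  { target     :: Target
  , bundling   :: Bundling
  , useUnicode :: Bool
  }

exportTheory :: ExportConfig -> Presentation -> String
exportTheory cfg thy = case target cfg of
  Agda -> renderAgda cfg thy
  Lean -> renderLean cfg thy
  Coq  -> renderCoq  cfg thy
  where
    -- placeholders; real renderers would pretty-print the presentation
    renderAgda _ _ = ""
    renderLean _ _ = ""
    renderCoq  _ _ = ""
\end{verbatim}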
\paragraph{DSL for library development.}
Given a feature model capturing design decisions and a multiple-front-end exporter, together with the combinators from~\cite{carette2018building} as used in Chapter~\ref{ch:library}, we have the components needed to develop a domain-specific language for building libraries. We envision expressions in this language looking like:
\begin{togcode}
Monoid = combine Unital and Semigroup over Magma
generate homomorphism, OpenTerms, Simplifier
using (waist=1,eq=Agda.Builtin.Equality)
export_to agda
\end{togcode}
\noindent or even referring to a whole graph and specifying the generation and export parameters in the same way.
The same kind of expression can also be used to generate knowledge ``on demand'' for user-provided theories, similar to Haskell's \texttt{deriving} mechanism.
\paragraph{Generalized Algebraic Theories (GAT).}
GATs consist of a set of sorts, a set of function symbols, and a set of axioms, each of which is an identity (equation)~\cite{cartmell1986gats}. This definition is similar to that of the algebraic theories we presented in Section~\ref{sec:background:theory}. The generalization in GATs is that sorts may depend on terms of other sorts, so a sort symbol can be interpreted as a family of sets (for example, the hom-sets of a category) rather than a single set. A useful extension of our work would be to use our meta-programs to derive the same information from GATs.
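As a starting point, a GAT signature could be represented along the following lines; this is a hypothetical Haskell sketch in which sorts and operations carry an argument context, so sorts may depend on terms of earlier sorts.
\begin{verbatim}
-- Hypothetical sketch of a GAT signature.
data TermExpr = TVar String | TApp String [TermExpr]
data SortExpr = SortApp String [TermExpr]          -- e.g. Hom a b
type Context  = [(String, SortExpr)]

data SortDecl = SortDecl { sortName :: String, sortCtx :: Context }
data OpDecl   = OpDecl   { opName   :: String, opCtx   :: Context
                         , opResult :: SortExpr }
data Axiom    = Axiom    { axCtx :: Context, lhs :: TermExpr, rhs :: TermExpr }

data GAT = GAT { gatSorts  :: [SortDecl]
               , gatOps    :: [OpDecl]
               , gatAxioms :: [Axiom] }
\end{verbatim}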