Q's about theoretically not using lambdas in HVM programs #109

Closed
nmushegian opened this issue Apr 9, 2022 · 8 comments
nmushegian commented Apr 9, 2022

I think that I can program anything using just type constructors (and pattern matching / destructuring) and not use lambdas or lambda application at all.

  1. Is this true? Is there an example of something that is not possible or is fundamentally more complex to express without lambdas (beyond just code aesthetics)?
  2. I remember in the docs reading that type constructor application is more efficient than lambda application. Is there any kind of optimization that could be done if it is known that there are no lambdas at all, or is it already optimal if I simply don't use them?
VictorTaelin (Member) commented:

  1. Yes, it is true, since the lambda-free fragment is Turing-complete. But entire classes of programs would become much harder to express, to the point that you'd eventually implement lambdas as a library, on top of a DSL, losing overall efficiency.
  2. What is a "type constructor application"? I'm not sure what you mean, but almost nothing is faster than lambda application on the HVM. It is one of the lightest, fastest opcodes.
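For illustration, here is a minimal sketch (in HVM-style syntax, with hypothetical names Apply, SuccFn, AddFn, MapF) of what "implementing lambdas as a library" could look like via defunctionalization: each first-class function becomes a constructor, and a single global Apply function dispatches on it.

```
// Hypothetical defunctionalization sketch: lambdas become constructors,
// and a global Apply function dispatches on them via pattern matching.
(Apply (SuccFn)  x) = (+ x 1)
(Apply (AddFn n) x) = (+ x n)

// A map that takes an encoded function instead of a lambda
(MapF f Nil)         = Nil
(MapF f (Cons x xs)) = (Cons (Apply f x) (MapF f xs))

(Main) = (MapF (AddFn 10) (Cons 1 (Cons 2 Nil)))
```

Every use of a "lambda" must now go through Apply, and every new function shape needs its own constructor and Apply clause, which is the loss of expressiveness and efficiency described above.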


fogti commented May 22, 2022

About (2) / type constructor application: I suppose that, yes, the type constructor application itself should be really fast, but the result would then need to be pattern matched, which should in theory be much slower than lambda application, where the next computation step is already clear (a simple substitution plus a possible duplication). I haven't looked at the implementation to confirm this, though.

nmushegian (Author) commented:

Right, so I meant the case where you use the trivial pattern match with one option. I think you are right that the cost is proportional to the size of the pattern, but I am specifically interested in whether there is a different base overhead. I think Victor posted some "mana cost" pictures based on measuring the actual implementation overhead, but I don't understand exactly how to map them onto this yet, and I don't have them on hand.

VictorTaelin (Member) commented:

Perhaps provide some example code showing precisely what you're talking about? But yes, the mana cost should be a fairly good estimate of the actual time/cost to run a program. Notice that most rewrite rules take about the same time, and none is faster than beta reduction.


nmushegian commented May 23, 2022

(Foo x) = (Bar 1)
(Main) = (Foo 0)

(Main) = (\ x . (Bar 1)) (Foo 0)      // lambda 

Are these compiled identically?
How do these concepts map onto this picture? https://twitter.com/VictorTaelin/status/1524861417668722688


VictorTaelin commented May 24, 2022

This is the current mana table:

.------------------------------------------------------.
| Opcode  | Effect                          | Mana     |
|---------|---------------------------------|----------|
| APP-LAM | applies a lambda                | 10       |
| APP-SUP | applies a superposition         | 20       |
| OP2-NUM | operates on a number            | 10       |
| OP2-SUP | operates on a superposition     | 20       |
| FUN-CTR | pattern-matches a constructor   | 10 + M*5 |
| FUN-SUP | pattern-matches a superposition | 10 + A*5 |
| DUP-LAM | clones a lambda                 | 20       |
| DUP-NUM | clones a number                 | 10       |
| DUP-CTR | clones a constructor            | 10 + A*5 |
| DUP-SUP | clones a superposition          | 20       |
| DUP-SUP | undoes a superposition          | 10       |
| DUP-ERA | clones an erasure               | 10       |
|------------------------------------------------------|
| * A is the constructor or function arity             |
| * M is the alloc count of the right-hand side        |
'------------------------------------------------------'

You can test it using Kindelia's block evaluator: kindelia run file.kindelia. Note that Kindelia's HVM syntax is slightly different and more low-level; it is described in the paper and in example.kindelia. Translating the program above to a Kindelia block, we'd have:

// Declares the Bar constructor
$(Bar value)

// Implements the Foo function
!(Foo x) {
  !(Foo x) = $(Bar #1)
} = #0

// Applies Foo to 0
{
  $(IO.done !(Foo #0))
}

// Runs this expression
{
  $(IO.done (@x$(Bar #1) !(Foo #0)))
}

The result is:

- run $(Bar #1) (15 mana, 0 size)
- run $(Bar #1) (10 mana, 0 size)

That means your first main consumes 15 mana because it performs a global function call, thus using the FUN-CTR opcode, which costs 10 + M*5 mana, with M = 1, because it allocates exactly 1 word of memory (for (Bar #1)). The second main consumes 10 mana because it performs a lambda application, thus using the APP-LAM opcode, which costs a flat 10 mana.

In short, calling a global function costs 10 mana plus 5 for each word of memory allocated on the right-hand side. For example, if the right-hand side is (Tuple (Bar 1 2) (Bar 3 4)), FUN-CTR will cost 40 mana, since that expression allocates 6 words of memory. Lambda application (beta reduction) always costs 10 mana, regardless of the body size, because it is an O(1) operation. So it is always cheaper than calling a global function, except when that global function performs no allocation (which happens when it returns an arity-0 constructor or a number).
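As a worked check of that cost, here is the Tuple example annotated word by word (the rule name Pair is hypothetical, used only for counting):

```
// Hypothetical rule, annotated with its FUN-CTR mana cost:
(Pair x) = (Tuple (Bar 1 2) (Bar 3 4))
// Right-hand side allocations, one word per constructor field:
//   (Bar 1 2)   -> 2 words
//   (Bar 3 4)   -> 2 words
//   (Tuple _ _) -> 2 words
//   total       -> M = 6 words
// FUN-CTR cost = 10 + 6*5 = 40 mana
```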

Note that, while a single lambda application is cheaper than a single global function call, it is still cheaper to implement recursive algorithms using global functions, because they avoid copying the lambda body (which Y-combinators would need to do) and entirely avoid computing case-of expressions (since pattern-matching / dispatch is built into global functions).
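For instance, a recursive algorithm written directly with global functions gets its dispatch for free from the built-in pattern matching (a sketch; Sum is an illustrative name, not from the thread):

```
// Recursion via a global function: no Y-combinator, no case-of;
// the dispatch on Nil / Cons is built into the function itself.
(Sum Nil)         = 0
(Sum (Cons x xs)) = (+ x (Sum xs))

(Main) = (Sum (Cons 1 (Cons 2 (Cons 3 Nil))))
```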

danaugrs commented:
Out of curiosity, is HVM without lambdas Turing-complete? See the below example, which has a single lambda:

// List Map function
(Map f Nil)         = Nil
(Map f (Cons x xs)) = (Cons (f x) (Map f xs))

// List projectors
(Head (Cons x xs)) = x
(Tail (Cons x xs)) = xs

// The infinite list: 0, 1, 2, 3 ...
Nats = (Cons 0 (Map λx(+ x 1) Nats))

// Just a test (returns 2)
Main = (Head (Tail (Tail Nats)))

It looks like we can't define Succ and use it as a function in Map because of #64.

(Succ x) = (+ x 1)
Nats = (Cons 0 (Map Succ Nats))

Is there any way to write this program without lambdas?

developedby (Member) commented Jun 26, 2022

> Out of curiosity, is HVM without lambdas Turing-complete? [...] Is there any way to write this program without lambdas?

You can write it as

(Succ (Cons x xs)) = (Cons (+ x 1) (Succ xs))
Nats = (Cons 0 (Succ Nats))

I don't know whether there's a generic rewriting rule to transform programs into lambda-less ones.
