decide on documentation format #27

Closed
cseed opened this issue Oct 29, 2015 · 1 comment

Comments

@cseed
Collaborator

cseed commented Oct 29, 2015

From @cseed on August 28, 2015 15:57

What format is the documentation going to be in? We may need more than one system.

  • Reference documentation on what the methods compute. This needs to include mathematical equations.
  • Tutorial on how to use k3.
  • Documentation for k3 developers (how to set up k3, git workflow, etc.). Markdown should be fine for this.
  • Scaladoc to document the APIs that developers use when building against our codebase (see the sketch after this list).
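
To make the Scaladoc bullet concrete, here is a minimal sketch of what API documentation in that format could look like. The `VariantQC` class and everything in it are hypothetical examples, not part of k3; only the comment conventions are the point.

```scala
/** Summary statistics for a single variant.
  *
  * @param callRate fraction of samples with a called genotype, in [0, 1]
  * @param nHet     number of heterozygous calls
  */
case class VariantQC(callRate: Double, nHet: Int)

object VariantQC {

  /** Computes per-variant QC metrics from hard genotype calls.
    *
    * @param calls genotype calls coded as 0/1/2, with -1 for missing
    * @return a [[VariantQC]] summarizing `calls`
    */
  def fromCalls(calls: Array[Int]): VariantQC = {
    val nCalled = calls.count(_ >= 0)
    val nHet = calls.count(_ == 1)
    val callRate = if (calls.isEmpty) 0.0 else nCalled.toDouble / calls.length
    VariantQC(callRate, nHet)
  }
}
```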

Copied from original issue: cseed/hail#31

@cseed
Collaborator Author

cseed commented Nov 20, 2015

This is decided: we're going to use Markdown with embedded TeX, rendered by pandoc.
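
As a rough illustration of that decision (the file name and the specific pandoc flags below are one plausible setup, not a prescribed toolchain), a method reference page mixing Markdown prose with embedded TeX might look like:

```
# Hardy-Weinberg equilibrium

For a biallelic variant with alternate allele frequency $p$, the expected
genotype frequencies under Hardy-Weinberg equilibrium are

$$P(\text{hom ref}) = (1 - p)^2, \qquad P(\text{het}) = 2p(1 - p), \qquad P(\text{hom alt}) = p^2.$$
```

Rendering such a page to HTML could then be done with something like `pandoc --standalone --mathjax methods.md -o methods.html`.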

cseed closed this as completed Nov 20, 2015
cseed added a commit to cseed/hail that referenced this issue Sep 22, 2018
* added pr-builder image logic

* added missing files

* address comments
danking pushed a commit to danking/hail that referenced this issue Sep 24, 2018
danking pushed a commit that referenced this issue Sep 25, 2018
* initial revision

* wip

* added deployment

updated logging

* added service

* wip

* wip

* cancellation, primitive client library

(unused) bidict
jupyter deployment

* batches, higher level api

* fixed readme

* minor

* added tests, reorg

* tests run in itself

* wip

added attributes
removed job names
create batch objects
added callback (untested)

* added callbacks, callback test

* makefile changes

* wip

try to get callback test working on k8s cluster

* tweaks

* callback test works on k8s

* starting to play with containerized spark

some cleanup

* Create setup.py

* move setup.py to correct location

* ignore compiled pyc files

* describe how to use minikube with local docker images

* add special note about `imagePullPolicy`

* add an environment yaml

* add a getting started section

* add a dockerignore file

* do not fail if callback fails

* expose pod/container volumes

* fix volumeMounts field name

* add resources

* add tolerations

* add jobs listing

* add jobs test

* fix the environment

* update Batch.create_job for all the new parameters

* fix missing id

* avoid crashing on bad event types

* update dockerignore

* retry event loops

* Update server.py

* fix dockerignore

* fix dockerignore

* stash the attributes sent by api

* cache the status

* fixed tests (#28)

* Pr builder image (#27)

* added pr-builder image logic

* added missing files

* address comments

* add hail-ci-build.sh (#30)

* add hail-ci-build.sh wip

* wip

* make hail-ci script work

added shutdown endpoint

* fix

* don't log the (pod) log (#29)

* added bash to pr-builder image (#32)

* Add git (#35)

* test

* added git to pr-builder image

* add python alias in pr-builder image

* added curl to pr-builder image

* Job delete (#36)

* wip: added job/delete

need to test

* added delete, various

* tag executable images (#38)

* update deployment (#39)

* make batch single threaded (#40)

* make batch single threaded

polling and k8s watch thread request into main thread
in addition to k8s notifications, periodically poll k8s state

* address comments

* label batch job pods (#41)

* add hail-ci-deploy.sh (#42)

* added hail-ci-deploy.sh

add docker to pr-builder image

* added missing deployment.yaml.in

removed deployment.yaml

* consistency

* addressed comments

* install kubectl in image (#44)

`gcloud components install kubectl` is failing with a non-obvious error message so I installed it directly

* make batch subproject (#46)

* make batch subproject

fixed more deployment bugs

* updated build image

* fix test race condition

job can complete before it is cancelled

* authenticate docker to push to gcr.io (#48)

* authenticate docker to push to gcr.io

* add gcloud quiet (-q)

* remove errant tab

* logging to match ci

* do callback asynchronously

* do callback asynchronously

* expose pod name as well (#49)

* restart ci on deploy (#50)

* Fix SHA check (#51)

* Update hail-ci-deploy.sh

* Update deployment.yaml.in

* Update hail-ci-deploy.sh

* Update hail-ci-deploy.sh

* add /jobs/<id>/log endpoint (#52)

* add /jobs/<id>/log endpoint

added to api, client
return completed job logs (including deleted ones)
return in-progress logs (if there are any)
return 404 (not empty) if there are no logs to be found

status['log'] returns the same log

* fixed bug

* fixed typo

* prep to merge into monorepo

clean up unused files, experiments (jupyter, spark, etc.)

* updated build image

* fixes

hail-ci-build.sh handles all known projects
fix batch to deploy only when changed

* fixed typo

* fixed README.md conflict.

* activate environment

* add cloudtools to list of project-changed.py projects
tpoterba pushed a commit to tpoterba/hail that referenced this issue Feb 12, 2019
added --args option to submit script to allow passing arguments to su…
ammekk pushed a commit to ammekk/hail that referenced this issue Dec 9, 2021
danking pushed a commit to danking/hail that referenced this issue Oct 11, 2023
Consider this:

```scala
class Foo {
   def bar(): (Long, Long) = (3, 4)

   def destructure(): Unit = {
     val (x, y) = bar()
   }

   def accessors(): Unit = {
     val zz = bar()
     val x = zz._1
     val y = zz._2
   }
}
```

These should be exactly equivalent, right? There's no way Scala would compile the match into
something horrible. Right? Right?

```
public void destructure();
  Code:
     0: aload_0
     1: invokevirtual #27                 // Method bar:()Lscala/Tuple2;
     4: astore_3
     5: aload_3
     6: ifnull        35
     9: aload_3
    10: invokevirtual #33                 // Method scala/Tuple2._1$mcJ$sp:()J
    13: lstore        4
    15: aload_3
    16: invokevirtual #36                 // Method scala/Tuple2._2$mcJ$sp:()J
    19: lstore        6
    21: new           #13                 // class scala/Tuple2$mcJJ$sp
    24: dup
    25: lload         4
    27: lload         6
    29: invokespecial #21                 // Method scala/Tuple2$mcJJ$sp."<init>":(JJ)V
    32: goto          47
    35: goto          38
    38: new           #38                 // class scala/MatchError
    41: dup
    42: aload_3
    43: invokespecial #41                 // Method scala/MatchError."<init>":(Ljava/lang/Object;)V
    46: athrow
    47: astore_2
    48: aload_2
    49: invokevirtual #33                 // Method scala/Tuple2._1$mcJ$sp:()J
    52: lstore        8
    54: aload_2
    55: invokevirtual #36                 // Method scala/Tuple2._2$mcJ$sp:()J
    58: lstore        10
    60: return

public void accessors();
  Code:
     0: aload_0
     1: invokevirtual #27                 // Method bar:()Lscala/Tuple2;
     4: astore_1
     5: aload_1
     6: invokevirtual #33                 // Method scala/Tuple2._1$mcJ$sp:()J
     9: lstore_2
    10: aload_1
    11: invokevirtual #36                 // Method scala/Tuple2._2$mcJ$sp:()J
    14: lstore        4
    16: return
```

Yeah, so, it extracts the first and second elements of the primitive-specialized tuple, constructs another primitive-specialized tuple (for no reason), then does the match on that.

sigh.
danking added a commit that referenced this issue Oct 17, 2023
…13794)

Consider this:

```scala
class Foo {
   def bar(): (Long, Long) = (3, 4)

   def destructure(): Unit = {
     val (x, y) = bar()
   }

   def accessors(): Unit = {
     val zz = bar()
     val x = zz._1
     val y = zz._2
   }
}
```


![image](https://github.com/hail-is/hail/assets/106194/532dc7ea-8027-461d-8e12-3217f5451713)

These should be exactly equivalent, right? There's no way Scala would
compile the match into something horrible. Right? Right?

```
public void destructure();
  Code:
     0: aload_0
     1: invokevirtual #27                 // Method bar:()Lscala/Tuple2;
     4: astore_3
     5: aload_3
     6: ifnull        35
     9: aload_3
    10: invokevirtual #33                 // Method scala/Tuple2._1$mcJ$sp:()J
    13: lstore        4
    15: aload_3
    16: invokevirtual #36                 // Method scala/Tuple2._2$mcJ$sp:()J
    19: lstore        6
    21: new           #13                 // class scala/Tuple2$mcJJ$sp
    24: dup
    25: lload         4
    27: lload         6
    29: invokespecial #21                 // Method scala/Tuple2$mcJJ$sp."<init>":(JJ)V
    32: goto          47
    35: goto          38
    38: new           #38                 // class scala/MatchError
    41: dup
    42: aload_3
    43: invokespecial #41                 // Method scala/MatchError."<init>":(Ljava/lang/Object;)V
    46: athrow
    47: astore_2
    48: aload_2
    49: invokevirtual #33                 // Method scala/Tuple2._1$mcJ$sp:()J
    52: lstore        8
    54: aload_2
    55: invokevirtual #36                 // Method scala/Tuple2._2$mcJ$sp:()J
    58: lstore        10
    60: return

public void accessors();
  Code:
     0: aload_0
     1: invokevirtual #27                 // Method bar:()Lscala/Tuple2;
     4: astore_1
     5: aload_1
     6: invokevirtual #33                 // Method scala/Tuple2._1$mcJ$sp:()J
     9: lstore_2
    10: aload_1
    11: invokevirtual #36                 // Method scala/Tuple2._2$mcJ$sp:()J
    14: lstore        4
    16: return
```

Yeah, so, it extracts the first and second elements of the
primitive-specialized tuple, ~~constructs a `(java.lang.Long,
java.lang.Long)` Tuple~~ constructs another primitive-specialized tuple
(for no reason???), then does the match on that.

sigh.