This repository has been archived by the owner on Jan 20, 2020. It is now read-only.

Project roadmap? #56

Closed
nikulis opened this issue Jun 6, 2017 · 19 comments

@nikulis
Contributor

nikulis commented Jun 6, 2017

Hi All,

In diving into this library's code, I have accumulated several questions about its design and many ideas for future features.

Is there some larger project roadmap? Even just to release v1.0… I would love to open issues to ask questions and discuss and work on features, but I don't want to spam anyone or start working on things that significantly diverge from what the maintainers intend to support.

Thanks! 😄

@fb55
Contributor

fb55 commented Jun 6, 2017

tl;dr We currently don't have a fixed roadmap for gdax-node. What we do have is a couple of rough ideas, but nothing is set in stone and I'm happy to discuss all ideas.

I personally would like to get this library to a state where it can be used on gdax.com, which currently duplicates most of the logic (sadly not in a way that would be useful to open-source). To get there we should replace request with superagent, which would give us support for running inside of browsers and promises out of the box.
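To illustrate the "promises out of the box" point, here is a rough, hypothetical sketch (not the library's actual code) of the dual interface such a switch would enable: call with a callback, or omit it and get a promise. The `transport` stub stands in for a superagent-style request that already returns a promise.

```javascript
// Hypothetical dual callback/promise interface. If a callback is supplied,
// use it; otherwise return a promise that settles with the response body.
function request(transport, path, callback) {
  const p = transport(path).then(
    body => {
      if (callback) callback(null, body);
      return body;
    },
    err => {
      if (callback) callback(err);
      else throw err; // no callback: surface the rejection on the promise
    }
  );
  return callback ? undefined : p;
}

// Stubbed transport standing in for a superagent request:
const fakeTransport = path => Promise.resolve({ path, ok: true });

// Promise style:
request(fakeTransport, '/products').then(body => console.log(body.ok)); // true

// Callback style:
request(fakeTransport, '/products', (err, body) => console.log(body.ok)); // true
```

The nice part is that the library only has to maintain the promise path; the callback path is a thin shim on top of it.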

Also, we were thinking about generating parts of the code base from a Swagger spec. This would focus the scope of this library by removing the need to implement every single endpoint by hand. Nothing is decided on this front though.

Internally, we are starting to switch our projects to use prettier. It might be nice to adopt it for gdax-node as well, as especially for an open-source project, enforcing consistent formatting will (hopefully) decrease the barrier for new contributors.

Once again, these are just rough ideas. Please feel free to suggest anything that comes to mind.

@nikulis
Contributor Author

nikulis commented Jun 6, 2017

+infinity for prettier!! I've been using that on absolutely everything these days…

And ok, thank you for the explanation. I'll gather a few thoughts and bring them back here.

@awitherow
Contributor

Got prettier started in #58 🎉 also figured it would be a good idea to do #57 to have a lock file for dependencies.

@nikulis
Contributor Author

nikulis commented Jun 9, 2017

I personally would like to get this library to a state where it can be used on gdax.com

Wow! That's a great goal. I hope that we can see this project eventually reach that maturity to benefit both Coinbase/GDAX and the clients that would like to use it themselves.

To get there we should replace request with superagent, which would give us support for running inside of browsers and promises out of the box.

Huh, I can't believe I've never heard of superagent before. That definitely looks promising, considering that making network requests is so fragmented between Node's http and browsers' XMLHttpRequest and more modern fetch.

I'll be honest and say that the vast majority of my work is front-end… until a few months ago, I hadn't touched Node since before 1.0. However, my gut reaction is that we could definitely figure out some way to craft a library that can reach both Node and browser environments. I know there's discussion in other places (#59) on Node version targeting; as far as browsers go, npm packages are almost always run through babel and uglify and all that anyways, so using ES6+ shouldn't be an issue.

We could also have a dist directory on every release that has a pre-made transpiled, minified, etc. file(s) ready to run. And/or maybe the project needs to be split into separate node and browser modules that both just act as thin layers on top of a shared gdax-js-core library... lots of ways you could go with that.

Also, we were thinking about generating parts of the code base from a Swagger spec.

That's neat, I haven't played with Swagger for a while either. It would be cool to have code be auto-generated, but to be honest GDAX's API right now is not super big in terms of the number of endpoints. On the other hand, using Swagger's CodeGen features could potentially allow others to re-use existing API specs to generate libs in other languages and platforms in the future.


As to my more conceptual questions about the API's design:

I find it a bit odd that callbacks have 3 arguments (err, response, data) instead of just two (namely err and data). The README documentation states that

response will be a generic HTTP response abstraction created by the request library. data will be the result of JSON-decoding the server's response, or null if the response was not parseable.

That raises a few flags for me...

  1. Not far above this, the documentation presents one of the library's features as offering "Abstracted interfaces – don't worry about HMAC signing or JSON formatting, the library does it for you." I think that exposing the entirety of the HTTP response kind of takes away from this principle of abstraction, especially by having it as the second parameter instead of the last (which could be more easily ignored by users).
  2. Of course, if the project switches away from the request library (or goes through any similar change in the future), then this reliance upon underlying implementation detail will create a potentially breaking change.
  3. Promises only settle into one of two states: fulfilled or rejected. Rejecting with a single error is straightforward enough, but fulfilling with essentially two entities (response and data) requires some potentially quirky semantics, namely resolving with an object or array that wraps them, then requiring the user to un-wrap them when they need to be used.
  4. From the point of view of a user, I would think that a JSON parse error should result in an error/rejection instead of data just being null.
  5. There is also an inconsistency in that some exposed methods potentially make multiple HTTP round-trips under the hood, but only one response is ever sent to the callback.
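To make points 1 and 3 concrete, here is a small hypothetical sketch (function names and data are illustrative, not the library's API) contrasting the two callback shapes, and showing the unwrapping cost of resolving a promise with two values:

```javascript
// Hypothetical sketch of the two callback shapes under discussion.

// Current style: the raw HTTP response leaks through as the 2nd argument.
function getOrdersCurrent(cb) {
  const response = { statusCode: 200, headers: {} }; // stand-in for request's response object
  const data = [{ id: 'a1' }];
  cb(null, response, data);
}

// Proposed style: just (err, data); transport details stay hidden.
function getOrdersProposed(cb) {
  cb(null, [{ id: 'a1' }]);
}

// Point 3 in practice: resolving with *two* values forces a wrapper object…
const wrapped = Promise.resolve({ response: { statusCode: 200 }, data: [{ id: 'a1' }] });
wrapped.then(({ data }) => console.log(data.length)); // user must unwrap first

// …whereas resolving with the data alone keeps the common case simple.
const plain = Promise.resolve([{ id: 'a1' }]);
plain.then(data => console.log(data.length));
```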

Whew, I think that – and my overall itch to write lots more tests and measure code coverage – covers most of my initial quandaries for now.

@fb55
Contributor

fb55 commented Jun 15, 2017

Thanks for the thoughts! I would use node@6 as the required version, which would allow us to use most of es6 (sadly no async/await though). Once we adopt superagent we should be fine maintaining a single codebase — users that want to run it in browsers would only need to make sure this library is transpiled with babel (plus we need to figure out what to do about websockets, as ws doesn't provide a web fallback).

Part of the intention for Swagger is to be able to offer more client libraries without much work — mainly the order book sync including the websocket API would need to be implemented, although that might not even be needed for all platforms.

Regarding the API: I completely agree with you. Requests should be hidden from users, as users don't need to care about them and multi-request methods just get out of hand. Same applies for promises, where we should only return the body.

I don't have an issue with breaking things. With the current API surface being as bad as it is right now we have reason enough to overhaul the entire thing.

@nikulis
Contributor Author

nikulis commented Jun 16, 2017

For websockets, what about socket.io? Its websocket client runs both on Node and in the browser.

@nikulis
Contributor Author

nikulis commented Jun 20, 2017

I worked on implementing the new callback pattern (callback = (err, data) => {} instead of callback = (err, response, data) => {}), and ditto for promises, which now resolve only to the data. It also uses superagent now instead of request.

In this repo's current state it's not easy to do a PR because the work is based on #60, so I just opened a new PR in my fork for the time being.

@rmm5t
Contributor

rmm5t commented Dec 2, 2017

Regarding the API: I completely agree with you. Requests should be hidden from users, as users don't need to care about them and multi-request methods just get out of hand. Same applies for promises, where we should only return the body.

Regarding this roadmap, does this stance on Promise handling hold when we're considering API calls that require pagination? Right now, it appears that a callback is required to read the CB-AFTER header that's relied on for pagination, because the returned promise masks the underlying response object (and thereby the headers).

I suppose a Promise pattern could instead look at the last object in the returned set of data, and inspect the relevant id to use for the subsequent after, but I just wanted to clarify that this is the desired intent.
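A minimal sketch of that second approach, assuming promise-only results with no header access (the `fetchPage` stub stands in for a real paginated endpoint; names are illustrative):

```javascript
// Promise-only pagination: derive the next `after` cursor from the last
// item's id in the returned data, rather than from the CB-AFTER header.
function fetchAll(fetchPage, after, acc) {
  acc = acc || [];
  return fetchPage({ after }).then(page => {
    if (page.length === 0) return acc;
    acc.push(...page);
    // The next cursor is the last item's id — the same value the
    // CB-AFTER response header would have carried.
    return fetchAll(fetchPage, page[page.length - 1].id, acc);
  });
}

// Stubbed endpoint: three "trades" served two per page, newest first.
const trades = [{ id: 3 }, { id: 2 }, { id: 1 }];
const fetchPage = ({ after }) => {
  const start = after === undefined ? 0 : trades.findIndex(t => t.id === after) + 1;
  return Promise.resolve(trades.slice(start, start + 2));
};

fetchAll(fetchPage).then(all => console.log(all.length)); // 3
```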

@fb55
Contributor

fb55 commented Dec 27, 2017

Late response: That’s the main reason why I wanted to return request objects as promises, and not the data itself. It would make the API slightly less convenient, but more flexible.

@nikulis
Contributor Author

nikulis commented Jan 21, 2018

Today I've been trying to catch up on recent developments (since I was not able to contribute for a while 😢), and I see that others' thoughts have been mirroring my own musings. I gathered some of these topics from comments in various threads - all of which could be broken out into their own top-level issues.

And thank you to everyone who has been moving this project forward! (especially, it appears, @rmm5t 😄 )

Builds

Echoing @rmm5t's list of priorities, I too think that putting in place a more stable and frequent release process is a top priority.

Maybe we can move to a branch flow where master has only tagged releases while in-progress / potentially messy work can happen off of "dev", or something similarly git-flow-esque.

In the original spirit of making this library work in multiple environments, I think it would be advantageous to introduce a build step to transpile the source into different targets, each compatible with its environment. It is possible to make packages that target multiple platforms, and I think we would really only need to target 2, maybe 3:

  1. CommonJS Modules w/ ES2015 (e.g. Node)
  2. ES Modules w/ ES2015 (for bundlers + supported browsers)
  3. Bundled, self-contained ES5 file (gdax.js & gdax.min.js ?)
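One possible way to wire those targets together in package.json — the field names (`main`, `module`, `browser`) are existing npm/bundler conventions, but the paths and script below are purely hypothetical:

```json
{
  "main": "dist/cjs/index.js",
  "module": "dist/esm/index.js",
  "browser": "dist/gdax.js",
  "files": ["dist"],
  "scripts": {
    "build": "babel src --out-dir dist/cjs"
  }
}
```

Node and CommonJS consumers would get `main`, tree-shaking bundlers would pick up `module`, and `browser` would point at the self-contained bundle.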

This would also have the benefits of:

  • not worrying about which language features we can use in the source itself, since those can be transpiled away
  • opening the possibility of using TypeScript for the source(!)
  • (theoretically) incorporating automated codegen from Swagger specs

Docs

I think it would be nice to eventually move the bulk of documentation out of the README. A GitHub wiki could be nice, alongside documentation generated from the source - ESDoc (and possibly TypeDoc) both seem to be viable options. Generated documentation could be hosted on coinbase.github.io.

Shipping docs is a relatively low-hanging fruit since it's independent of API-level decisions. In the past I played with ESDoc on this codebase - I would be happy to look into it again.

Looking further ahead,

@rmm5t proposes moving the API from positional to named parameters, which I would love.


@fb55 said,

…the callback interface should probably stay the same for now. Instead, I'd opt for deprecating the callback interface entirely and urging all users to use promises.

which I also think would be nice. Even if it can't be done in a near-future release, moving to promises-only would greatly simplify parts of the codebase.


The discussion here (and on my (antiquated) #86) moved into whether HTTP responses, or some representation of them, should be returned to users, which is also an interesting topic. I would be surprised if most users did want more network internals exposed - though if that were the case, I certainly wouldn't object to it being implemented that way.

From the perspective of a user of the library, I would like for pagination to be abstracted along with other http/network-layer internals like rate limiting - the thought being, if I cared about HTTP headers and such, I would just use a general http/request library directly and plug in the URLs myself. From gdax-node, I would rather get a promise with just the data (parsed response body). I do think it would be appropriate to include more response/network information on rejected Errors (i.e., If it works, I don't care about the network; if it doesn't work, I might want to know why.) @rmm5t suggests an implementation-agnostic response object format that I think could work nicely appended to an Error in that case.
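A rough sketch of that "network info on rejections only" idea — the class name and field layout are hypothetical, not anything @rmm5t specified:

```javascript
// Reject with an Error carrying an implementation-agnostic snapshot of the
// response, instead of exposing the transport library's response object.
class HTTPError extends Error {
  constructor(message, response) {
    super(message);
    this.name = 'HTTPError';
    // Plain-object snapshot — no request/superagent internals leak through.
    this.response = {
      status: response.status,
      headers: response.headers,
      body: response.body,
    };
  }
}

function parseResponse(res) {
  if (res.status >= 400) {
    return Promise.reject(new HTTPError('HTTP ' + res.status, res));
  }
  return Promise.resolve(res.body); // success: the user sees data only
}
```

On success the user gets just the parsed body; on failure the rejection carries enough context ("why didn't it work?") without tying the public API to any particular HTTP client.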

Right now, the library already offers a bit of pagination-abstracting functionality in the PublicClient#getProductTradeStream() method. It accepts parameters tradesFrom and tradesTo, and is documented as "fetches all trades with IDs >= tradesFrom and <= tradesTo. Handles pagination and rate limits." tradesTo can also be a predicate function that accepts individual trade objects in order to decide where to stop, which I think is great. Maybe other paginated endpoints could work similarly, maybe with a version that returns a stream (or observable?) and another version that returns a promise that will just resolve with a single array when all are complete.

@rmm5t
Contributor

rmm5t commented Jan 21, 2018

Hi @nikulis! I agree with almost everything you said here, but I do take issue with one topic:

Maybe we can move to a branch flow where master has only tagged releases while in-progress / potentially messy work can happen off of "dev", or something similarly git-flow-esque.

Git-flow is a fairly terrible and unnecessary methodology. If you love it, and your entire team wants to use it on private repos, more power to you, but IMO, it should never be used in an open source community, because it yields zero value and only adds confusion to new contributors.

Stick with the standard GitHub convention that 99% of all open source projects adhere to (and for good reason). The master branch has meaning. It's where good code gets merged, regardless of whether it's been officially released or not. Feature branches and/or user-forked branches are for the "messy work," though I also argue that messy work shouldn't live very long.

Yes, release tags are important, regardless. Tags matter much more than branches (not counting feature branches). Long-lived branches (other than master) should only survive if there's an attempt at maintaining an older backwards-incompatible release (unlikely for a library like this), and that branch would have stemmed off of an existing tag.

@nikulis
Contributor Author

nikulis commented Jan 23, 2018

@rmm5t Thank you for the explanation! Maybe a brief instruction on branching convention (however simple) could be put in CONTRIBUTING.md? Along with a note about if / how / when PRs should keep up with new commits on master while still open.

I also very much appreciated your outlining of release strategy in #178. Have there been any decisions towards feature freeze for any given versions? Do you think it would help to make milestones for specific releases to assign issues, or would you recommend a different procedure?

@rmm5t
Contributor

rmm5t commented Jan 23, 2018

Maybe a brief instruction on branching convention (however simple) could be put in CONTRIBUTING.md? Along with a note about if / how / when PRs should keep up with new commits on master while still open.

It looks like CONTRIBUTING.md is a new addition to the project (and it might be a global Coinbase guideline), but basic instructions like that are a good suggestion.

I also very much appreciated your outlining of release strategy in #178. Have there been any decisions towards feature freeze for any given versions? Do you think it would help to make milestones for specific releases to assign issues, or would you recommend a different procedure?

Thanks! Good question. No, nothing that I'm aware of has been decided there. My guess is that's still mostly in @fb55's head. I know he has a goal of a v1.0 release. Personally, I would prefer more interim releases, better NPM and GitHub conventions/procedures around those releases, an extended build matrix, and better honing and consistency in the function signatures before a v1.0 release, but ultimately, that decision is up to Coinbase at the moment.

Milestones (or a GitHub Project board) are a good idea, but they can also be a place for ideas to die if they aren't actively maintained, worked on, and pruned.

@fb55
Contributor

fb55 commented Jan 24, 2018

Personally, I would prefer more interim releases

Generally agree that there should be way more frequent releases, feature freeze is more the issue here. There is always something I would still like to change (most recently the return values of promises).

It would also be nice to have something to announce on blog.gdax.com eventually, to point more people at this library.

better NPM and GitHub conventions/procedures around those releases

What exactly are you thinking about? A completely automated process?

an extended build matrix

Happy to accept PRs for the Circle config :D

and better honing and consistency in the function signatures before a v1.0 release

I'd like to switch most constructors to accept all args in a single object by default — anything else you can think of?
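What that constructor change might look like — purely an illustrative sketch, the parameter names and default URL are placeholders rather than the library's final API:

```javascript
// Positional args replaced by a single options object with defaults.
class AuthenticatedClient {
  constructor({ key, secret, passphrase, apiURI = 'https://api.gdax.com' } = {}) {
    this.key = key;
    this.secret = secret;
    this.passphrase = passphrase;
    this.apiURI = apiURI;
  }
}

// Call sites become self-documenting and order-independent:
const client = new AuthenticatedClient({ key: 'k', secret: 's', passphrase: 'p' });
console.log(client.apiURI); // https://api.gdax.com
```

Compared to `new AuthenticatedClient('k', 's', 'p', 'https://…')`, adding or deprecating an option later is non-breaking, since there is no argument order to preserve.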

@rmm5t
Contributor

rmm5t commented Jan 24, 2018

@fb55

Generally agree that there should be way more frequent releases, feature freeze is more the issue here.

True, feature freezing and frequent releases are unrelated issues -- especially for most open-source projects.

It's just that feature freezing is a somewhat unnecessary notion unless there's a documented and structured project roadmap with milestones (a nicety, but potentially overkill for a project like this). More on that topic below.

It would also be nice to have something to announce on blog.gdax.com eventually, to point more people at this library.

I suppose, and I understand that a v1.0 release has more marketing weight, but for the most part, it's all arbitrary. I also understand the marketing significance for Coinbase, but I think people who actually use the library would first prefer better release management and more frequent releases that approach a cleaner API. Typically, a v1.0 just makes sense at some point. I think we're close, but I also believe that we've actually delayed an eventual v1.0 by not publishing interim releases.

e.g. You have to get the deprecated stuff out in the wild before you can remove it and move on to a clean, stable release.

e.g. I'd love to see named arguments replace most positional arguments, but one thing that's holding me back from doing that is that we only have a patch-level-release that included the (now deprecated) product-based inconsistencies between PublicClient and AuthenticatedClient. I would have preferred to see that in a minor release, then have another semver-compatible zero-dot-minor release that actually removes the old deprecated behavior. Next would be a release that prefers keyword/named arguments.

In other words, getting to a v1.0 release isn't just about drawing a line in the sand and marking it as such. It can be that way for general application development, but for shared library development, you must also have a release schedule leading up to a v1.0 release. The interim releases often matter much more than the major release, because they document changes, backwards incompatibilities, and new features while also giving people the chance to bite off small chunks at a time while upgrading. i.e. People can walk the version ladder one step at a time.

What exactly are you thinking about? A completely automated process?

Not exactly, I'm just reiterating my complaint that there is a lack of discipline on this project with regards to releases.

Happy to accept PRs for the Circle config :D

While possible with parallel jobs, Circle isn't the best tool for this. Circle is a good tool for private projects and applications, but Travis is a far better tool for build matrices and libraries -- especially for open source projects.

Are you willing to switch to Travis for this project, or would you prefer a "build matrix" using parallel jobs in Circle CI? Edit: You can ignore this question, I'll open up a new issue to capture this.

@fb55
Contributor

fb55 commented Jan 24, 2018

a nicety, but potentially overkill for a project like this

That's my feeling.

In other words, getting to a v1.0 release isn't just about drawing a line in the sand and marking it as such

I see your point. I'll try to figure out a better release process, I hope the 0.5 releases were the last ones with these inconsistencies.

(CI discussion moved to #249)

@cpjolicoeur

Are there any plans to release an update/new tag to NPM? There are some key critical fixes that have been added recently that are not available via NPM directly as a release (not to mention that the current 0.5.1 release doesn't even have a git tag, as mentioned above).

@fb55
Contributor

fb55 commented Feb 13, 2018

Published v0.6.0. Lmk if there are any issues with this release.

@rmm5t
Contributor

rmm5t commented Jun 25, 2018

There's some great discussion here, but I'm closing this issue out in favor of tracking a project roadmap on the wiki. For now, there's an initial draft where I've started to copy over some of the more important discussion from the open and closed issues, but it still needs some honing. Feel free to add to it:
https://github.com/coinbase/gdax-node/wiki/Project-Roadmap

@rmm5t rmm5t closed this as completed Jun 25, 2018