refactor core messaging and RPC (electron-only so far) #11076

Closed
wants to merge 5 commits

Conversation

@paul-marechal (Member) commented Apr 26, 2022

  • Add new reusable components for communication and RPC in the framework.
  • Refactor the electron-main to electron-browser communication to use these new RPC components.
  • Refactor the sample updater from dev-packages/sample-api to use the new system.
  • Add IPC communication between electron-main and the backend.

I'm opening this PR as a draft because I still need to add much-needed documentation and tests.

From this PR I expect to get feedback on the proposed new APIs and then move forward with them. I also hope #11011 can align with it.

This PR only covers Electron-specific communication; I left out the backend-to-frontend part for now. We can discuss whether that should be done in this PR as well.

Here's a temporary rundown of the proposed new messaging and RPC system, until I document it properly:


The goal of the new APIs is to simplify usage for extenders; the internal wiring should hold all the complexity while staying reasonable.

Connection<T> is the heart of any communication. T can be an object; serialization may happen further down the stack. Connections are often API adapters: every communication library has its own API to send and receive messages, but for most relevant use cases only sendMessage and onMessage are required. You only need to implement this abstraction if you want to feed your own transport into the other reusable messaging components.
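
For reference, here is the Connection<T> interface as it appears further down in this PR (ConnectionState and the Event type are defined alongside it):

```ts
export const Connection = serviceIdentifier<Connection<any>>('Connection');
export interface Connection<T> {
    readonly state: ConnectionState
    onOpen: Event<void>
    onClose: Event<void>
    onError: Event<Error>
    onMessage: Event<T>
    sendMessage(message: T): void
    close(): void
}
```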

RpcConnection is the highest-level abstraction used to set up JavaScript proxies. Note that instances are scoped to their respective remote instances, which is why there is no notion of an id in this API. However you establish the remote connection to a specific instance, and whatever protocol you actually use, the RpcConnection abstracts it away.

ProxyProvider is a named binding that allows exposing a proxy as a regular Inversify service. Example:

bind(SomeRemoteService)
    .toDynamicValue(ctx => ctx.container.getNamed(ProxyProvider, 'backendOrFrontendOr...').getProxy(remoteServicePath))
    .inSingletonScope();

The API is meant to hide how proxies are set up (connection, protocol, etc.): you just want a proxy to some remote runtime.

Proxies no longer have a setClient method: if you want to react to notifications, you can add a listener on Event methods named like onSomething.... If you want the remote side to make requests and get responses back, you create a new dedicated proxy going the other way.
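
As a minimal sketch of what that looks like for consumers (the container lookup, target name and path constant are placeholders; onReadyToInstall comes from the sample updater touched in this PR):

```ts
// Get a proxy and subscribe directly to one of its Event fields:
// no setClient / xxxClient object is needed anymore.
const updater: SampleUpdater = container
    .getNamed(ProxyProvider, SomeRemoteTarget)   // hypothetical target name
    .getProxy(sampleUpdaterPath);                // hypothetical path constant

updater.onReadyToInstall(() => {
    // react to the notification coming from the remote side
});
```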

ServiceContribution is the new API to expose services so that remote proxies can be created for them. A contribution is either a map from servicePath to a callback returning the actual instance to serve over RPC, or directly a callback that handles the request (serviceId, serviceParams).
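
In record form that can look like this (the path and service names are placeholders; the full doc comment with both forms is part of the diff below):

```ts
bind(ServiceContribution)
    .toDynamicValue(ctx => ({
        [someServicePath]: () => ctx.container.get(SomeService)
    }))
    .inSingletonScope()
    .whenTargetNamed(SomeTargetName);
```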

Inversify bindings might be more verbose now, but this is the best trade-off I was able to find to keep the components small and coherent. Implementation details are confined to the bindings of internals. Theia extenders will most likely never have to write that much wiring code just to use what I documented so far.

There are lots of implementation details. Noteworthy ones:

  • I kept JSON-RPC as the protocol for now; my main concern is redesigning our APIs/abstractions to make JSON-RPC (or any other protocol) a replaceable implementation detail.
  • The current JSON-RPC implementation relies on a generic connection multiplexer to serve different remote services over the main connection used for messaging.
  • I don't explicitly serialize anything; instead I rely on the low-level transport's implicit serialization. We can serialize ourselves at any point, should we require it.
  • I implemented a simple and generic ConnectionMultiplexer that can run over most Connection<any> instances. It supports bi-directional channel opening.
  • Inversify only resolves bindings as needed. You might see some places where I get a binding without doing anything with it; this is because resolving it has side effects (e.g. serving RPC calls).
  • We often have an issue with Inversify where some bindings have to be resolved synchronously, but they are inherently constructed asynchronously. This is especially true for proxy connections that can only be set up once something else happens. This is why I rely on DeferredConnection in various places: it buffers messages until the underlying connection is ready, but I can start using it right away to get an RPC proxy out of it synchronously (see the sketch after this list).
  • I added "tagged strings and symbols" to allow TypeScript to understand our typings when binding using Inversify. See packages/core/src/common/types.ts#serviceIdentifier.
  • Reacting to the frontend loading and reloading itself in Electron BrowserWindows is a pain...
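
Here is roughly what the DeferredConnection pattern boils down to (connectWhenReady is a hypothetical async setup function; DeferredConnectionFactory is bound in this PR):

```ts
const deferredConnectionFactory = ctx.container.get(DeferredConnectionFactory);
// connectWhenReady(): Promise<Connection<any>> -- hypothetical async setup step
const connection = deferredConnectionFactory(connectWhenReady());
// `connection` can be handed to the RPC components immediately; messages are
// buffered until the underlying connection is actually established.
```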

How to test

Run the Electron app; IPC-related features should work:

  • You should see a log in the console output with the title of all opened windows.
  • You should be able to execute the Open New Window command.

Review checklist

Reminder for reviewers

@paul-marechal added the enhancement, quality, jsonrpc, messaging, core, and potentially-breaking labels on Apr 26, 2022
@JonasHelming (Contributor)

@tortmayr

@tsmaeder (Contributor) commented May 6, 2022

Hey @paul-marechal, I had a glance at the PR today, and there are a couple of things which make it hard to review right now:

  1. Drive-by fixes: there are a bunch of changes which don't seem to have anything to do with the intent of the PR. That's fine in a small PR, but here the PR is already a bit overwhelming. IMO, it would make sense to open a separate PR for those.
  2. You mention that documentation is missing and that's fine. But it's quite hard to understand right now from the description what the PR is supposed to do concretely. Maybe a "this is the old concept, it's replaced by this new concept"-style guide might help?

I really want to understand where you're going with this change. Maybe we can do another live session and clarify the gist of it?

@tortmayr (Contributor) commented May 6, 2022

Hi @paul-marechal,
This draft sounds very promising and I'm in favor of introducing an abstraction to hide away RPC proxy setup details and make it easier for Theia adopters to configure and use RPC proxies.
I only had a quick glance for now, but I will do a more in-depth review once I'm finished with addressing the initial review feedback for #11011.

If I understood correctly, this change currently only refactors the Theia extension messaging API. I wonder whether it's feasible to also apply similar changes to the message RPC protocol of the plugin API; there we don't have access to Inversify. Could this be a potential problem?

In general, I definitely like the object-based connection approach more than the binary channel approach we introduced in #11011.

Comment on lines +20 to +24
export const electronMainWindowServicePath = servicePath<ElectronMainWindowService>('/services/electron-window');
export const ElectronMainWindowService = serviceIdentifier<ElectronMainWindowService>('ElectronMainWindowService');
export interface ElectronMainWindowService {
    openNewWindow(url: string, options?: NewWindowOptions): undefined;
    openNewDefaultWindow(): void;
    openNewWindow(url: string, options?: NewWindowOptions): MaybePromise<void>;
    openNewDefaultWindow(): MaybePromise<void>;
Member Author

This is another seemingly drive-by change that allowed me to actually type check the bindings for it.

Without this, TypeScript wasn't giving me any type checking...

Comment on lines +54 to +92
/**
 * Part of Theia's Service Layer.
 *
 * Requested services to offer over RPC are fetched through a `ServiceProvider` that will source from `ServiceContribution` bindings.
 *
 * ## Usage Examples
 *
 * ### Record
 *
 * ```ts
 * bind(ServiceContribution)
 *     .toDynamicValue(ctx => ({
 *         [PATH1]: () => ctx.container.get(Service1),
 *         [PATH2]: () => ctx.container.get(Service2),
 *         [PATH3]: params => ctx.container.get(params.yourParam ? Service3 : Service4),
 *         // ...
 *     }))
 *     .inSingletonScope()
 *     .whenTargetNamed(YourServiceNamespace);
 * ```
 *
 * ### Function
 *
 * ```ts
 * bind(ServiceContribution)
 *     .toDynamicValue(ctx => (serviceId, params) => {
 *         // process arguments...
 *         return resolvedServiceOrUndefined;
 *     })
 *     .inSingletonScope()
 *     .whenTargetNamed(YourServiceNamespace);
 * ```
 */
export const ServiceContribution = serviceIdentifier<ServiceContribution>('ServiceContribution');
export type ServiceContribution = ServiceContributionFunction | ServiceContributionRecord;
// eslint-disable-next-line @typescript-eslint/no-explicit-any
export type ServiceContributionFunction = (serviceId: string, params?: any) => any;
// eslint-disable-next-line @typescript-eslint/no-explicit-any
export interface ServiceContributionRecord { [serviceId: string]: (params?: any) => any }
Member Author

Service contributions are one of the new layers: when you want to do RPC, rather than explicitly binding services to work over one protocol or another, you can use this contribution point to provide the instance to expose via RPC. Connection handlers may choose any RPC implementation and will fetch the instance to serve from the contributed services.

e.g. The frontend requests a proxy at /jsonrpc/some/service/path via socket.io. Socket.io receives the new connection and routes it to the /jsonrpc/*serviceId connection handler, which runs JSON-RPC over the incoming connection. This JSON-RPC connection handler then looks up the service for serviceId through ServiceContribution bindings; if it finds an instance it will wire it up to answer JSON-RPC requests over the incoming connection. Done.

Comment on lines +28 to +59
export default new ContainerModule(bind => {
    // #region transients
    bind(DefaultConnectionMultiplexer).toSelf().inTransientScope();
    bind(DefaultRpcProxyProvider).toSelf().inTransientScope();
    bind(ContainerScopeRegistry).to(DefaultContainerScopeRegistry).inTransientScope();
    // #endregion
    // #region singletons
    bind(RpcProxying).to(DefaultRpcProxying).inSingletonScope();
    bind(Reflection).to(DefaultReflection).inSingletonScope();
    // #endregion
    // #region factories
    bind(ConnectionTransformer)
        .toFunction((connection, transformer) => new TransformedConnection(connection, transformer));
    bind(DeferredConnectionFactory)
        .toFunction(promise => new DeferredConnection(promise));
    bind(JsonRpcConnectionFactory)
        .toFunction(connection => new JsonRpcConnection(connection));
    bind(ContainerScopeFactory)
        .toFunction((container, callbacks) => new DefaultContainerScope(container, callbacks));
    bind(RcFactory)
        .toFunction(ref => DefaultRc.New(ref));
    bind(LazyProxyFactory)
        .toDynamicValue(ctx => promise => {
            // eslint-disable-next-line no-null/no-null
            const nullObject = Object.freeze(Object.create(null));
            const reflection = ctx.container.get(Reflection);
            const proxyHandler = new LazyProxyHandler(promise, reflection);
            return new Proxy(nullObject, proxyHandler);
        })
        .inSingletonScope();
    // #endregion
});
Member Author (@paul-marechal, May 6, 2022)

This container module just contains new re-usable components to facilitate wiring using the new APIs.

Comment on lines +42 to +92
// We need to setup the JSON-RPC connection between electron-main and backend:
// We'll multiplex messages over a Node IPC connection and talk JSON-RPC over the channels.
bind(ConnectionMultiplexer)
    .toDynamicValue(ctx => {
        const transformer = ctx.container.get(ConnectionTransformer);
        const nodeIpcConnectionFactory = ctx.container.get(NodeIpcConnectionFactory);
        const deferredConnectionFactory = ctx.container.get(DeferredConnectionFactory);
        const parentIpc = nodeIpcConnectionFactory(process);
        const sharedIpc: AnyConnection = transformer(parentIpc, {
            decode: (message, emit) => {
                if (typeof message === 'object' && THEIA_ELECTRON_IPC_CHANNEL_NAME in message) {
                    emit(message[THEIA_ELECTRON_IPC_CHANNEL_NAME]);
                }
            },
            encode: (message, write) => {
                write({ [THEIA_ELECTRON_IPC_CHANNEL_NAME]: message });
            }
        });
        const deferredConnection = deferredConnectionFactory(waitForRemote(sharedIpc));
        return ctx.container.get(DefaultConnectionMultiplexer).initialize(deferredConnection);
    })
    .inSingletonScope()
    .whenTargetNamed(ElectronMainAndBackend);
bind(BackendApplicationContribution)
    .toDynamicValue(ctx => ({
        initialize(): void {
            const multiplexer = ctx.container.getNamed(ConnectionMultiplexer, ElectronMainAndBackend);
            const serviceProvider = ctx.container.getNamed(ServiceProvider, ElectronMainAndBackend);
            const jsonRpcConnectionFactory = ctx.container.get(JsonRpcConnectionFactory);
            const rpcProxying = ctx.container.get(RpcProxying);
            multiplexer.listen(({ serviceId, serviceParams }, accept, next) => {
                const service = serviceProvider.getService(serviceId, serviceParams);
                if (service) {
                    rpcProxying.serve(service, jsonRpcConnectionFactory(accept()));
                } else {
                    next();
                }
            });
        }
    }))
    .inSingletonScope();
bind(ProxyProvider)
    .toDynamicValue(ctx => {
        const multiplexer = ctx.container.getNamed(ConnectionMultiplexer, ElectronMainAndBackend);
        const jsonRpcConnectionFactory = ctx.container.get(JsonRpcConnectionFactory);
        return ctx.container.get(DefaultRpcProxyProvider).initialize(
            (serviceId, serviceParams) => jsonRpcConnectionFactory(multiplexer.open({ serviceId, serviceParams }))
        );
    })
    .inSingletonScope()
    .whenTargetNamed(ElectronMainAndBackend);
Member Author

This is an example of what I meant by "wiring might look more complex" with the new API:

The most important binding here is the ProxyProvider named ElectronMainAndBackend. The idea is that each runtime may need proxies from multiple other runtimes, so we need a way to identify which is which, and this is now done with named bindings.

This allows developers to write:

bind(MyRemoteService)
    .toDynamicValue(ctx => ctx.container.getNamed(ProxyProvider, Target).getProxy(remoteServicePath))
    .inSingletonScope();

The other bindings are only there as implementation details:

We need a re-usable ConnectionMultiplexer to open remote channels for our own proxies, as well as to listen for incoming channels to serve the other side's requests for its proxies (bidirectional).

The BackendApplicationContribution is there to force the creation of the ConnectionMultiplexer and start listening straight away. If we left it to the ProxyProvider to resolve the listening bindings, then in an application that never uses proxies the ProxyProvider would never be resolved, and neither would the ConnectionMultiplexer. We need to explicitly set up the listening bits.

Contributor

Maybe I'm not understanding the concepts here yet, but it seems I need to understand too many interlocking concepts at once. If I think about what we're trying to achieve, it's not that complicated:

  1. The system establishes a number of well-known point-to-point connections at startup, for example front end <=> back end, front end <=> plugin host, or back end <=> file system change watcher.
  2. Some clients want to bind service objects whose lifecycle is bound to those connections when the connections are established.
  3. Some clients bind service proxies to those connections.

When we establish a connection, we establish a service scope for those services. The connection being established is put inside this scope, allowing any listeners to react to the connection.
We can bind an RPC protocol factory that will create an RPC protocol instance in this new scope; it will be bound to the connection.
We need a contribution point that allows clients to bind service object factories and client proxy factories inside this scope. There needs to be an initialization protocol between clients and services that is two-phase: only after full initialization are remote calls allowed.
The services are bound to named endpoints, for example "BackEndFrontEnd" or "PluginHost". Note that multiple instances of the same endpoint may be established (such as with plugin hosts).

Member Author

Well, the above code is doing us a disservice, as there is no need for connection scopes in this case: there is a 1-to-1 relationship between the electron-main runtime and the backend runtime.

Connection scopes are only required when you have a 1-to-many relationship, like between the backend and frontends.

In such a case (not in this PR yet), I have wired an internal router that takes connections targeted at the backend and tries to identify the originating frontend instance in order to route the connection to the appropriate scoped Inversify container. Connection handlers are then run in that scoped container, like we do now. The handlers determine whether we're doing RPC or anything else over the new connection.

Comment on lines -71 to -81
@injectable()
export class SampleUpdaterClientImpl implements SampleUpdaterClient {

    protected readonly onReadyToInstallEmitter = new Emitter<void>();
    readonly onReadyToInstall = this.onReadyToInstallEmitter.event;

    notifyReadyToInstall(): void {
        this.onReadyToInstallEmitter.fire();
    }

}
Member Author (@paul-marechal, May 6, 2022)

This is a very important change with the new RPC/proxy API: we no longer set up "clients" to get notifications back from the remote; we can use Event fields directly instead.

Contributor

I'm not sure this is actually an improvement. How is this better than the old way of setting up service objects on both ends?

Contributor

I fear this might make initialization of service objects more complicated.

Member Author

I don't see how this makes things more complicated. I keep deleting code to use this new API.

Comment on lines -116 to +107
    @inject(SampleUpdaterClientImpl)
    protected readonly updaterClient: SampleUpdaterClientImpl;

    protected readyToUpdate = false;

    @postConstruct()
    protected init(): void {
        this.updaterClient.onReadyToInstall(async () => {
        this.updater.onReadyToInstall(async () => {
Member Author

Here's an example of using Event fields directly on the proxy. Less boilerplate code than with clients.

Contributor

I generally don't mind boilerplate code if it's simpler to understand than the alternative. Typing is not the problem in software construction.

Member Author (@paul-marechal, May 11, 2022)

Typing prevents mistakes. That apart, the boilerplate previously required by the xxxClient API was often misused, which I take as a symptom of a design problem.

The new API allows servers to expose their events through proxies. For requests going the other way from the main RPC setup, you can just create a new dedicated proxy.

Just as an example, the most egregious implementation I've seen using clients must be this one:

this.terminalWatcher.onUpdateTerminalEnvVariablesRequested(() => {
    this.storageService.getData<string>(ENVIRONMENT_VARIABLE_COLLECTIONS_KEY).then(data => {
        if (data) {
            const collectionsJson: SerializableExtensionEnvironmentVariableCollection[] = JSON.parse(data);
            collectionsJson.forEach(c => this.shellTerminalServer.setCollection(c.extensionIdentifier, true, c.collection));
        }
    });
});

Here TerminalWatcher is a client of TerminalServer. Instead of having the remote server call a method to fetch collections from its clients and use the result, it goes through this roundabout procedure where the server sends an event and expects the receiving side to initiate subsequent RPC calls (plural) via this.shellTerminalServer.setCollection(...).

In my latest changes I replaced the idea of a client on the server side with a proxy to a frontend object implementing the storage methods. The server just calls environmentStore.getEnvironmentVariables() or environmentStore.saveEnvironmentVariables(vars); the calls are transparently handled by the frontend.
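
Roughly, the shape of that frontend-implemented store is the following (the interface name and signatures here are illustrative, not the exact code from my branch):

```ts
// Implemented in the frontend, served over RPC, and called from the backend
// through a plain proxy instead of a setClient-style callback object.
export interface EnvironmentStore {
    getEnvironmentVariables(): Promise<SerializableExtensionEnvironmentVariableCollection[]>;
    saveEnvironmentVariables(vars: SerializableExtensionEnvironmentVariableCollection[]): Promise<void>;
}
```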

return new Proxy(target, new DisposableProxyHandler());
}

export class DisposableProxyHandler<T extends Disposable> implements ProxyHandler<T> {
Member Author

This file is not used anymore, but it provides a tool to wrap disposables so that we get warnings (without throwing) whenever we use a disposed instance, since that is a bug.

Comment on lines +26 to +39
/**
 * Represents a scoped connection to a remote service on which to call methods.
 *
 * There should be a 1-to-1 relationship between a `RpcConnection` and the
 * remote service it represents.
 */
export const RpcConnection = serviceIdentifier<RpcConnection>('RpcConnection');
export interface RpcConnection {
    onClose: Event<void>
    onRequest(handler: (method: string, params: any[], token: CancellationToken) => any): void
    onNotification(handler: (method: string, params: any[]) => void): void
    sendRequest<T>(method: string, params: any[]): Promise<T>
    sendNotification(method: string, params: any[]): void
}
Member Author

In my opinion this is one of the most important new abstractions in this change.

No matter how your RPC protocol works, or how you establish the connection, as long as you respect this interface we can create and serve JavaScript proxies.

Instances that implement this interface should be scoped to a service.
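
As a small sketch (using names from the electron-main/backend wiring in this PR; someConnection stands for whatever Connection you already have, localService for the instance to expose):

```ts
// Wrap any message connection into an RpcConnection using the JSON-RPC
// implementation, then serve a local object over it.
const rpcConnection = jsonRpcConnectionFactory(someConnection);
rpcProxying.serve(localService, rpcConnection);
```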

Contributor (@tsmaeder, May 11, 2022)

IMO, we need to think about what we're abstracting here: is it transport (socket, electron, ipc) or is it lifecycle? I think the concepts adopters should deal with are service objects (proxies and servers) and transports (or channels in the related PR). Everything in between should be an implementation detail.

Member Author

This is an implementation detail, but remote procedure calls inherently carry the idea that there's some underlying "connection" to a remote. The connection-related onClose event is only there to tell users of this API that the link is no longer valid and that further calls won't work anymore. If the link never breaks, onClose will never fire. Note that this API doesn't leak more than that to users; it only notifies proxies that RPC won't work anymore with the current instance.

Comment on lines +45 to +65
/**
 * A `Connection` allows you to listen for messages and send messages back.
 *
 * Messages must arrive integrally and in the order they are sent!
 *
 * Most implementations are going to be API adapters.
 */
export const Connection = serviceIdentifier<Connection<any>>('Connection');
export interface Connection<T> {
    readonly state: ConnectionState
    onOpen: Event<void>
    onClose: Event<void>
    /**
     * Emitted when something goes wrong with the underlying transport.
     * Typically when a message fails to be sent or received.
     */
    onError: Event<Error>
    onMessage: Event<T>
    sendMessage(message: T): void
    close(): void
}
Member Author (@paul-marechal, May 6, 2022)

Second most important abstraction: a normalized interface to send/receive messages.

We deal with so many APIs for communication (sockets, Node IPC, Electron IPC, custom multiplexed channels, etc.) and no two of them have the same API. Yet we use them all in the same way: to send and receive messages.

This type wouldn't be required if it weren't for the other components I created that are used to help with setting up communication stacks.

e.g. I need to do multiplexing over different IPC APIs for Electron but I only want one multiplexer implementation. By using the Connection type I can just write adapters for each API and use both in the same DefaultConnectionMultiplexer implementation.

Another important thing to keep in mind with connections: they don't have to send only strings or buffers. Most APIs support automatic serialization, and some of those implementations do a much better job than any JSON.stringify we would call ourselves.

e.g. Socket.io will optimize the transmission of buffers sent through it. Because we assumed we should just send strings, we called JSON.stringify on our structures before passing them to Socket.io, preventing it from sending our data efficiently because we handed it a raw string.

The idea with connections then is to understand what the underlying transport supports sending and stick to that, unless we have more specific needs. You may see more Connection<any> than Connection<string> in this PR.
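
To make the adapter idea concrete, here is a minimal sketch (not code from this PR) that adapts a Node.js worker_threads MessagePort to the Connection<T> shape. The import path and the local ConnectionState enum are assumptions; the real definitions live in packages/core in this PR:

```ts
import { MessagePort } from 'worker_threads';
import { Emitter, Event } from '@theia/core/lib/common/event';

// Hypothetical stand-in for the PR's ConnectionState.
enum ConnectionState { Opened, Closed }

export class MessagePortConnection<T = any> {

    state = ConnectionState.Opened;

    protected onOpenEmitter = new Emitter<void>();
    protected onCloseEmitter = new Emitter<void>();
    protected onErrorEmitter = new Emitter<Error>();
    protected onMessageEmitter = new Emitter<T>();

    // The port is already open when we wrap it, so onOpen never fires here.
    readonly onOpen: Event<void> = this.onOpenEmitter.event;
    readonly onClose: Event<void> = this.onCloseEmitter.event;
    readonly onError: Event<Error> = this.onErrorEmitter.event;
    readonly onMessage: Event<T> = this.onMessageEmitter.event;

    constructor(protected port: MessagePort) {
        // MessagePort already does structured-clone serialization, so messages
        // can stay plain objects: in practice this is a Connection<any>.
        port.on('message', message => this.onMessageEmitter.fire(message));
        port.on('messageerror', error => this.onErrorEmitter.fire(error));
        port.on('close', () => {
            this.state = ConnectionState.Closed;
            this.onCloseEmitter.fire();
        });
    }

    sendMessage(message: T): void {
        this.port.postMessage(message);
    }

    close(): void {
        this.port.close();
    }
}
```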

Contributor

We've discussed the necessity of encoding/decoding messages at some point. Either we assume a transport takes care of object encoding or we assume the "application layer" does.
Either way, we can run a stateful RPC protocol (request/reply) on the abstraction of a "Channel" or "Connection", and the protocol should be separate from the transport. IMO, separating encoding/transport/protocol is the right way to go in the long run.

Member Author

> Either we assume a transport takes care of object encoding or we assume the "application layer" does.

Ultimately, if you implement a component that relies on some connection, I believe you should assume that the connection will support whatever data objects you need to send. Only when instantiating this component and plugging it into the rest of the system should you consider whether serialization is required for your data types. If your data needs special encoding, you should handle the conversion at the wiring level rather than in the component's design.
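
For example, with the ConnectionTransformer bound in this PR, a string-only transport can be adapted at the wiring level (rawStringConnection is a placeholder for whatever Connection<string> you start from):

```ts
const transformer = ctx.container.get(ConnectionTransformer);
// Components built on top of `objectConnection` only ever see plain objects.
const objectConnection: Connection<any> = transformer(rawStringConnection, {
    decode: (message, emit) => emit(JSON.parse(message)),
    encode: (message, write) => write(JSON.stringify(message))
});
```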

@paul-marechal (Member Author)

Note that I am in the process of replacing the frontend/backend API with the new one. I initially thought it would be better to merge only the Electron bits, but I don't think so anymore...

I'll mark the PR ready for review once I'm finished with frontend/backend. This should not block #11011.

In the meantime the high-level abstractions and ideas won't change, so feedback is still welcome on the new approach!

e.g. Named ProxyProvider, named ServiceContribution, RpcConnection, Connection<T>.

@paul-marechal (Member Author)

I don't have an ETA on the backend-frontend refactoring part, but I believe I have passed the mid-point.

Mostly I am fighting with oddities in the previous code.

@@ -91,7 +91,7 @@ const serverModule = require('./server');
const serverAddress = main.start(serverModule());

serverAddress.then(({ port, address }) => {
if (process && process.send) {
if (process.send) {
Contributor

This is what I mean by "drive-by fix". Yes, it's a good fix, but it's unrelated to the problem at hand.

Member Author

Fair enough on this one.

/**
 * Use this to create "typed symbols" for Inversify typings to understand our bindings.
 */
export function serviceIdentifier<T>(name: string): SymbolIdentifier<T> {
Contributor

Changing service identifiers from symbols to their own types is one change that could be done in a separate PR to make this one smaller. It affects a lot of files that need to be reviewed.

Member Author

I'll look into extracting this once I get back to a functional state with this PR.

Right now I only used serviceIdentifier in the places I happened to touch, but I could try to make a PR that replaces everything instead.

 *
 * Most implementations are going to be API adapters.
 */
export const Connection = serviceIdentifier<Connection<any>>('Connection');
Contributor

Why is there a service identifier here?

Member Author

I used to bind named connections as part of the internal wiring; having the identifier defined here helped.

 * Most implementations are going to be API adapters.
 */
export const Connection = serviceIdentifier<Connection<any>>('Connection');
export interface Connection<T> {
Contributor

What's the concrete benefit of connections being typed?

Member Author

This is a way to advertise what the transport supports. Most connections are Connection<any>, but as soon as you see Connection<string> or Connection<Buffer> you know you will need to serialize your data.

@paul-marechal (Member Author)

Closing for now; I'll come back to it eventually.
