TcpClient trait for creating shared TCP/IP stack #69
Conversation
As a follow-up to my comment: along the lines of this old attempt, which we never got any further with: https://github.com/BlackbirdHQ/embedded-nal/blob/feature/tls-options/src/tls.rs
Yeah, for TLS (which I assume is out of scope for this PR), another example is https://github.com/drogue-iot/embedded-tls/blob/main/src/config.rs#L68, where you have additional stuff like an RNG needing to be passed through for the TLS open phase.
I'll try to contribute a STD implementation based on the minimalistic smol async_io reactor. The existing STD example does not depend on any reactor impl, but is obviously not really async. smol does work on top of ESP-IDF's STD implementation and is thus a good fit for production use on the ESP32 chip (non-bare-metal, with ESP-IDF). For bare metal we are using smoltcp, so that's covered already.
This seems okay to me, but my knowledge of async is too limited to review it properly; perhaps someone else could pitch in? :) @Dirbaio perhaps?
@ivmarkov Would you mind having a look at whether these traits look OK to you?
@ivmarkov Much appreciated, thanks! I've updated the links.
I've implemented the trait. Next steps: I'll layer your async HTTP & MQTT clients on top of it. If all goes well, I guess we could call it a day and accept the PR! P.S. I have a small caveat with...
embedded-nal-async/src/stack/tcp.rs:

```rust
/// creating a TCP connection.
pub trait TcpClient: embedded_io::Io {
    /// Type returned when creating a connection.
    type TcpConnection<'m>: embedded_io::asynch::Read + embedded_io::asynch::Write
```
I think we should enforce (as done here) that `TcpClient` and `TcpConnection` share a common error type. That makes it easier for user code to consume the `TcpConnection` GAT and the trait overall.
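A minimal sketch of that enforcement, assuming the embedded-io `Io` trait with an associated `Error` type (the exact bounds in the PR may differ):

```rust
use embedded_io::Io;

pub trait TcpClient: Io {
    /// The connection is constrained to the client's own error type.
    type TcpConnection<'m>: Io<Error = Self::Error>
        + embedded_io::asynch::Read
        + embedded_io::asynch::Write
    where
        Self: 'm;
}
```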
Yeah, that's probably a good idea.
embedded-nal-async/src/stack/tcp.rs:

```rust
    Self: 'm;

/// Future returned by `connect` function.
type ConnectFuture<'m>: Future<Output = Result<Self::TcpConnection<'m>, Self::Error>> + 'm
```
I don't think that using lifetime `'m` for `Self::TcpConnection<'m>` achieves the advertised goal of "this trait allows creating multiple connections using the same TCP/IP stack".

The problem with `'m` is that it lives as long as the `&'m mut self` reference to `TcpClient` on line 193 below. What this means is that I can keep the `Self::TcpConnection<'m>` instance around only as long as the `&'m mut self` reference to `TcpClient` is alive. This results in two problems:

- I can't hold an instance of `Self::TcpConnection<'m>` for a longer period than `'m`, i.e. I can't implement caching and connection re-use, as I do here.
- Worse, I can't really create multiple `TcpConnection` instances, as currently each created `TcpConnection` essentially holds a `&mut TcpClient` reference. So at any point in time I can in fact have only a single `TcpConnection`.

The solution IMO is that the `TcpClient` trait must be parameterized by another lifetime (say, `'t`), and the returned `TcpConnection` instances should depend on this (longer) lifetime `'t` and not on `'m` - as I did here. (For STD implementations `'t` is actually `'static`, as implemented here.)

See also this PR for all suggested changes.
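Roughly, the proposed parameterization could look like this (a sketch only; the linked PR is authoritative and may differ in detail):

```rust
use core::future::Future;
use no_std_net::SocketAddr;

pub trait TcpClient<'t>: embedded_io::Io {
    /// Bounded by the longer 't rather than by the `&'m mut self` borrow.
    type TcpConnection: embedded_io::asynch::Read + embedded_io::asynch::Write + 't;

    type ConnectFuture<'m>: Future<Output = Result<Self::TcpConnection, Self::Error>> + 'm
    where
        Self: 'm,
        't: 'm;

    fn connect<'m>(&'m mut self, remote: SocketAddr) -> Self::ConnectFuture<'m>;
}
```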
I'm not sure introducing another lifetime helps. The way `TcpClient` is intended is that you create the clients on application init, and then pass them to whatever needs to create connections.

Having the connection outlive the client means that you need to store connection state somewhere global, whereas with this trait the connection can rely on the `TcpClient` being around until it is dropped.
> I'm not sure introducing another lifetime helps. The way TcpClient is intended is that you create the clients on application init, and then pass them to whatever needs to create connections.

OK, so you are modelling option "a" from my "philosophical" comment further below. No complaints about that, and I see your arguments why we do not necessarily need to model the notion of a "Stack" from embassy as a trait - we might not need that in the embedded world indeed.

In that case:

- How about re-wording the comment above the `TcpClient` trait? It currently says "... this trait allows creating multiple connections using the same TCP/IP stack.". In retrospect I see what you mean, but that comment might still be misleading, as its direct interpretation led me to believe that I'm supposed to be capable of creating multiple connections via a single `TcpClient` instance.
- It might also help if we rename `TcpClient` to `TcpClientSocket`, because `TcpClient` is really a socket in a way. Granted, one that can also be in a disconnected state initially, but that's just like the native smoltcp `TcpSocket` type, isn't it?
- The current design, where `connect` takes a `&mut self` reference to `TcpClient(Socket)`, makes caching the `TcpConnection` very difficult. Can you look here again? I'm not really sure how to cache the `connection` instance inside `Client`. I would expect my MQTT client to be capable of auto-reconnecting if the socket connection dies - your esp8266 example does not do that; does this mean that on socket error the whole thing falls apart, or is the actor framework somehow automagically restarting the actor? Ditto for the HTTP client, where it is not just about re-connecting, but in fact about re-using the same (in the worst case - TLS) socket across multiple requests. The only way I see to fix this is by explicitly introducing an `HttpConnection` trait that will wrap the `TcpConnection` instance by taking a `&mut self` reference to the `Client`, which in turn wraps and owns a `TcpClient`. A bit too heavyweight a solution.
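To illustrate the caching problem described in the last point (hypothetical code, not from any of the linked branches):

```rust
// A wrapper that tries to cache the connection alongside the client.
struct CachingClient<'a, T: TcpClient + 'a> {
    tcp: &'a mut T,
    // T::TcpConnection<'a> keeps `*tcp` mutably borrowed for all of 'a...
    conn: Option<T::TcpConnection<'a>>,
}
// ...so once `conn` is populated, a reconnect via `self.tcp.connect(..)`
// can no longer be expressed without dropping the cached connection first.
```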
I agree the comment should be updated. We can also rename it, but I'd be wary about using the name "Socket", as sockets are not reusable once closed/dropped (much like a connection), and this could set wrong expectations as well. Not sure what a good name would be, maybe `TcpConnectionFactory` :) IMHO `TcpClient` is an OK compromise, but maybe some better name exists.

On the connection reuse, I think it's a double-edged sword. Having the MQTT and HTTP clients wrap the TCP client makes them less flexible, I think, but more user friendly. I need to think/experiment a bit with this to see what can be done, but I think a fix similar to what you propose might be needed. In general, you're trying to reuse resources across multiple requests, much like `TcpClient` tried to reuse the TCP stack for multiple connections, so I think the same pattern emerges.
Here's an alternative design which avoids the lifetime problem, at the expense of runtime errors if you try to read/write from a `TcpClientSocket` instance which is not connected yet. But... this might well happen even for a connected socket if the other side disconnected, mightn't it?

As for renaming `TcpClient` to `TcpClientSocket`... I see your concerns, but on the other hand, that did not stop the smoltcp folks from naming their own thing `TcpSocket`, even if it is a bit different from a STD socket.

As for the MQTT and HTTP clients wrapping a TCP client making them less flexible - yes and no. E.g. if you do not want to use the full-fledged HTTP client, you can just re-use the `Request` trait impl, which only needs a `Read + Write` trait impl. The benefit of wrapping the TCP client (socket) is exactly that you get easier management of the socket connection, with automatic reconnect and reuse, which I thought was one of the main reasons to push for the `TcpClient` trait anyway?

By the way, the way I would imagine TLS being layered is by a `TcpClient(Socket)` impl which takes another, raw `TcpClient(Socket)` impl and layers `embedded-tls` on top, in a way which is transparent for the consumer...
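As a skeleton, the layering idea might look like this (all names here - `TlsClient`, `TlsConfig` - are illustrative, taken neither from the PR nor from embedded-tls):

```rust
/// Hypothetical TLS configuration, fixed at construction time.
pub struct TlsConfig<'a> {
    pub server_name: &'a str,
    pub ca_cert: &'a [u8],
}

/// Wraps a raw TcpClient and would itself implement TcpClient, with
/// `connect` doing the TCP connect plus the TLS handshake, so that
/// consumers only ever see plain Read/Write.
pub struct TlsClient<'a, T: TcpClient> {
    tcp: T,
    config: TlsConfig<'a>,
}
```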
(Side topic, but perhaps related, as ideally we should be able to abstract out the TLS machinery as well.)

> By the way, the way I would imagine TLS being layered is by a `TcpClient(Socket)` impl which takes another, raw `TcpClient(Socket)` impl and layers `embedded-tls` on top, in a way which is transparent for the consumer...

Yeah, I went down this route initially, but it's hard to make it transparent, because you typically want to configure the TLS server name, CA certificates, client certs, etc., and they cannot be represented in the `TcpClient` connect API.

Huh, that's very unfortunate. I do not have an in-depth understanding of TLS, but I was assuming that whatever you configure should be independent of the host to which you are connecting, and therefore should not be passed to the `connect` method, but earlier, when the `TcpClient(Socket)` is constructed. I.e.:

- CA certificates are always the same
- The client certificate should be the same too, regardless of the host?
- Not sure about the TLS server name though... :(
All of these settings depend on what you're connecting to. Limiting them to be configured at client construction time can be fine for many use cases and is certainly possible. It just means in practice you can only connect to the same host with the same client: this is probably fine, but it is weird, since `connect()` in the trait is passed the `SocketAddr`, which means you can use it to connect to whatever IP + port you want - so shouldn't that also be in the constructor of the type... :)

TLS also requires some resources only at connect time, like a random number generator. Borrowing it only on connect means you don't need to deal with complex lifetimes and shared access to the RNG in the application.
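A sketch of the "borrow the RNG only on connect" shape (the signature is purely illustrative - neither this PR nor embedded-tls defines it like this; `dyn RngCore` keeps the GAT future independent of a generic RNG parameter):

```rust
use core::future::Future;
use no_std_net::SocketAddr;
use rand_core::RngCore;

pub trait TlsConnect: embedded_io::Io {
    type ConnectFuture<'m>: Future<Output = Result<(), Self::Error>> + 'm
    where
        Self: 'm;

    /// The RNG is borrowed only for the duration of the handshake.
    fn connect<'m>(
        &'m mut self,
        rng: &'m mut dyn RngCore,
        remote: SocketAddr,
    ) -> Self::ConnectFuture<'m>;
}
```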
But let's not finalize that design here :D Would it make sense then to update the comment + replace TcpClient with the TcpClientSocket trait, as you propose, for this PR?
> But let's not finalize that design here :D Would it make sense then to update the comment + replace TcpClient with the TcpClientSocket trait, as you propose, for this PR?
Yes. But only if you feel that's the right direction too. Does it make sense to try to implement the new trait for smoltcp and esp8266 AT, as you did for the original trait? I'll do the same for async STD, and I'll also try to use the new trait in my experimental HTTP client traits.
Yeah, good suggestion, I'll give that a go.
Async STD impl in progress here.
embedded-nal-async/src/lib.rs:

```diff
@@ -14,4 +14,4 @@ pub use no_std_net::{IpAddr, Ipv4Addr, Ipv6Addr, SocketAddr, SocketAddrV4, SocketAddrV6};

 pub use dns::Dns;
 pub use embedded_nal::AddrType;
-pub use stack::{TcpClientStack, TcpFullStack, UdpClientStack, UdpFullStack};
+pub use stack::{TcpClient, TcpClientStack, TcpFullStack, UdpClientStack, UdpFullStack};
```
If `TcpClient` is successful, I don't think we need the `*Stack` implementations at all?

Further, we'll probably need a `UdpClient` trait, as well as `TcpServer` and possibly `UdpServer` traits, where the latter two implement socket `bind` + `accept` logic. With multiple connections again, at least for the `accept` part.
Yeah, I think they should be kept around until we find an approach that could work for the server side.
embedded-nal-async/src/stack/tcp.rs:

```rust
/// Connect to the given remote host and port.
///
/// Returns `Ok` if the connection was successful.
fn connect<'m>(&'m mut self, remote: SocketAddr) -> Self::ConnectFuture<'m>;
```
Hmmmm. Philosophical question: I'm looking at the `SmolTcpClient` impl here and I'm no longer sure what `TcpClient` is supposed to represent after all?

The way `TcpClient` is currently implemented for smoltcp, it seems to represent a thing that - at any time - has either zero or one active connection. Even if we introduce lifetime `'t`, the fact that you are taking `&mut` refs to singleton TX and RX buffers means, again, at most one active connection per `TcpClient` at a time.

So... what are we modelling with `TcpClient`?

- (a) If it is a thing that can produce one active connection at a time - fine - but then we need another trait that will model `embassy_net::Stack`. Moreover, this (`Stack`?) trait should not be async, and should either take `&self` rather than `&mut self` (i.e. it should be internally synchronized), or we should ask users to protect it with a blocking Mutex - so that I can hand out references to it to e.g. the HTTP client, MQTT client and HTTP server, and they can all pull connections (`TcpClient` - and future `TcpServer` - instances, actually) from it "in parallel".
- (b) If it is a thing that can indeed produce multiple connections at a time, we need to think about how to abstract away the notion of TX and RX buffers.

With (b) there is another problem: your current `connect` method on line 193 in this PR is async and takes `&mut` of the `TcpClient` trait. So I cannot just hand out references to `TcpClient` to multiple clients and expect them to be able to pull `TcpConnection`s in parallel. For that - at the very least - I'll need an async mutex. And that is if we are able to solve the TX and RX problems from above in the first place. EDIT: These are solvable by just pushing them inside `TcpConnection`, as buffers owned by `TcpConnection` itself. (Not that there won't be other lifetime problems to solve, but that approach seems tractable.)
Yes, a client can only have one connection at a time; this is on purpose. The idea is that, with your stack impl, you can pre-create the number of clients your application expects, so that you can do this without alloc, but still create the connections "on demand".

I'm not sure I see the need for a Stack trait, because the whole point, as I see it, is that the application knows which stack it is using at the point of creating the clients, so it creates a number of clients equal to the number of connections it needs.

Example with multiple clients:
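The example itself did not survive extraction; a minimal sketch of the idea, assuming a hypothetical `SmolTcpClient` whose constructor takes the shared stack plus per-client buffers:

```rust
// (inside some async fn) Two pre-created clients sharing one stack:
// one connection each, no alloc.
let mut client1 = SmolTcpClient::new(&stack, &mut tx_buf1, &mut rx_buf1);
let mut client2 = SmolTcpClient::new(&stack, &mut tx_buf2, &mut rx_buf2);

let conn1 = client1.connect(addr1).await?;
let conn2 = client2.connect(addr2).await?; // lives concurrently with conn1
```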
@MathiasKoch @lulf - Latest design looks good! +1 for merging it.
I think the new design is not better than the previous one.

It's true the lifetime problem is an issue though... (having a client that auto-reconnects if the connection dies). The new design doesn't fully solve that either; you still have the problem with...

Any brilliant ideas how to preserve the statefulness of the current design (TcpClientSocket) while preserving the rigor of the previous one? If that's not clear: the statefulness of the design is its major selling point IMO. If I don't have it, I can just as well fall back to relying only on `Read + Write`, as having the ability to...

Also, I thought it was obvious, but let me state this explicitly: the `TcpConnection` thing in the new design is supposed to be used in the rare case where you don't need caching. Otherwise the raw `TcpClientSocket` would not have been there in the first place.

I think the latest trait maps more naturally to the no_std drivers, whether it's smoltcp, esp8266 or eswifi. (Smoltcp example here @Dirbaio: https://github.com/lulf/embassy/blob/embassy-net-async-nal/embassy-net/src/tcp.rs#L297). Yes, std, tokio etc. require the Option, but I think we shouldn't optimize for those. I agree it is unfortunate that the API can be "misused", but I also think it's easier to use for higher-level clients that need to "reconnect", without needing the Option at that level. The `TcpConnection` type was an attempt at a compromise where you can still get the "disconnect on drop" semantics if you want that, and I think this is OK.
Now that you mention it... Also, the fact that smoltcp sockets are reusable is an artifact of how smoltcp manages memory. I don't consider that a good feature. Attempting to "preserve" that in the traits is leaking implementation details. I'm using a trait like this:

```rust
pub trait TcpConnect {
    type Conn<'a>: Read + Write + Split
    where
        Self: 'a;

    type ConnectFuture<'a>: Future<Output = Result<Self::Conn<'a>, TcpConnectError>>
    where
        Self: 'a;

    fn connect<'a>(&'a mut self, addr: Address<'a>, port: u16) -> Self::ConnectFuture<'a>;
}
```
```rust
// impl for embassy-net
pub struct TcpClient {
    stack: &'static Stack<NetDevice>,
    rx_buffer: [u8; BUFFER_SIZE],
    tx_buffer: [u8; BUFFER_SIZE],
}

impl TcpConnect for TcpClient {
    type Conn<'a> = TcpSocket<'a>
    where
        Self: 'a;

    type ConnectFuture<'a> = impl Future<Output = Result<Self::Conn<'a>, TcpConnectError>>
    where
        Self: 'a;

    fn connect<'a>(&'a mut self, addr: Address<'a>, port: u16) -> Self::ConnectFuture<'a> {
        async move {
            let ip = ..; // resolve `addr` to an IP address
            let mut socket = embassy_net::tcp::TcpSocket::new(&self.stack, &mut self.rx_buffer, &mut self.tx_buffer);
            info!("connecting...");
            let r = socket.connect((ip, port)).await;
            if let Err(e) = r {
                info!("connect error: {:?}", e);
                return Err(TcpConnectError::ConnectionFailed);
            }
            info!("connected!");
            Ok(TcpSocket { inner: socket })
        }
    }
}
```

The idea is the connection borrows the client's TX/RX buffers, so each `TcpClient` supports one connection at a time.

About caching long-lived connections like MQTT: the previous design works well with a dedicated task. Very rough pseudocode of what I'm doing (it's not MQTT, but it's also a long-lived connection that can do requests in both directions: device->cloud or cloud->device):

```rust
async fn net_task(mut client: TcpClient) {
    loop {
        match client.connect(addr, port).await {
            Ok(conn) => match handle_conn(conn).await {
                Ok(_) => unreachable!(), // only returns on fail
                Err(e) => warn!("Handle conn failed: {:?}, reconnecting", e),
            },
            Err(e) => warn!("connect failed: {:?}, retrying", e),
        }
        // some sleep to backoff maybe
    }
}

async fn handle_conn(conn: TcpSocket<'_>) -> Result<(), Error> {
    // read/write to the conn. Return if conn breaks.
    // communicate with other tasks with channels.
}
```

Having a dedicated task has more advantages, besides solving the "lifetime issue".
@Dirbaio Your design works well for stuff similar to MQTT, because it is the MQTT client which controls and encapsulates the "loop", and thus having two nested loops is not the end of the world, as this is encapsulated inside the MQTT client itself. How would that work for an HTTP client, which by necessity needs to expose something like a request API, yet should transparently re-use the cached connection across requests? Don't get me wrong - if I've missed the elephant in the room, please point it out to me. I would gladly take the previous design then, as it has obvious other advantages.
Ah, the http case... :( What if `connect` took `&self` instead of `&mut self`? Then an HTTP client could look like this. The borrow checker will be happy because it's not self-referential:

```rust
struct HttpClient<'a, T: TcpClient> {
    client: &'a T,
    conn: Option<T::Conn<'a>>,
}
```
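A sketch of how this shape enables lazy reconnects (assuming `connect` takes `&self` and the trait exposes an error type; the method here is hypothetical):

```rust
impl<'a, T: TcpClient> HttpClient<'a, T> {
    /// (Re)establish the cached connection on demand. Not self-referential:
    /// `conn` borrows through the shared `&'a T`, not through `&mut self`.
    async fn ensure_connected(&mut self, remote: SocketAddr) -> Result<(), T::Error> {
        if self.conn.is_none() {
            self.conn = Some(self.client.connect(remote).await?);
        }
        Ok(())
    }
}
```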
Yes, that would likely work, and it is a variation of the design I proposed initially here, except yours has slightly different lifetimes which are more likely to work. Sorry - all the code in my original comment is now gone, except in the GitHub history. But as you correctly say, we'll now have an issue with TX/RX buffer management becoming more complicated (pools and such). I am not sure what the best path forward is.
I better understand the non-mut connect option now, but I would like to experiment a bit with some of the drivers to get a feel for it, so let's hold off merging anything for now.
Would it be possible, where it makes sense, to add a similar API on the blocking side, to maintain a somewhat 1:1 capability?
I suggest we wait until @lulf returns and then continue on the async API. Once we have this one, we can probably think about the non-async ones. By the way, by "blocking" I assume you mean the old "nb" one? Because - technically speaking - it does not block either. For me it is also a question of whether we should even look at further developing the "nb" ones. "nb" is a weird spot to be in - neither a true blocking one, nor a true async one... and difficult to compose. BTW, isn't...
I absolutely agree we should wait until you reach a consensus on the current design 👍
Yeah, that was what I meant, though you are correct. I think we will just tag along on the way. The one thing I would like to see in this abstraction is a more Rust-like feel, along the lines of #53.
Yep, IMO that's what we should do.
Heh. The latest design for the client socket is mostly what is suggested in #53 anyway, and it is still a bit controversial - see the comments from @Dirbaio earlier in the thread. Compare:

```rust
pub trait TcpClientSocket:
    embedded_io::Io + embedded_io::asynch::Read + embedded_io::asynch::Write
{
    type ConnectFuture<'m>: Future<Output = Result<(), Self::Error>> + 'm
    where
        Self: 'm;

    fn connect<'m>(&'m mut self, remote: SocketAddr) -> Self::ConnectFuture<'m>;

    type IsConnectedFuture<'m>: Future<Output = Result<bool, Self::Error>> + 'm
    where
        Self: 'm;

    fn is_connected<'m>(&'m mut self) -> Self::IsConnectedFuture<'m>;

    fn disconnect(&mut self) -> Result<(), Self::Error>;
}
```

Proposal from #53:

```rust
pub trait TcpSocket {
    type Error;

    fn connect(&mut self, remote: SocketAddr) -> Result<(), Error>;
    fn send(&mut self, data: &[u8]) -> Result<usize, Error>;
    fn receive(&mut self, data: &mut [u8]) -> Result<usize, Error>;
    fn close(self);
}
```

Modulo blocking, and modulo naming differences...
Final set of comments. With the general approach I'm now completely fine!
Oh, and one more:

```rust
impl<T> TcpConnector for &T
where
    T: TcpConnector,
```

@Dirbaio would appreciate feedback here ^^^. By the way, the above is currently unimplementable, because...
I disagree with that. I've seen this term used in similar constructs, and I find TcpConnector a sufficiently short term that I think fits well with its purpose. In the same way, I think that Acceptor or TcpAcceptor is a good name for a server-side version that accepts connections.
Agreed, I'll rename it.
It does simplify things, so yes, let's do that.
Actually, perhaps not? In my smoltcp implementation, I have an error type that can be "OutOfMemory" when no more connections can be created. It feels wrong to require that connection read/write errors be the same as the connect errors.
OK, that would also work, as long as the server "socket" that does the bind is named - say -...
Let me try to adjust my HTTP client if...
I think usually...
We can add that. There's only the... That being said, what do we want regarding naming?
I'd prefer TcpConnect in that case - it is more descriptive of the trait and what it does than TcpClient, IMHO.
Agreed, I've removed relying on the embedded_io::Io trait.
Haven't finished and will continue tonight, but I'm not thrilled by the complexity introduced as a result of the separate error types. For example, if I want to wrap both in an enum error, here's what I got:

```rust
#[derive(Debug)]
pub enum IoError<C, E> {
    Connect(C),
    Io(E),
}

impl<C, E> Error for IoError<C, E>
where
    C: Error,
    E: Error,
{
    fn kind(&self) -> embedded_io::ErrorKind {
        match self {
            Self::Connect(e) => e.kind(),
            Self::Io(e) => e.kind(),
        }
    }
}
```

So far so good, but then the signatures of the HTTP client methods become unreadable. E.g.:

Old:

```rust
pub async fn initiate_response(&mut self) -> Result<(), HttpError<T::Error>> {
```

New:

```rust
pub async fn initiate_response(&mut self) -> Result<(), HttpError<IoError<T::Error, <<T as TcpConnector>::Connection<'b> as Io>::Error>>> {
```

So in the new signature you have it all: lifetimes, trait casts (multiple of them). Joy! What am I missing? And if nothing, is the purity of differentiating between the `connect` error and the I/O error worth it?
I agree it's a mouthful, but on the other hand I think you can enforce the constraint in your HTTP client (see the bounds quoted in the reply below). Yes, it's more typing involved, but this way you enforce the error type to be the same, as you want. I'm not sure what's better longer term. You might be able to encode the above in a "super trait" as well? The problem with an error type covering lots of errors that can't happen is that users have to deal with errors that can never happen, and stop caring about handling specific errors, because now they can't rely on the type system to tell them which errors could actually happen.
> ```rust
> 'm,
> E,
> C: Io<Error = E> + Write + Read,
> T: TcpConnect<Error = E, Connection<'m> = C>,
> ```

I think the type constraints you are suggesting will constrain my HTTP client to accept only connectors which happen to implement a single error type for both `connect` and the connection I/O.

In general: I agree that the unhappy path should also get a decent level of attention, but the reality is - I think - that people tend to be more relaxed about typing in it (errors) than in the happy path, and might have a lower tolerance for complex types there. We might be going a bit too far with separate error types for `connect` and I/O.

@Dirbaio any thoughts? Is this really how stuff is modelled in...
I agree, it does become a bit much, so maybe let's do this: ...

This way, TcpConnect + Connection can use the same error, as you proposed, while still having a different set of errors from TcpListen? (The difference from before is that TcpConnect does not depend on Io, so TcpListen and TcpConnect are not forced to use the same error should they be implemented by the same type.)
I agree.
Sounds good to me. I do think having separate error types would be more "pure", but I agree it's not practical (also...)
@lulf I'm fine with the changes. Maybe one last thing - let's have the...
Okay, what happens is: the trait declaration compiles, but it's impossible to implement it unless... (I don't think it's a Rust bug; the behavior is the same without GATs.)
What do we do with the previous traits? Remove them?
I think we should, sooner rather than later; there are no uses of them that I know of at this point. (And it's not a 1.0 yet, so let's do it.)
Agreed. I'll get the changes sorted out and squashed tomorrow, thanks!
The TcpConnect trait allows creating a TCP connection that implements the embedded-io traits for I/O. The trait supports sharing a single driver for multiple TCP connections, though it is up to the implementation how this is done. The connection created by the trait is closed on drop.
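For reference, a sketch of the shape this thread converged on (the merged code is authoritative; this reconstruction may differ in detail):

```rust
use core::future::Future;
use embedded_io::Io;
use no_std_net::SocketAddr;

pub trait TcpConnect {
    /// Shared by `connect` and by the connection's I/O.
    type Error: embedded_io::Error;

    /// The connection; closed when dropped.
    type Connection<'a>: Io<Error = Self::Error>
        + embedded_io::asynch::Read
        + embedded_io::asynch::Write
    where
        Self: 'a;

    type ConnectFuture<'a>: Future<Output = Result<Self::Connection<'a>, Self::Error>>
    where
        Self: 'a;

    /// Takes `&self`, so a single client can be shared by several consumers.
    fn connect<'a>(&'a self, remote: SocketAddr) -> Self::ConnectFuture<'a>;
}
```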
Commits have been squashed. I'll do the removal of the existing traits in another PR, so that we can keep any discussion separate. @MathiasKoch ^^
Sure thing. Here we go! I'll wait with the release for the cleanup PR, I guess?
I'll have the PR ready shortly. Also, I noticed the previous release is 0.1.0; is there a convention that releases should carry an -alpha suffix until the API is stable, or is 0.2.0 fine?
I think the consensus was to have the...
New set of traits using embedded-io that allows creating shared TCP stack implementations using async.