Update (rewrite) overview, motivation, and data plane #24

Closed
wants to merge 17 commits into from
260 changes: 153 additions & 107 deletions specs/eventing/data-plane.md
@@ -1,125 +1,171 @@
# Knative Eventing Data Plane Contracts
# Knative Eventing Data Plane Contract

## Introduction

Developers using Knative Eventing need to know what is supported for delivery to
user-provided components that receive events. Knative Eventing defines a contract
for data plane components, which is listed here.

## Conformance
Late-binding event senders and receivers (composing applications using
configuration) only works when all event senders and recipients speak a common
protocol. In order to enable wide support for senders and receivers, Knative
Eventing extends the [CloudEvents HTTP
bindings](https://github.com/cloudevents/spec/blob/v1.0.1/http-protocol-binding.md)
with additional semantics for the following reasons:

- Knative Eventing aims to enable highly-reliable event processing workflows. As
such, it prefers duplicate delivery to discarded events. The CloudEvents spec
does not take a stance here.

- The CloudEvents HTTP bindings provide a relatively simple and efficient
network protocol which can easily be supported in a wide variety of
programming languages leveraging existing library investments in HTTP.

- Knative Eventing assumes a sender-driven (push) event delivery system. That
  is, each event processor is actively responsible for an event until it is
  handled (or affirmatively delivered to all following recipients).

Contributor: it might be good to stick with consistent terms and not introduce
"event processor" as an alternative for "receiver".

Member Author (@evankanderson, May 25, 2021): In this case, it actually means:
"Each sender is responsible for an event until it receives an acknowledgement
from the receiver. Each receiver MUST NOT send an acknowledgement for an event
until it is ready to take responsibility for the event." That's a little longer
than "each event processor is actively responsible for an event", but maybe
it's clearer?

Contributor: I'm not sure what it means to say that a "receiver is responsible
for an event". What are the responsibilities implied by that statement?

Member Author: See my previous comment. It's like "hot potato" -- you're
responsible for not dropping the potato until you hand it to someone else.


- Knative Eventing aims to make [event sources](./overview.md#event-source) and
  event-processing software easier to write; as such, it imposes higher
  standards on system components like [brokers](./overview.md#broker) and
  [channels](./overview.md#channel) than on edge components.

This contract defines a mechanism for a single event sender to reliably deliver
a single event to a single recipient. From this primitive, chains of reliable
event delivery and event-driven applications can be composed.

## Background

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD",
"SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be
interpreted as described in RFC2119.

## Data plane contract for Sinks
When not specified in this document, the [CloudEvents HTTP bindings, version
1](https://github.com/cloudevents/spec/blob/v1.0.1/http-protocol-binding.md) and
[HTTP 1.1 protocol](https://tools.ietf.org/html/rfc7230) standards should be
followed (with the CloudEvents bindings preferred in the case of conflict).

A **Sink** MUST be able to handle duplicate events.
The current version of this document does not describe protocol negotiation or
the ability to upgrade an HTTP 1.1 event delivery into a more efficient protocol
such as GRPC, AMQP, or the like. It is expected that a future compatible version
of this specification might describe a protocol negotiation mechanism.

Review comment: I think you should remove this sentence. It doesn't make sense
to me to describe the spec's future inside the spec itself. Maybe we can have
such claims somewhere else, but not in the spec. Maybe just keep: "The current
version of this document does not describe protocol negotiation or the ability
to upgrade an HTTP 1.1 event delivery into another protocol."

Member Author: I added this so that readers of the specification would be aware
when building compatible implementations. Since we don't have a separate version
number, it's possible that an implementation built against this specification
might be run against a future specification which would offer extra fields. The
(older) implementation should not reject the event because of additional fields
that it doesn't understand. (For example, if the Upgrade header is used, the
older implementation should just ignore the protocol upgrade offer.)

@slinkydeveloper (May 20, 2021): I see what you mean, but what you're talking
about is a concern of the future spec, not of the existing one: when we add such
mechanisms, we need to make sure they'll be backwards compatible, in the sense
that old implementations should still work properly. On the other hand, this
sentence sounds wrong, because we're saying in the spec what the future of the
spec might look like, which is just going to confuse the reader who needs to
implement the spec today. And also, what if we never keep this promise?

Member Author: I think it's important for implementers to realize that there may
be future-compatible extensions. How about the following language: "The current
version of this document does not describe protocol negotiation or any delivery
mechanism other than HTTP 1.1. Future versions may define protocol negotiation
to optimize delivery; compliant implementations SHOULD aim to interoperate by
ignoring unrecognized negotiation options (such as HTTP Upgrade headers)."

Review comment: The wording you proposed is definitely better, but it still
doesn't sound right to me to write such a "future proof" sentence inside the
spec. I prefer just keeping the first sentence of this new wording, but it's ok
if you want to have the whole text block. For me, everything from "Future
versions" onward should go in a doc like a primer: #24 (comment)


A **Sink** is an [_addressable_](./interfaces.md#addressable) resource that
takes responsibility for the event. A Sink could be a consumer of events, or
middleware. A Sink MUST be able to receive CloudEvents over HTTP and HTTPS.
## Event Delivery

A **Sink** MAY be a [_callable_](./interfaces.md#callable) resource that
represents an Addressable endpoint which receives an event as input and
optionally returns an event to forward downstream.
To provide simpler support for event sources which might be translating events
from existing systems, some data plane requirements for senders are relaxed in
the general case. In the case of Knative Eventing provided resources (Channels
and Brokers) which implement these roles, requirements are increased from
SHOULD to MUST. These cases are called out as they occur.

Almost every component in Knative Eventing may be a Sink, providing
composability.
### Minimum supported protocol

Every Sink MUST support HTTP Protocol Binding for CloudEvents
[version 1.0](https://github.com/cloudevents/spec/blob/v1.0/http-protocol-binding.md)
All senders and recipients MUST support the CloudEvents 1.0 protocol and the
[binary](https://github.com/cloudevents/spec/blob/v1.0.1/http-protocol-binding.md#31-binary-content-mode)
and
[version 0.3](https://github.com/cloudevents/spec/blob/v0.3/http-transport-binding.md)
with restrictions and extensions specified below.

Contributor: New term "recipients"

Member Author: Hmm, I use recipient 17 times, and receiver 5 times. Switched
receiver to recipient unless you prefer the other way around.

Contributor: either is fine, I'm more interested in consistency

### HTTP Support

This section adds restrictions on
[requirements in HTTP Protocol Binding for CloudEvents](https://github.com/cloudevents/spec/blob/v1.0/http-protocol-binding.md#12-relation-to-http).

Sinks MUST accept HTTP requests with the POST method and MAY support other HTTP
methods. If a method is not supported, the Sink MUST respond with HTTP status
code `405 Method Not Allowed`. Non-event requests (e.g. health checks) are not
constrained.

The URL used by a Sink MUST correspond to a single, unique endpoint at any given
moment in time. This MAY be done via the host, path, query string, or any
combination of these. This mapping is handled exclusively by the
[Addressable control-plane](./interfaces.md#control-plane) exposed via the
`status.address.url`.

If an HTTP request's URL does not correspond to an existing endpoint, then the
Sink MUST respond with `404 Not Found`.

Every non-Callable Sink MUST respond with `202 Accepted` if the request is
accepted.

If a Sink is Callable, it MAY respond with `200 OK` and a single event in the
HTTP response. A returned event is not required to be related to the received
event. The Callable should return a successful response if the event was
processed successfully. If there is no event to send back, then the Callable
Sink MUST respond with a 2xx HTTP status and an empty body.

If a Sink receives a request and is unable to parse a valid CloudEvent, then it
MUST respond with `400 Bad Request`.
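
As a non-normative illustration of these response requirements, the Go sketch
below shows how a simple non-Callable Sink might be wired up. The
`parseCloudEvent` helper and the `/sink` path are hypothetical stand-ins for a
real CloudEvents decoder and for the URL exposed via `status.address.url`.

```go
package main

import (
	"errors"
	"log"
	"net/http"
)

// parseCloudEvent is a hypothetical stand-in for a real CloudEvents HTTP
// decoder (for example, one provided by a CloudEvents SDK). It accepts
// binary-mode requests (ce-* headers) and structured-mode requests.
func parseCloudEvent(r *http.Request) (id string, err error) {
	if ceID := r.Header.Get("ce-id"); ceID != "" {
		return ceID, nil // binary content mode
	}
	if r.Header.Get("Content-Type") == "application/cloudevents+json" {
		return "", nil // structured content mode (body not parsed in this sketch)
	}
	return "", errors.New("request is not a valid CloudEvent")
}

func sinkHandler(w http.ResponseWriter, r *http.Request) {
	// Events arrive only via POST; other methods get 405.
	if r.Method != http.MethodPost {
		w.Header().Set("Allow", "POST")
		w.WriteHeader(http.StatusMethodNotAllowed) // 405
		return
	}
	// Requests that cannot be parsed as CloudEvents get 400.
	if _, err := parseCloudEvent(r); err != nil {
		w.WriteHeader(http.StatusBadRequest) // 400
		return
	}
	// A non-Callable Sink acknowledges the event with 202 and no body.
	w.WriteHeader(http.StatusAccepted) // 202
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/sink", sinkHandler) // unknown paths fall through to a 404
	log.Fatal(http.ListenAndServe(":8080", mux))
}
```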

### Content Modes Supported

A Sink MUST support `Binary Content Mode` and `Structured Content Mode` as
described in
[HTTP Message Mapping section of HTTP Protocol Binding for CloudEvents](https://github.com/cloudevents/spec/blob/master/http-protocol-binding.md#3-http-message-mapping)

A Sink MAY support `Batched Content Mode` but that mode is not used in Knative
Eventing currently (that may change in future).
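
To make the two required content modes concrete, here is a minimal,
non-normative sketch of how the same event could be encoded both ways by hand
(a CloudEvents SDK would normally do this). The event attributes and the
destination URL are made-up examples.

```go
package main

import (
	"bytes"
	"log"
	"net/http"
)

// binaryModeRequest carries CloudEvents attributes as ce-* headers and the
// event data, unwrapped, as the request body.
func binaryModeRequest(url string) (*http.Request, error) {
	req, err := http.NewRequest(http.MethodPost, url, bytes.NewBufferString(`{"msg":"hello"}`))
	if err != nil {
		return nil, err
	}
	req.Header.Set("ce-specversion", "1.0")
	req.Header.Set("ce-type", "com.example.greeting")
	req.Header.Set("ce-source", "/example/source")
	req.Header.Set("ce-id", "1234-5678")
	req.Header.Set("Content-Type", "application/json")
	return req, nil
}

// structuredModeRequest carries the whole event, attributes and data, as a
// single application/cloudevents+json document in the body.
func structuredModeRequest(url string) (*http.Request, error) {
	event := `{
	  "specversion": "1.0",
	  "type": "com.example.greeting",
	  "source": "/example/source",
	  "id": "1234-5678",
	  "datacontenttype": "application/json",
	  "data": {"msg": "hello"}
	}`
	req, err := http.NewRequest(http.MethodPost, url, bytes.NewBufferString(event))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/cloudevents+json")
	return req, nil
}

func main() {
	for _, build := range []func(string) (*http.Request, error){binaryModeRequest, structuredModeRequest} {
		req, err := build("http://example.com/sink") // placeholder Sink URL
		if err != nil {
			log.Fatal(err)
		}
		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			log.Fatal(err)
		}
		resp.Body.Close()
		log.Printf("delivery status: %s", resp.Status)
	}
}
```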

### Retries

Sinks should expect retries and accept the possibility that duplicate events may
be delivered.

### Error handling

If the Sink does not return an HTTP success code (200 or 202), then the event
may be sent again. If the event cannot be delivered, some sources of events
(such as Knative sources, brokers, or channels) MAY support a
[dead letter sink or channel](https://github.com/knative/eventing/blob/main/docs/delivery/README.md)
for events that cannot be delivered.
[structured](https://github.com/cloudevents/spec/blob/v1.0.1/http-protocol-binding.md#32-structured-content-mode)
content modes of the CloudEvents HTTP binding. Senders MUST support both
cleartext (`http`) and TLS (`https`) URLs as event delivery destinations.

### HTTP Verbs

In the absence of specific delivery preferences, the sender MUST initiate
delivery of the event to the recipient using the HTTP POST verb, using either
the structured or binary encoding of the event (sender's choice). This delivery
SHOULD be performed using the CloudEvents HTTP Binding, version 1.0.

Senders MAY probe the recipient with an [HTTP OPTIONS
request](https://tools.ietf.org/html/rfc7231#section-4.3.7); if implemented, the
recipient MUST indicate support for the POST verb using the [`Allow`
header](https://tools.ietf.org/html/rfc7231#section-7.4.1). Senders which
receive an error when probing with HTTP OPTIONS SHOULD proceed using the HTTP
POST mechanism.
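
A brief, non-normative sketch of the optional OPTIONS probe described above:
the recipient URL is a placeholder, the function name is illustrative, and a
probe failure simply means the sender falls back to attempting the POST
delivery.

```go
package main

import (
	"log"
	"net/http"
	"strings"
)

// probeRecipient issues an optional HTTP OPTIONS request to the recipient and
// reports whether POST was advertised in the Allow header. Any probe error is
// treated as "unknown": the sender should still proceed with an HTTP POST.
func probeRecipient(client *http.Client, url string) (postAdvertised bool, probeFailed bool) {
	req, err := http.NewRequest(http.MethodOptions, url, nil)
	if err != nil {
		return false, true
	}
	resp, err := client.Do(req)
	if err != nil {
		return false, true
	}
	defer resp.Body.Close()
	return strings.Contains(resp.Header.Get("Allow"), "POST"), false
}

func main() {
	advertised, failed := probeRecipient(http.DefaultClient, "http://example.com/sink")
	switch {
	case failed:
		log.Println("OPTIONS probe failed; proceeding with POST delivery anyway")
	case advertised:
		log.Println("recipient advertises POST in Allow header")
	default:
		log.Println("recipient did not advertise POST")
	}
	// Delivery itself is an HTTP POST of the event in binary or structured
	// content mode (see the earlier content-mode sketch).
}
```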

### Event Acknowledgement and Repeat Delivery

Event recipients MUST use the HTTP response code to indicate acceptance of an
event. The recipient MUST NOT return a response accepting the event until it has
handled the event (processed the event or stored it in stable storage). The
following response codes are explicitly defined; event recipients MAY also
respond with other response codes. A response code not in this table SHOULD be
treated as a retriable error.

| Response code | Meaning | Retry | Delivery completed | Error |
| ------------- | --------------------------- | ----- | ------------------ | ----- |
| `1xx` | (Unspecified) | Yes\* | No\* | Yes\* |
| `200` | [Event reply](#event-reply) | No | Yes | No |
| `202` | Event accepted | No | Yes | No |
| other `2xx` | (Unspecified) | Yes\* | No\* | Yes\* |
| other `3xx` | (Unspecified) | Yes\* | No\* | Yes\* |
| `400` | Unparsable event | No | No | Yes |
| `404` | Endpoint does not exist | Yes | No | Yes |
| other `4xx` | Error | Yes | No | Yes |
| other `5xx` | Error | Yes | No | Yes |

\* Unspecified `1xx`, `2xx`, and `3xx` response codes are **reserved for future
extension**. Event recipients SHOULD NOT send these response codes in this spec
version, but event senders MUST handle these response codes as errors and
implement appropriate failure behavior.
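
As a non-normative reading of the table above, a sender's retry logic might
classify response codes roughly as follows; the function name and structure are
illustrative only.

```go
package main

import "fmt"

// deliveryOutcome classifies an HTTP response code according to the table
// above: whether delivery is complete, and whether a retry is warranted.
// Unspecified 1xx/2xx/3xx codes are reserved and treated as retriable errors.
func deliveryOutcome(status int) (completed bool, retry bool) {
	switch {
	case status == 200 || status == 202:
		return true, false // accepted (200 may carry a reply event)
	case status == 400:
		return false, false // unparsable event: do not retry
	case status == 404:
		return false, true // endpoint does not exist: retriable
	case status >= 400 && status < 500:
		return false, true // other 4xx: error, retriable
	case status >= 500:
		return false, true // 5xx: error, retriable
	default:
		return false, true // reserved 1xx/2xx/3xx: treat as retriable error
	}
}

func main() {
	for _, code := range []int{200, 202, 204, 400, 404, 429, 503} {
		done, retry := deliveryOutcome(code)
		fmt.Printf("%d -> completed=%v retry=%v\n", code, done, retry)
	}
}
```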

<!-- TODO: Should 3xx redirects and 401 (Unauthorized) or 403 (Forbidden) errors
be retried? What about `405` (Method Not Allowed), 413 (Payload Too Large), 414
(URI Too Long), 426 (Upgrade Required), 431 (Header Fields Too Large), 451
(Unavailable for Legal Reasons)? -->

Member Author: I was working off of the following:
https://github.com/knative/specs/blob/main/specs/eventing/data-plane.md#http-support
which specified 200/202, 404, and 400 (unparseable CloudEvent). This is somewhat
contradicted by
https://github.com/knative/specs/blob/main/specs/eventing/data-plane.md#error-handling:
"If Sink is not returning HTTP success header (200 or 202) then the event may be
sent again."

Member Author: If we want to default to 4xx = no retry, should any of the
following codes be retried?

  • 401 (Unauthorized)
  • 403 (Forbidden)
  • 404 (Not Found)
  • 405 (Method Not Allowed)
  • 408 (Request Timeout)
  • 412 (Precondition Failed)
  • 413 (Payload Too Large)
  • 414 (URI Too Long)
  • 421 (Misdirected Request)
  • 426 (Upgrade Required)
  • 431 (Request Header Fields Too Large)

Currently, the spec specifies: "If an HTTP request's URL does not correspond to
an existing endpoint, then the Sink MUST respond with 404 Not Found." I
interpreted this to mean that 404s were retriable, but the current spec says
that every error "may be sent again" (non-spec language), which doesn't really
line up with the code implementation. I've adjusted this to only retry the three
codes mentioned; all other 400s are non-retriable, though the 404 case seems
strange to me, and I'd love to get more details.

Member: I am not sure on all the "Too Long" things. Also not sure on 426, as
that indicates the current protocol is refused. I doubt we do upgrade...

Recipients MUST be able to handle duplicate delivery of events and MUST accept
delivery of duplicate events, as the event acknowledgement could have been lost
on its way back to the sender. Event recipients MUST use the [`source` and `id`
attributes](https://github.com/cloudevents/spec/blob/v1.0.1/spec.md#required-attributes)
to detect duplicated events (see [observability](#observability) for an example
case where other event attributes may vary from one delivery attempt to
another).
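
As a non-normative sketch, a recipient might track the `(source, id)` pairs it
has already processed. The unbounded in-memory store below is purely
illustrative; a real implementation would bound or persist it.

```go
package main

import (
	"fmt"
	"sync"
)

// eventKey identifies an event for de-duplication purposes: the CloudEvents
// `source` and `id` attributes together identify a unique event.
type eventKey struct {
	source string
	id     string
}

// dedupe remembers which (source, id) pairs have been seen.
type dedupe struct {
	mu   sync.Mutex
	seen map[eventKey]bool
}

func newDedupe() *dedupe {
	return &dedupe{seen: make(map[eventKey]bool)}
}

// firstDelivery returns true only the first time a given (source, id) pair is
// observed; repeat deliveries are still acknowledged, just not reprocessed.
func (d *dedupe) firstDelivery(source, id string) bool {
	d.mu.Lock()
	defer d.mu.Unlock()
	k := eventKey{source, id}
	if d.seen[k] {
		return false
	}
	d.seen[k] = true
	return true
}

func main() {
	d := newDedupe()
	fmt.Println(d.firstDelivery("/example/source", "evt-1")) // true
	fmt.Println(d.firstDelivery("/example/source", "evt-1")) // false: duplicate delivery
}
```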

Where possible, event senders SHOULD re-attempt delivery of events where the
HTTP request failed or returned a retriable status code. It is RECOMMENDED that
event senders implement some form of congestion control (such as exponential
backoff) when managing retry timing. This specification does not document any
specific congestion control algorithm or parameters.
[Brokers](./overview.md#broker) and [Channels](./overview.md#channel) MUST
implement congestion control and MUST implement retries.

Contributor: I think SHOULD may be too strong here. In serving we specifically
stopped the networking layer from retrying on 5xx errors because it ended up
spamming the ksvc when the ksvc was correctly returning a 503. Even with a MAY
I'd still like to see some text that talks about how the sender needs to be
selective about it to avoid DDOSing the sink.

Member Author: Is the mention of congestion control in the next sentence
sufficient?

Contributor: That's a good thing to mention, but it still pushes people towards
resending... just slower. I keep having flashbacks to wasting my time debugging
a ksvc, trying to figure out why I was seeing 3 times the errors in its logs...
and it was because of the network retries. I think it's sufficient to talk about
how it's ok to retry if the sender thinks a new result will happen, but not just
because it failed with a 5xx.
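
The congestion-control recommendation above leaves the algorithm open; one
common choice is capped exponential backoff with jitter. The sketch below is
illustrative only, with made-up base and cap values.

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// backoffSchedule returns the delay before the given retry attempt using
// capped exponential backoff with full jitter. The base, cap, and jitter
// strategy are illustrative; the contract does not mandate any algorithm.
func backoffSchedule(attempt int) time.Duration {
	base := 200 * time.Millisecond
	cap := 30 * time.Second
	d := base << attempt // 200ms, 400ms, 800ms, ...
	if d > cap || d <= 0 {
		d = cap
	}
	// Full jitter: pick a random delay in [0, d) to avoid synchronized retries.
	return time.Duration(rand.Int63n(int64(d)))
}

func main() {
	for attempt := 0; attempt < 5; attempt++ {
		fmt.Printf("attempt %d: wait up to %v\n", attempt+1, backoffSchedule(attempt))
	}
}
```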

### Observability

CloudEvents received by a Sink MAY have the
[Distributed Tracing Extension Attribute](https://github.com/cloudevents/spec/blob/v1.0/extensions/distributed-tracing.md).

### Event reply contract

An event sender supporting event replies SHOULD include a `Prefer: reply` header
in delivery requests to indicate to the sink that event reply is supported. An
event sender MAY ignore an event reply in the delivery response if the
`Prefer: reply` header was not included in the delivery request.

For example, a Broker supporting event replies sends events with the additional
header `Prefer: reply` so that the sink connected to the Broker knows event
replies will be accepted, while a source sends events without the header, in
which case the sink may assume that any event reply will be dropped without
error or retry attempt. If a sink wishes to ensure that reply events will be
delivered, it can check for the existence of the `Prefer: reply` header in the
delivery request and respond with an error code if the header is not present.

### Data plane contract for Sources

See [Source Delivery specification](sources.md#source-event-delivery)
for details.

### Data plane contract for Channels

See [Channel Delivery specification](channel.md#data-plane) for details.

### Data plane contract for Brokers

See [Broker Delivery specification](broker.md)

## Changelog
Event senders MAY add or update CloudEvents attributes before sending to
implement observability features such as tracing; in particular, the
[`traceparent` and `tracestate` distributed tracing
attributes](https://github.com/cloudevents/spec/blob/v1.0/extensions/distributed-tracing.md)
may be modified in this way for each delivery attempt of the same event.

This specification does not mandate any particular logging or metrics
aggregation, nor a method of exposing observability information to users
configuring the resources. Platform administrators SHOULD expose event-delivery
telemetry to users through platform-specific interfaces, but such interfaces are
beyond the scope of this document.

<!-- TODO: should we mention RECOMMENDED spans or RECOMMENDED metrics like in
https://github.com/knative/specs/blob/main/specs/eventing/channel.md#observability?
-->
Member: IMO we can do that later. If you agree, I can create a ticket to define
them.

Member Author: Thanks, removing!


### Derived (Reply) Events

In some applications, an event receiver might emit an event in reaction to a
received event. An event sender MAY document support for this pattern by
including a `Prefer: reply` header in the HTTP POST request. This header
indicates to the event receiver that the caller will accept a [`200`
response](#event-acknowledgement-and-repeat-delivery) which includes a
CloudEvent encoded using the binary or structured formats.
[Brokers](./overview.md#broker) and [Channels](./overview.md#channel) MUST
indicate support for replies using the `Prefer: reply` header.

The sender SHOULD NOT assume that a received reply event is directly related to
the event sent in the HTTP request.

A recipient MAY reply to any HTTP POST with a `200` response to indicate that
the event was processed successfully, with or without a response payload. If the
recipient will _never_ provide a response payload, the `202` response code is
likely a better choice.
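
A minimal, non-normative sketch of a recipient that honors this pattern: it
returns a reply event only when the sender sent `Prefer: reply`, and otherwise
acknowledges with `202`. The reply event's attributes are made up, and real
`Prefer` header parsing is more lenient than the exact match used here.

```go
package main

import (
	"log"
	"net/http"
)

// replyAwareHandler emits a reply event only when the sender advertised
// support via the `Prefer: reply` header; otherwise it acknowledges with 202
// so the reply is not silently dropped.
func replyAwareHandler(w http.ResponseWriter, r *http.Request) {
	if r.Method != http.MethodPost {
		w.Header().Set("Allow", "POST")
		w.WriteHeader(http.StatusMethodNotAllowed)
		return
	}
	if r.Header.Get("Prefer") == "reply" { // simplified header match
		// Sender accepts a reply: return 200 with a reply event in
		// structured content mode.
		w.Header().Set("Content-Type", "application/cloudevents+json")
		w.WriteHeader(http.StatusOK)
		w.Write([]byte(`{
		  "specversion": "1.0",
		  "type": "com.example.reply",
		  "source": "/example/recipient",
		  "id": "reply-1",
		  "data": {"ok": true}
		}`))
		return
	}
	// No Prefer: reply header: acknowledge without a payload.
	w.WriteHeader(http.StatusAccepted)
}

func main() {
	http.HandleFunc("/", replyAwareHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```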

If a recipient chooses to reply to a sender with a `200` response code and a
reply event in the absence of a `Prefer: reply` header, the sender SHOULD treat
the event as accepted, and MAY log an error about the unexpected payload. The
sender MUST NOT process the reply event if it did not advertise the
`Prefer: reply` capability.

Review comment: this section brings the `Prefer: reply` header to a mandatory
status, and I thought before it was more of a "you can provide this, but the
sink is free to ignore it; if you say `Prefer:` and get a reply, you are free to
drop the event without warning". This language makes the header mandatory, and
it is not used anywhere yet, so it would be an API change for eventing at the
moment.

Member Author: Yes, this attempts to make the expected behavior for e.g. a
receiving function explicit. I'm willing to change this, but the current
formulation implemented in eventing makes it difficult for a function to know if
the reply might be dropped, which seems to go against the goal of avoiding
message loss.

Member Author: (I didn't change this yet, because I'd like to discuss whether it
makes more sense to change the contract or the implementation.)

Contributor: FYI, here's some more background on this: knative/eventing#4560

Member Author: I see the background, and the concern that adding MUST language
around this header would make previous versions of Knative incompatible with the
1.0 spec. knative/eventing#4560 (comment)

I'm wondering if we have a requirement that versions of Knative prior to the 1.0
spec MUST be compliant? If so, what is our boundary?

Review comment: Given that this functionality is not written in the reference
implementation, I would be inclined to not make it a MUST. The original PR was
using SHOULD and MAY, no MUST NOT nor MUST.

Member Author: The original PR did not include any MUST clauses. Unfortunately,
I have a conflict with the eventing meeting today (interview); is there a way
you could see this becoming a "MUST", or do you think it always needs to be a
MAY/SHOULD?

Review comment: I see it as always a MAY/SHOULD for 1.0.

- 2020-04-20: `0.13.x release`: initial version that documents common contract
for sinks, sources, channels and brokers.
58 changes: 21 additions & 37 deletions specs/eventing/motivation.md
@@ -1,49 +1,33 @@
# Motivation

The goal of Knative Eventing is to define common, composable primitives to
enable late-binding event sources and event consumers.
The goal of the Knative Eventing project is to define common primitives to
enable composing event-processing applications through configuration, rather
than application code.

<!-- TODO(n3wscott): [Why late-binding] -->
Building by combining independent components provides a number of benefits for
application designers:

Knative Eventing has the following principles:

1. Services are loosely coupled during development and deployed independently on
a variety of platforms (Kubernetes, VMs, SaaS or FaaS).
1. Services are loosely coupled during development and may be deployed
independently on a variety of platforms (Kubernetes, VMs, SaaS or FaaS). This
composability allows re-use of common patterns and building blocks, even
across programming language and tooling boundaries.

1. A producer can generate events before a consumer is listening, and a consumer
can express an interest in an event or class of events that is not yet being
produced.
produced. This allows event-driven applications to evolve over time without
needing to closely coordinate changes.

1. Services can be connected to create new applications
- without modifying producer or consumer.
1. Services can be connected to create new applications:
   - without modifying producer or consumer
   - with the ability to select a specific subset of events from a particular
     producer

Contributor: not sure what this one is trying to say. Do you mean we can add new
sinks into the eventing infra/mesh w/o impacting existing producers and
consumers?

Member Author: In particular, this is saying that new event sources and sinks
can be connected to the topology without needing to change any code in the
producers or consumers. Compare this with explicitly configuring exchanges in
AMQP or topics in Kafka, where consuming from two exchanges or topics would
require updating the code to create two subscriptions or consumer groups.

Contributor (@duglin, May 25, 2021): let's use common terms... introducing
"consume" in this doc

Member Author: Not sure what you're suggesting here; in the data plane doc,
receiver is being used specifically to refer to the "origin server" side of
HTTP, whereas this is intended to speak about the logical connections and flow
of events. (All of motivation should use the produce/consume terminology, I
think.)

These primitives enable producing and consuming events adhering to the
[CloudEvents Specification](https://github.com/cloudevents/spec), in a decoupled
way.

Kubernetes has no primitives related to event processing, yet this is an
essential component in serverless workloads. Eventing introduces high-level
primitives for event production and delivery with an initial focus on push over
HTTP. If a new event source or type is required by your application, the effort
required to plumb it into the existing eventing framework will be minimal, and
it will integrate with CloudEvents middleware and message consumers.

Knative eventing implements common components of an event delivery ecosystem:
enumeration and discovery of event sources, configuration and management of
event transport, and declarative binding of events (generated either by storage
services or earlier computation) to further event processing and persistence.

The Knative Eventing API is intended to operate independently, and interoperate
well with the [Serving API](https://github.com/knative/serving) and
[Build API](https://github.com/knative/build).

---

_Navigation_:
In order to enable loose coupling and late-binding of event producers and
consumers, Knative Eventing utilizes and extends the [CloudEvents
specification](https://github.com/cloudevents/spec) as the data plane protocol
between components. Knative Eventing prioritizes at-least-once delivery
semantics, using the CloudEvents HTTP POST (push) transport as a minimum common
transport between components.

- **Motivation and goals**
- [Resource type overview](overview.md)
- [Interface contracts](interfaces.md)
- [Object model specification](spec.md)
Knative Eventing also defines patterns to simplify the construction and usage of
event senders and recipients.

Contributor: do you have an example of what you had in mind with this last
sentence?

Member Author: I think Destination/Addressable and the high availability of
Channels and Brokers are the two benefits to the event producer side. For the
event consumer side, I think the reply pattern and retry policies along with
CloudEvents-over-HTTP are the main contributors.

Member Author: I'm willing to cut this sentence if it doesn't feel like it
belongs, though.