Re-run json doc generation.
rcaudy committed Jan 31, 2022
1 parent ba91e2c commit 6adf0c7
Showing 2 changed files with 4 additions and 8 deletions.

One of the two changed files was deleted; its contents are not shown.

@@ -2,12 +2,15 @@
"className": "io.deephaven.kafka.KafkaTools",
"methods": {
"avroSchemaToColumnDefinitions": "**Incompatible overloads text - text from the first overload:**\n\nConvert an Avro schema to a list of column definitions, mapping every avro field to a column of the same name.\n\n*Overload 1* \n :param columnsOut: java.util.List<io.deephaven.engine.table.ColumnDefinition<?>>\n :param fieldPathToColumnNameOut: java.util.Map<java.lang.String,java.lang.String>\n :param schema: org.apache.avro.Schema\n :param requestedFieldPathToColumnName: java.util.function.Function<java.lang.String,java.lang.String>\n \n*Overload 2* \n :param columnsOut: (java.util.List<io.deephaven.engine.table.ColumnDefinition<?>>) - Column definitions for output; should be empty on entry.\n :param schema: (org.apache.avro.Schema) - Avro schema\n :param requestedFieldPathToColumnName: (java.util.function.Function<java.lang.String,java.lang.String>) - An optional mapping to specify selection and naming of columns from Avro\n fields, or null for map all fields using field path for column name.\n \n*Overload 3* \n :param columnsOut: (java.util.List<io.deephaven.engine.table.ColumnDefinition<?>>) - Column definitions for output; should be empty on entry.\n :param schema: (org.apache.avro.Schema) - Avro schema",
"columnDefinitionsToAvroSchema": ":param t: io.deephaven.engine.table.Table\n:param schemaName: java.lang.String\n:param namespace: java.lang.String\n:param colProps: java.util.Properties\n:param includeOnly: java.util.function.Predicate<java.lang.String>\n:param exclude: java.util.function.Predicate<java.lang.String>\n:param colPropsOut: org.apache.commons.lang3.mutable.MutableObject<java.util.Properties>\n:return: org.apache.avro.Schema",
"consumeToTable": "Consume from Kafka to a Deephaven table.\n\n:param kafkaProperties: (java.util.Properties) - Properties to configure this table and also to be passed to create the KafkaConsumer\n:param topic: (java.lang.String) - Kafka topic name\n:param partitionFilter: (java.util.function.IntPredicate) - A predicate returning true for the partitions to consume. The convenience constant\n ALL_PARTITIONS is defined to facilitate requesting all partitions.\n:param partitionToInitialOffset: (java.util.function.IntToLongFunction) - A function specifying the desired initial offset for each partition consumed\n:param keySpec: (io.deephaven.kafka.KafkaTools.Consume.KeyOrValueSpec) - Conversion specification for Kafka record keys\n:param valueSpec: (io.deephaven.kafka.KafkaTools.Consume.KeyOrValueSpec) - Conversion specification for Kafka record values\n:param resultType: (io.deephaven.kafka.KafkaTools.TableType) - KafkaTools.TableType specifying the type of the expected result\n:return: (io.deephaven.engine.table.Table) The result table containing Kafka stream data formatted according to resultType",
"friendlyNameToTableType": "Map \"Python-friendly\" table type name to a KafkaTools.TableType.\n\n:param typeName: (java.lang.String) - The friendly name\n:return: (io.deephaven.kafka.KafkaTools.TableType) The mapped KafkaTools.TableType",
"getAvroSchema": "**Incompatible overloads text - text from the first overload:**\n\nFetch an Avro schema from a Confluent compatible Schema Server.\n\n*Overload 1* \n :param schemaServerUrl: (java.lang.String) - The schema server URL\n :param resourceName: (java.lang.String) - The resource name that the schema is known as in the schema server\n :param version: (java.lang.String) - The version to fetch, or the string \"latest\" for the latest version.\n :return: (org.apache.avro.Schema) An Avro schema.\n \n*Overload 2* \n :param schemaServerUrl: (java.lang.String) - The schema server URL\n :param resourceName: (java.lang.String) - The resource name that the schema is known as in the schema server\n :return: (org.apache.avro.Schema) An Avro schema.",
"partitionFilterFromArray": ":param partitions: int[]\n:return: java.util.function.IntPredicate",
"partitionToOffsetFromParallelArrays": ":param partitions: int[]\n:param offsets: long[]\n:return: java.util.function.IntToLongFunction",
"predicateFromSet": ":param set: java.util.Set<java.lang.String>\n:return: java.util.function.Predicate<java.lang.String>",
"produceFromTable": "Produce a Kafka stream from a Deephaven table.\n\n:param table: (io.deephaven.engine.table.Table) - The table used as a source of data to be sent to Kafka.\n:param kafkaProperties: (java.util.Properties) - Properties to be passed to create the associated KafkaProducer.\n:param topic: (java.lang.String) - Kafka topic name\n:param keySpec: (io.deephaven.kafka.KafkaTools.Produce.KeyOrValueSpec) - Conversion specification for Kafka record keys from table column data.\n:param valueSpec: (io.deephaven.kafka.KafkaTools.Produce.KeyOrValueSpec) - Conversion specification for Kafka record values from table column data.\n:param lastByKeyColumns: (boolean) - Whether to publish only the last record for each unique key. Ignored when keySpec\n is IGNORE. If keySpec != null && !lastByKeyColumns, it is expected that table will\n not produce any row shifts; that is, the publisher expects keyed tables to be streams, add-only, or\n aggregated.\n:return: (java.lang.Runnable) a callback to stop producing and shut down the associated table listener; note a caller should keep a\n reference to this return value to ensure liveness.",
"putAvroSchema": "Push an Avro schema to a Confluent compatible Schema Server.\n\n:param schema: (org.apache.avro.Schema) - An Avro schema\n:param schemaServerUrl: (java.lang.String) - The schema server URL\n:param resourceName: (java.lang.String) - The resource name that the schema will be known as in the schema server\n:return: (java.lang.String) The version for the added resource as returned by schema server."
},
"path": "io.deephaven.kafka.KafkaTools",
"typeName": "class"
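The pure helpers documented above, `partitionFilterFromArray` and `predicateFromSet`, wrap a membership test in a functional interface. A minimal Python sketch of that documented behavior (hypothetical names; the real Deephaven methods return a Java `IntPredicate` and `Predicate<String>`, so this is an illustration of the semantics only):

```python
# Hypothetical Python equivalents of the membership-test helpers
# documented above; the Deephaven originals return Java functional
# interfaces, so these only sketch the semantics.

def partition_filter_from_array(partitions):
    """Mirror partitionFilterFromArray: accept only the listed partitions."""
    allowed = set(partitions)
    return lambda p: p in allowed

def predicate_from_set(strings):
    """Mirror predicateFromSet: true exactly for members of the set."""
    members = set(strings)
    return lambda s: s in members
```

Passing such a filter as `partitionFilter` restricts consumption to the listed partitions, as the `consumeToTable` docstring describes; the constant `ALL_PARTITIONS` plays the role of a predicate that accepts everything.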

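`partitionToOffsetFromParallelArrays` pairs a partition array with an offset array by index. A hedged Python sketch of that contract (hypothetical name; the original returns a Java `IntToLongFunction`, and the length-mismatch check is an assumption not stated in the doc above):

```python
def partition_to_offset_from_parallel_arrays(partitions, offsets):
    """Build a partition -> initial-offset lookup from parallel arrays.

    Mirrors the documented contract: partitions[i] maps to offsets[i].
    Rejecting mismatched lengths is an assumption; the Java behavior
    is not shown in the doc above.
    """
    if len(partitions) != len(offsets):
        raise ValueError("partitions and offsets must have equal length")
    table = dict(zip(partitions, offsets))
    return lambda p: table[p]
```

The result is suitable as the `partitionToInitialOffset` argument of `consumeToTable`, which asks for the desired starting offset of each consumed partition.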
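For context, the `kafkaProperties` argument of `consumeToTable` is a standard `java.util.Properties` of Kafka consumer settings. A hedged sketch of a minimal configuration, shown here as a Python dict (the broker address and group id are placeholders, not values taken from this commit):

```python
# Placeholder Kafka consumer settings of the kind passed to
# consumeToTable via java.util.Properties; values are illustrative,
# and the keys are standard Apache Kafka consumer configs.
kafka_config = {
    "bootstrap.servers": "localhost:9092",   # placeholder broker address
    "group.id": "example-consumer-group",    # placeholder consumer group
}
```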