Key Default Type Description
pipeline.auto-generate-uids
true Boolean When auto-generated UIDs are disabled, users are forced to manually specify UIDs on DataStream applications.

It is highly recommended that users specify UIDs before deploying to production, since they are used to match state in savepoints to operators in a job. Because auto-generated IDs are likely to change when a job is modified, specifying custom IDs allows an application to evolve over time without discarding state.
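
Example (a minimal sketch; operator logic and UID strings are illustrative):
  StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
  env.getConfig().disableAutoGeneratedUIDs(); // submission fails if an operator is missing an explicit UID

  env.fromElements("a", "b", "c")
      .map(String::toUpperCase).uid("uppercase-map")     // UID is used to match savepoint state to this operator
      .filter(s -> !s.isEmpty()).uid("non-empty-filter")
      .print().uid("print-sink");

  env.execute("uid-example");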
pipeline.auto-watermark-interval
200 ms Duration The interval of the automatic watermark emission. Watermarks are used throughout the streaming system to keep track of the progress of time. They are used, for example, for time based windowing.
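
Example (a sketch of lowering the interval for more frequent watermark emission; the value is illustrative):
  StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
  env.getConfig().setAutoWatermarkInterval(50); // emit watermarks every 50 ms instead of the default 200 ms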
pipeline.cached-files
(none) List<String> Files to be registered at the distributed cache under the given name. The files will be accessible from any user-defined function in the (distributed) runtime under a local path. Files may be local files (which will be distributed via BlobServer), or files in a distributed file system. The runtime will copy the files temporarily to a local cache, if needed.

Example:
name:file1,path:'file:///tmp/file1';name:file2,path:'hdfs:///tmp/file2'
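
The same registration can be done programmatically, and the file is then read from the local cache inside a rich function; a sketch (path and name are illustrative):
  StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
  env.registerCachedFile("hdfs:///tmp/file2", "file2"); // equivalent to the pipeline.cached-files entry above

  env.fromElements("a", "b")
      .map(new RichMapFunction<String, String>() {
          @Override
          public String map(String value) throws Exception {
              File cached = getRuntimeContext().getDistributedCache().getFile("file2"); // local copy on the task manager
              return value + " -> " + cached.getAbsolutePath();
          }
      })
      .print();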
pipeline.classpaths
(none) List<String> A semicolon-separated list of the classpaths to package with the job jars to be sent to the cluster. These have to be valid URLs.
pipeline.closure-cleaner-level
RECURSIVE Enum Configures the mode in which the closure cleaner works.

Possible values:
  • "NONE": Disables the closure cleaner completely.
  • "TOP_LEVEL": Cleans only the top-level class without recursing into fields.
  • "RECURSIVE": Cleans all fields recursively.
pipeline.default-kryo-serializers
(none) List<String> Semicolon-separated list of pairs of class names and Kryo serializer class names to be used as default Kryo serializers.

Example:
class:org.example.ExampleClass,serializer:org.example.ExampleSerializer1; class:org.example.ExampleClass2,serializer:org.example.ExampleSerializer2
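
The same pairs can be registered programmatically; a sketch using the placeholder classes from the example above:
  StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
  // the serializer classes must extend com.esotericsoftware.kryo.Serializer
  env.getConfig().addDefaultKryoSerializer(org.example.ExampleClass.class, org.example.ExampleSerializer1.class);
  env.getConfig().addDefaultKryoSerializer(org.example.ExampleClass2.class, org.example.ExampleSerializer2.class);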
pipeline.force-avro
false Boolean Forces Flink to use the Apache Avro serializer for POJOs.

Important: Make sure to include the flink-avro module.
pipeline.force-kryo
false Boolean If enabled, forces the TypeExtractor to use the Kryo serializer for POJOs even though they could be analyzed as POJOs. In some cases this might be preferable, for example when using interfaces with subclasses that cannot be analyzed as POJOs.
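
Both force flags have programmatic equivalents on the ExecutionConfig; a sketch:
  StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
  env.getConfig().enableForceKryo();  // serialize POJOs with Kryo even when POJO analysis would succeed
  // or, to force Avro for POJOs instead (requires the flink-avro module on the classpath):
  // env.getConfig().enableForceAvro();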
pipeline.generic-types
true Boolean If the use of generic types is disabled, Flink will throw an UnsupportedOperationException whenever it encounters a data type that would go through Kryo for serialization.

Disabling generic types can be helpful to eagerly find and eliminate the use of types that would go through Kryo serialization during runtime. Rather than checking types individually, using this option will throw exceptions eagerly in the places where generic types are used.

We recommend using this option only during development and pre-production phases, not during actual production use. The application program and/or the input data may be such that new, previously unseen types occur at some point. In that case, setting this option would cause the program to fail.
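
Example (a sketch of enabling this check during development):
  StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
  env.getConfig().disableGenericTypes(); // any type that would fall back to Kryo now fails fast with UnsupportedOperationException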
pipeline.global-job-parameters
(none) Map Register a custom, serializable user configuration object. The configuration can be accessed in operators.
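
Example (a sketch using ParameterTool, which is a GlobalJobParameters implementation; parameter handling is illustrative):
  StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
  ParameterTool params = ParameterTool.fromArgs(args);
  env.getConfig().setGlobalJobParameters(params);

  // inside a rich function the parameters are available again:
  // ParameterTool globalParams =
  //     (ParameterTool) getRuntimeContext().getExecutionConfig().getGlobalJobParameters();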
pipeline.jars
(none) List<String> A semicolon-separated list of the jars to package with the job jars to be sent to the cluster. These have to be valid paths.
pipeline.jobvertex-parallelism-overrides
(none) Map A parallelism override map (jobVertexId -> parallelism) which will be used to update the parallelism of the corresponding job vertices of submitted JobGraphs.
pipeline.max-parallelism
-1 Integer The program-wide maximum parallelism used for operators which haven't specified a maximum parallelism. The maximum parallelism specifies the upper limit for dynamic scaling and the number of key groups used for partitioned state. Changing this value explicitly when recovering from the original job will lead to state incompatibility. Must be less than or equal to 32768.
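
Example (a sketch of fixing the maximum parallelism up front; the value is illustrative):
  StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
  env.setMaxParallelism(128); // upper bound for rescaling; also fixes the number of key groups for keyed state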
pipeline.name
(none) String The job name used for printing and logging.
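
Like other pipeline.* options, the name can also be set through a Configuration handed to the environment; a sketch (the job name is illustrative):
  Configuration conf = new Configuration();
  conf.set(PipelineOptions.NAME, "nightly-enrichment-job");
  StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment(conf);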
pipeline.object-reuse
false Boolean When enabled, objects that Flink internally uses for deserialization and passing data to user-code functions will be reused. Keep in mind that this can lead to bugs when the user-code function of an operation is not aware of this behaviour.
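
Example (a sketch of opting in when all user functions are known not to hold on to input objects):
  StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
  env.getConfig().enableObjectReuse(); // only safe if user functions do not cache or modify input objects across calls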
pipeline.operator-chaining.chain-operators-with-different-max-parallelism
true Boolean Whether operators with different max parallelism can be chained together. The default behavior may prevent rescaling when the AdaptiveScheduler is used.
pipeline.operator-chaining.enabled
true Boolean Operator chaining allows non-shuffle operations to be co-located in the same thread, fully avoiding serialization and deserialization.
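
Example (a sketch of disabling chaining for the whole job; per-operator control via disableChaining()/startNewChain() is also available):
  StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
  env.disableOperatorChaining(); // same effect as setting pipeline.operator-chaining.enabled to false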
pipeline.registered-kryo-types
(none) List<String> Semicolon separated list of types to be registered with the serialization stack. If the type is eventually serialized as a POJO, then the type is registered with the POJO serializer. If the type ends up being serialized with Kryo, then it will be registered at Kryo to make sure that only tags are written.
pipeline.registered-pojo-types
(none) List<String> Semicolon separated list of types to be registered with the serialization stack. If the type is eventually serialized as a POJO, then the type is registered with the POJO serializer. If the type ends up being serialized with Kryo, then it will be registered at Kryo to make sure that only tags are written.
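
Both registration lists have programmatic equivalents; a sketch where MyEvent and MyPayload are placeholder classes:
  StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
  env.getConfig().registerPojoType(MyEvent.class);    // registered with the POJO serializer if analyzable as a POJO
  env.getConfig().registerKryoType(MyPayload.class);  // if Kryo is used, only a tag is written instead of the full class name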
pipeline.vertex-description-mode
TREE Enum The mode used to organize the description of a job vertex.

Possible values:
  • "TREE"
  • "CASCADING"
pipeline.vertex-name-include-index-prefix
false Boolean Whether the vertex name includes its topological index. When true, the name is prefixed with the index of the vertex, like '[vertex-0]Source: source'.
pipeline.watermark-alignment.allow-unaligned-source-splits
false Boolean If watermark alignment is used, sources with multiple splits will attempt to pause/resume split readers to avoid watermark drift of source splits. However, if split readers don't support pause/resume, an UnsupportedOperationException will be thrown when there is an attempt to pause/resume. To allow use of split readers that don't support pause/resume and, hence, to allow unaligned splits while still using watermark alignment, set this parameter to true. The default value is false. Note: This parameter may be removed in future releases.
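
Watermark alignment itself is enabled on the WatermarkStrategy of a source; a sketch where MyEvent, the group name, and the drift bound are illustrative:
  WatermarkStrategy<MyEvent> strategy = WatermarkStrategy
      .<MyEvent>forBoundedOutOfOrderness(Duration.ofSeconds(5))
      .withWatermarkAlignment("alignment-group", Duration.ofSeconds(20)); // maximum allowed watermark drift within the group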