    Configuration Parameters

    For single-node setups Flink is ready to go out of the box and you don’t need to change the default configuration to get started.

    The out of the box configuration will use your default Java installation. You can set the environment variable JAVA_HOME or the configuration key env.java.home in conf/flink-conf.yaml if you want to manually override the Java runtime to use.

    This page lists the most common options that are typically needed to set up a well-performing (distributed) installation. In addition, a full list of all available configuration parameters is listed here.

    All configuration is done in conf/flink-conf.yaml, which is expected to be a flat collection of YAML key value pairs with format key: value.
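
    For illustration, a minimal conf/flink-conf.yaml could look like the following sketch; the host name and sizes are placeholders, not recommended values:

        # conf/flink-conf.yaml -- a flat list of "key: value" pairs
        jobmanager.rpc.address: localhost
        jobmanager.rpc.port: 6123
        jobmanager.heap.size: 1024m
        taskmanager.heap.size: 1024m
        taskmanager.numberOfTaskSlots: 1
        parallelism.default: 1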

    The system and run scripts parse the config at startup time. Changes to the configuration file require restarting the Flink JobManager and TaskManagers.

    The configuration files for the TaskManagers can be different; Flink does not assume uniform machines in the cluster.

    • Common Options
    • Full Reference
      • HDFS
      • Core
      • JobManager
      • TaskManager
      • Distributed Coordination
      • Distributed Coordination (via Akka)
      • REST
      • Blob Server
      • Heartbeat Manager
      • SSL Settings
      • Netty Shuffle Environment
      • Network Communication (via Netty)
      • Web Frontend
      • File Systems
      • Compiler/Optimizer
      • Runtime Algorithms
      • Resource Manager
      • Shuffle Service
      • YARN
      • Mesos
      • High Availability (HA)
      • ZooKeeper Security
      • Kerberos-based Security
      • Environment
      • Checkpointing
      • RocksDB State Backend
      • RocksDB Configurable Options
      • Queryable State
      • Metrics
      • RocksDB Native Metrics
      • History Server
    • Legacy
    • Background
      • Configuring the Network Buffers
      • Configuring Temporary I/O Directories
      • Configuring TaskManager processing slots

    Common Options

    • jobmanager.heap.size (Default: "1024m"): JVM heap size for the JobManager.
    • taskmanager.heap.size (Default: "1024m"): JVM heap size for the TaskManagers, which are the parallel workers of the system. On YARN setups, this value is automatically configured to the size of the TaskManager's YARN container, minus a certain tolerance value.
    • parallelism.default (Default: 1): Default parallelism for jobs.
    • taskmanager.numberOfTaskSlots (Default: 1): The number of parallel operator or user function instances that a single TaskManager can run. If this value is larger than 1, a single TaskManager takes multiple instances of a function or operator. That way, the TaskManager can utilize multiple CPU cores, but at the same time, the available memory is divided between the different operator or function instances. This value is typically proportional to the number of physical CPU cores that the TaskManager's machine has (e.g., equal to the number of cores, or half the number of cores).
    • state.backend (Default: none): The state backend to be used to store and checkpoint state.
    • state.checkpoints.dir (Default: none): The default directory used for storing the data files and meta data of checkpoints in a Flink supported filesystem. The storage path must be accessible from all participating processes/nodes (i.e. all TaskManagers and JobManagers).
    • state.savepoints.dir (Default: none): The default directory for savepoints. Used by the state backends that write savepoints to file systems (MemoryStateBackend, FsStateBackend, RocksDBStateBackend).
    • high-availability (Default: "NONE"): Defines high-availability mode used for the cluster execution. To enable high-availability, set this mode to "ZOOKEEPER" or specify FQN of factory class.
    • high-availability.storageDir (Default: none): File system path (URI) where Flink persists metadata in high-availability setups.
    • security.ssl.internal.enabled (Default: false): Turns on SSL for internal network communication. Optionally, specific components may override this through their own settings (rpc, data transport, REST, etc).
    • security.ssl.rest.enabled (Default: false): Turns on SSL for external communication via the REST endpoints.
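
    As an illustration of how these common options fit together in conf/flink-conf.yaml, the sketch below assumes a filesystem state backend and ZooKeeper high availability; all paths are placeholders for your own environment:

        state.backend: filesystem
        state.checkpoints.dir: hdfs:///flink/checkpoints
        state.savepoints.dir: hdfs:///flink/savepoints
        high-availability: ZOOKEEPER
        high-availability.storageDir: hdfs:///flink/ha/
        # SSL stays off unless explicitly enabled:
        security.ssl.internal.enabled: false
        security.ssl.rest.enabled: false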

    Full Reference

    HDFS

    Note: These keys are deprecated and it is recommended to configure the Hadoop path with the environment variable HADOOP_CONF_DIR instead.

    These parameters configure the default HDFS used by Flink. Setups that do not specify a HDFS configuration have to specify the full path to HDFS files (hdfs://address:port/path/to/files). Files will also be written with default HDFS parameters (block size, replication factor).

    • fs.hdfs.hadoopconf: The absolute path to the Hadoop File System’s (HDFS) configuration directory (OPTIONAL VALUE). Specifying this value allows programs to reference HDFS files using short URIs (hdfs:///path/to/files, without including the address and port of the NameNode in the file URI). Without this option, HDFS files can be accessed, but require fully qualified URIs like hdfs://address:port/path/to/files. This option also causes file writers to pick up the HDFS’s default values for block sizes and replication factors. Flink will look for the “core-site.xml” and “hdfs-site.xml” files in the specified directory.

    • fs.hdfs.hdfsdefault: The absolute path of Hadoop’s own configuration file “hdfs-default.xml” (DEFAULT: null).

    • fs.hdfs.hdfssite: The absolute path of Hadoop’s own configuration file “hdfs-site.xml” (DEFAULT: null).
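
    If you do set these (deprecated) keys instead of HADOOP_CONF_DIR, the entry in conf/flink-conf.yaml is a single path; the directory below is a placeholder:

        # deprecated; prefer exporting HADOOP_CONF_DIR instead
        fs.hdfs.hadoopconf: /etc/hadoop/conf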

    Core

    • classloader.parent-first-patterns.additional (Default: none): A (semicolon-separated) list of patterns that specifies which classes should always be resolved through the parent ClassLoader first. A pattern is a simple prefix that is checked against the fully qualified class name. These patterns are appended to "classloader.parent-first-patterns.default".
    • classloader.parent-first-patterns.default (Default: "java.;scala.;org.apache.flink.;com.esotericsoftware.kryo;org.apache.hadoop.;javax.annotation.;org.slf4j;org.apache.log4j;org.apache.logging;org.apache.commons.logging;ch.qos.logback"): A (semicolon-separated) list of patterns that specifies which classes should always be resolved through the parent ClassLoader first. A pattern is a simple prefix that is checked against the fully qualified class name. This setting should generally not be modified. To add another pattern we recommend to use "classloader.parent-first-patterns.additional" instead.
    • classloader.resolve-order (Default: "child-first"): Defines the class resolution strategy when loading classes from user code, meaning whether to first check the user code jar ("child-first") or the application classpath ("parent-first"). The default settings indicate to load classes first from the user code jar, which means that user code jars can include and load different dependencies than Flink uses (transitively).
    • io.tmp.dirs (Default: 'LOCAL_DIRS' on Yarn, '_FLINK_TMP_DIR' on Mesos, System.getProperty("java.io.tmpdir") in standalone): Directories for temporary files, separated by ",", "|", or the system's java.io.File.pathSeparator.
    • parallelism.default (Default: 1): Default parallelism for jobs.

    JobManager

    • jobmanager.archive.fs.dir (Default: none): Directory for the JobManager to store the archives of completed jobs.
    • jobmanager.execution.attempts-history-size (Default: 16): The maximum number of prior execution attempts kept in history.
    • jobmanager.execution.failover-strategy (Default: "full"): This option specifies how the job computation recovers from task failures. Accepted values are 'full' (restarts all tasks to recover the job) and 'region' (restarts all tasks that could be affected by the task failure). More details can be found here.
    • jobmanager.heap.size (Default: "1024m"): JVM heap size for the JobManager.
    • jobmanager.rpc.address (Default: none): The config parameter defining the network address to connect to for communication with the job manager. This value is only interpreted in setups where a single JobManager with static name or address exists (simple standalone setups, or container setups with dynamic service name resolution). It is not used in many high-availability setups, when a leader-election service (like ZooKeeper) is used to elect and discover the JobManager leader from potentially multiple standby JobManagers.
    • jobmanager.rpc.port (Default: 6123): The config parameter defining the network port to connect to for communication with the job manager. Like jobmanager.rpc.address, this value is only interpreted in setups where a single JobManager with static name/address and port exists (simple standalone setups, or container setups with dynamic service name resolution). This config option is not used in many high-availability setups, when a leader-election service (like ZooKeeper) is used to elect and discover the JobManager leader from potentially multiple standby JobManagers.
    • jobstore.cache-size (Default: 52428800): The job store cache size in bytes which is used to keep completed jobs in memory.
    • jobstore.expiration-time (Default: 3600): The time in seconds after which a completed job expires and is purged from the job store.
    • slot.idle.timeout (Default: 50000): The timeout in milliseconds for an idle slot in the Slot Pool.
    • slot.request.timeout (Default: 300000): The timeout in milliseconds for requesting a slot from the Slot Pool.

    TaskManager

    • task.cancellation.interval (Default: 30000): Time interval between two successive task cancellation attempts in milliseconds.
    • task.cancellation.timeout (Default: 180000): Timeout in milliseconds after which a task cancellation times out and leads to a fatal TaskManager error. A value of 0 deactivates the watch dog.
    • task.cancellation.timers.timeout (Default: 7500): Time in milliseconds we wait for the timers to finish all pending timer threads when the stream task is cancelled.
    • task.checkpoint.alignment.max-size (Default: -1): The maximum number of bytes that a checkpoint alignment may buffer. If the checkpoint alignment buffers more than the configured amount of data, the checkpoint is aborted (skipped). A value of -1 indicates that there is no limit.
    • taskmanager.debug.memory.log (Default: false): Flag indicating whether to start a thread, which repeatedly logs the memory usage of the JVM.
    • taskmanager.debug.memory.log-interval (Default: 5000): The interval (in ms) for the log thread to log the current memory usage.
    • taskmanager.exit-on-fatal-akka-error (Default: false): Whether the quarantine monitor for task managers shall be started. The quarantine monitor shuts down the actor system if it detects that it has quarantined another actor system or if it has been quarantined by another actor system.
    • taskmanager.heap.size (Default: "1024m"): JVM heap size for the TaskManagers, which are the parallel workers of the system. On YARN setups, this value is automatically configured to the size of the TaskManager's YARN container, minus a certain tolerance value.
    • taskmanager.host (Default: none): The address of the network interface that the TaskManager binds to. This option can be used to define explicitly a binding address. Because different TaskManagers need different values for this option, usually it is specified in an additional non-shared TaskManager-specific config file.
    • taskmanager.jvm-exit-on-oom (Default: false): Whether to kill the TaskManager when the task thread throws an OutOfMemoryError.
    • taskmanager.network.bind-policy (Default: "ip"): The automatic address binding policy used by the TaskManager if "taskmanager.host" is not set. The value should be one of the following: "name" (uses hostname as binding address) or "ip" (uses host's ip address as binding address).
    • taskmanager.numberOfTaskSlots (Default: 1): The number of parallel operator or user function instances that a single TaskManager can run. If this value is larger than 1, a single TaskManager takes multiple instances of a function or operator. That way, the TaskManager can utilize multiple CPU cores, but at the same time, the available memory is divided between the different operator or function instances. This value is typically proportional to the number of physical CPU cores that the TaskManager's machine has (e.g., equal to the number of cores, or half the number of cores).
    • taskmanager.registration.initial-backoff (Default: "500 ms"): The initial registration backoff between two consecutive registration attempts. The backoff is doubled for each new registration attempt until it reaches the maximum registration backoff.
    • taskmanager.registration.max-backoff (Default: "30 s"): The maximum registration backoff between two consecutive registration attempts. The max registration backoff requires a time unit specifier (ms/s/min/h/d).
    • taskmanager.registration.refused-backoff (Default: "10 s"): The backoff after a registration has been refused by the job manager before retrying to connect.
    • taskmanager.registration.timeout (Default: "5 min"): Defines the timeout for the TaskManager registration. If the duration is exceeded without a successful registration, then the TaskManager terminates.
    • taskmanager.rpc.port (Default: "0"): The task manager's IPC port. Accepts a list of ports ("50100,50101"), ranges ("50100-50200") or a combination of both. It is recommended to set a range of ports to avoid collisions when multiple TaskManagers are running on the same machine.

    For batch jobs (or if taskmanager.memory.preallocate is enabled) Flink allocates a fraction of 0.7 of the free memory (total memory configured via taskmanager.heap.size minus memory used for network buffers) for its managed memory. Managed memory helps Flink to run the batch operators efficiently. It prevents OutOfMemoryExceptions because Flink knows how much memory it can use to execute operations. If Flink runs out of managed memory, it utilizes disk space. Using managed memory, some operations can be performed directly on the raw data without having to deserialize the data to convert it into Java objects. All in all, managed memory improves the robustness and speed of the system.

    The default fraction for managed memory can be adjusted using the taskmanager.memory.fraction parameter. An absolute value may be set using taskmanager.memory.size (overrides the fraction parameter). If desired, the managed memory may be allocated outside the JVM heap. This may improve performance in setups with large memory sizes.

    • taskmanager.memory.fraction (Default: 0.7): The relative amount of memory (after subtracting the amount of memory used by network buffers) that the task manager reserves for sorting, hash tables, and caching of intermediate results. For example, a value of 0.8 means that a task manager reserves 80% of its memory (on-heap or off-heap depending on taskmanager.memory.off-heap) for internal data buffers, leaving 20% of free memory for the task manager's heap for objects created by user-defined functions. This parameter is only evaluated, if taskmanager.memory.size is not set.
    • taskmanager.memory.off-heap (Default: false): Memory allocation method (JVM heap or off-heap), used for managed memory of the TaskManager. For setups with larger quantities of memory, this can improve the efficiency of the operations performed on the memory. When set to true, then it is advised that taskmanager.memory.preallocate is also set to true.
    • taskmanager.memory.preallocate (Default: false): Whether TaskManager managed memory should be pre-allocated when the TaskManager is starting. When taskmanager.memory.off-heap is set to true, then it is advised that this configuration is also set to true. If this configuration is set to false cleaning up of the allocated off-heap memory happens only when the configured JVM parameter MaxDirectMemorySize is reached by triggering a full GC. For streaming setups, it is highly recommended to set this value to false as the core state backends currently do not use the managed memory.
    • taskmanager.memory.segment-size (Default: "32kb"): Size of memory buffers used by the network stack and the memory manager.
    • taskmanager.memory.size (Default: "0"): The amount of memory (in megabytes) that the task manager reserves on-heap or off-heap (depending on taskmanager.memory.off-heap) for sorting, hash tables, and caching of intermediate results. If unspecified, the memory manager will take a fixed ratio with respect to the size of the task manager JVM as specified by taskmanager.memory.fraction.
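
    As a sketch, a batch-oriented setup that keeps its managed memory off-heap and pre-allocated could combine the options above roughly as follows; the sizes are illustrative only:

        taskmanager.heap.size: 4096m
        taskmanager.memory.off-heap: true
        taskmanager.memory.preallocate: true
        # either reserve a fraction of the free memory ...
        taskmanager.memory.fraction: 0.7
        # ... or an absolute amount in megabytes, which overrides the fraction:
        # taskmanager.memory.size: 2048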

    Distributed Coordination

    • cluster.evenly-spread-out-slots (Default: false): Enable the slot spread out allocation strategy. This strategy tries to spread out the slots evenly across all available TaskExecutors.
    • cluster.registration.error-delay (Default: 10000): The pause made after a registration attempt caused an exception (other than timeout) in milliseconds.
    • cluster.registration.initial-timeout (Default: 100): Initial registration timeout between cluster components in milliseconds.
    • cluster.registration.max-timeout (Default: 30000): Maximum registration timeout between cluster components in milliseconds.
    • cluster.registration.refused-registration-delay (Default: 30000): The pause made after the registration attempt was refused in milliseconds.
    • cluster.services.shutdown-timeout (Default: 30000): The shutdown timeout for cluster services like executors in milliseconds.

    Distributed Coordination (via Akka)

    • akka.ask.timeout (Default: "10 s"): Timeout used for all futures and blocking Akka calls. If Flink fails due to timeouts then you should try to increase this value. Timeouts can be caused by slow machines or a congested network. The timeout value requires a time-unit specifier (ms/s/min/h/d).
    • akka.client-socket-worker-pool.pool-size-factor (Default: 1.0): The pool size factor is used to determine thread pool size using the following formula: ceil(available processors * factor). Resulting size is then bounded by the pool-size-min and pool-size-max values.
    • akka.client-socket-worker-pool.pool-size-max (Default: 2): Max number of threads to cap factor-based number to.
    • akka.client-socket-worker-pool.pool-size-min (Default: 1): Min number of threads to cap factor-based number to.
    • akka.client.timeout (Default: "60 s"): Timeout for all blocking calls on the client side.
    • akka.fork-join-executor.parallelism-factor (Default: 2.0): The parallelism factor is used to determine thread pool size using the following formula: ceil(available processors * factor). Resulting size is then bounded by the parallelism-min and parallelism-max values.
    • akka.fork-join-executor.parallelism-max (Default: 64): Max number of threads to cap factor-based parallelism number to.
    • akka.fork-join-executor.parallelism-min (Default: 8): Min number of threads to cap factor-based parallelism number to.
    • akka.framesize (Default: "10485760b"): Maximum size of messages which are sent between the JobManager and the TaskManagers. If Flink fails because messages exceed this limit, then you should increase it. The message size requires a size-unit specifier.
    • akka.jvm-exit-on-fatal-error (Default: true): Exit JVM on fatal Akka errors.
    • akka.log.lifecycle.events (Default: false): Turns on Akka's remote logging of events. Set this value to 'true' in case of debugging.
    • akka.lookup.timeout (Default: "10 s"): Timeout used for the lookup of the JobManager. The timeout value has to contain a time-unit specifier (ms/s/min/h/d).
    • akka.retry-gate-closed-for (Default: 50): Milliseconds a gate should be closed for after a remote connection was disconnected.
    • akka.server-socket-worker-pool.pool-size-factor (Default: 1.0): The pool size factor is used to determine thread pool size using the following formula: ceil(available processors * factor). Resulting size is then bounded by the pool-size-min and pool-size-max values.
    • akka.server-socket-worker-pool.pool-size-max (Default: 2): Max number of threads to cap factor-based number to.
    • akka.server-socket-worker-pool.pool-size-min (Default: 1): Min number of threads to cap factor-based number to.
    • akka.ssl.enabled (Default: true): Turns on SSL for Akka's remote communication. This is applicable only when the global ssl flag security.ssl.enabled is set to true.
    • akka.startup-timeout (Default: none): Timeout after which the startup of a remote component is considered being failed.
    • akka.tcp.timeout (Default: "20 s"): Timeout for all outbound connections. If you should experience problems with connecting to a TaskManager due to a slow network, you should increase this value.
    • akka.throughput (Default: 15): Number of messages that are processed in a batch before returning the thread to the pool. Low values denote a fair scheduling whereas high values can increase the performance at the cost of unfairness.
    • akka.transport.heartbeat.interval (Default: "1000 s"): Heartbeat interval for Akka's transport failure detector. Since Flink uses TCP, the detector is not necessary. Therefore, the detector is disabled by setting the interval to a very high value. In case you should need the transport failure detector, set the interval to some reasonable value. The interval value requires a time-unit specifier (ms/s/min/h/d).
    • akka.transport.heartbeat.pause (Default: "6000 s"): Acceptable heartbeat pause for Akka's transport failure detector. Since Flink uses TCP, the detector is not necessary. Therefore, the detector is disabled by setting the pause to a very high value. In case you should need the transport failure detector, set the pause to some reasonable value. The pause value requires a time-unit specifier (ms/s/min/h/d).
    • akka.transport.threshold (Default: 300.0): Threshold for the transport failure detector. Since Flink uses TCP, the detector is not necessary and, thus, the threshold is set to a high value.
    • akka.watch.heartbeat.interval (Default: "10 s"): Heartbeat interval for Akka's DeathWatch mechanism to detect dead TaskManagers. If TaskManagers are wrongly marked dead because of lost or delayed heartbeat messages, then you should decrease this value or increase akka.watch.heartbeat.pause. A thorough description of Akka's DeathWatch can be found here.
    • akka.watch.heartbeat.pause (Default: "60 s"): Acceptable heartbeat pause for Akka's DeathWatch mechanism. A low value does not allow an irregular heartbeat. If TaskManagers are wrongly marked dead because of lost or delayed heartbeat messages, then you should increase this value or decrease akka.watch.heartbeat.interval. A higher value increases the time to detect a dead TaskManager. A thorough description of Akka's DeathWatch can be found here.
    • akka.watch.threshold (Default: 12): Threshold for the DeathWatch failure detector. A low value is prone to false positives whereas a high value increases the time to detect a dead TaskManager. A thorough description of Akka's DeathWatch can be found here.

    REST

    • rest.address (Default: none): The address that should be used by clients to connect to the server.
    • rest.await-leader-timeout (Default: 30000): The time in ms that the client waits for the leader address, e.g., Dispatcher or WebMonitorEndpoint.
    • rest.bind-address (Default: none): The address that the server binds itself to.
    • rest.bind-port (Default: "8081"): The port that the server binds itself to. Accepts a list of ports ("50100,50101"), ranges ("50100-50200") or a combination of both. It is recommended to set a range of ports to avoid collisions when multiple Rest servers are running on the same machine.
    • rest.client.max-content-length (Default: 104857600): The maximum content length in bytes that the client will handle.
    • rest.connection-timeout (Default: 15000): The maximum time in ms for the client to establish a TCP connection.
    • rest.idleness-timeout (Default: 300000): The maximum time in ms for a connection to stay idle before failing.
    • rest.port (Default: 8081): The port that the client connects to. If rest.bind-port has not been specified, then the REST server will bind to this port.
    • rest.retry.delay (Default: 3000): The time in ms that the client waits between retries (See also rest.retry.max-attempts).
    • rest.retry.max-attempts (Default: 20): The number of retries the client will attempt if a retryable operation fails.
    • rest.server.max-content-length (Default: 104857600): The maximum content length in bytes that the server will handle.
    • rest.server.numThreads (Default: 4): The number of threads for the asynchronous processing of requests.
    • rest.server.thread-priority (Default: 5): Thread priority of the REST server's executor for processing asynchronous requests. Lowering the thread priority will give Flink's main components more CPU time whereas increasing will allocate more time for the REST server's processing.

    Blob Server

    • blob.client.connect.timeout (Default: 0): The connection timeout in milliseconds for the blob client.
    • blob.client.socket.timeout (Default: 300000): The socket timeout in milliseconds for the blob client.
    • blob.fetch.backlog (Default: 1000): The config parameter defining the backlog of BLOB fetches on the JobManager.
    • blob.fetch.num-concurrent (Default: 50): The config parameter defining the maximum number of concurrent BLOB fetches that the JobManager serves.
    • blob.fetch.retries (Default: 5): The config parameter defining the number of retries for failed BLOB fetches.
    • blob.offload.minsize (Default: 1048576): The minimum size for messages to be offloaded to the BlobServer.
    • blob.server.port (Default: "0"): The config parameter defining the server port of the blob service.
    • blob.service.cleanup.interval (Default: 3600): Cleanup interval of the blob caches at the task managers (in seconds).
    • blob.service.ssl.enabled (Default: true): Flag to override ssl support for the blob service transport.
    • blob.storage.directory (Default: none): The config parameter defining the storage directory to be used by the blob server.

    Heartbeat Manager

    • heartbeat.interval (Default: 10000): Time interval for requesting heartbeat from sender side.
    • heartbeat.timeout (Default: 50000): Timeout for requesting and receiving heartbeat for both sender and receiver sides.

    SSL Settings

    • security.ssl.algorithms (Default: "TLS_RSA_WITH_AES_128_CBC_SHA"): The comma separated list of standard SSL algorithms to be supported. Read more here.
    • security.ssl.internal.close-notify-flush-timeout (Default: -1): The timeout (in ms) for flushing the close_notify that was triggered by closing a channel. If the close_notify was not flushed in the given timeout the channel will be closed forcibly. (-1 = use system default)
    • security.ssl.internal.enabled (Default: false): Turns on SSL for internal network communication. Optionally, specific components may override this through their own settings (rpc, data transport, REST, etc).
    • security.ssl.internal.handshake-timeout (Default: -1): The timeout (in ms) during SSL handshake. (-1 = use system default)
    • security.ssl.internal.key-password (Default: none): The secret to decrypt the key in the keystore for Flink's internal endpoints (rpc, data transport, blob server).
    • security.ssl.internal.keystore (Default: none): The Java keystore file with SSL Key and Certificate, to be used by Flink's internal endpoints (rpc, data transport, blob server).
    • security.ssl.internal.keystore-password (Default: none): The secret to decrypt the keystore file for Flink's internal endpoints (rpc, data transport, blob server).
    • security.ssl.internal.session-cache-size (Default: -1): The size of the cache used for storing SSL session objects. According to https://github.com/netty/netty/issues/832, you should always set this to an appropriate number to not run into a bug with stalling IO threads during garbage collection. (-1 = use system default)
    • security.ssl.internal.session-timeout (Default: -1): The timeout (in ms) for the cached SSL session objects. (-1 = use system default)
    • security.ssl.internal.truststore (Default: none): The truststore file containing the public CA certificates to verify the peer for Flink's internal endpoints (rpc, data transport, blob server).
    • security.ssl.internal.truststore-password (Default: none): The password to decrypt the truststore for Flink's internal endpoints (rpc, data transport, blob server).
    • security.ssl.key-password (Default: none): The secret to decrypt the server key in the keystore.
    • security.ssl.keystore (Default: none): The Java keystore file to be used by the flink endpoint for its SSL Key and Certificate.
    • security.ssl.keystore-password (Default: none): The secret to decrypt the keystore file.
    • security.ssl.protocol (Default: "TLSv1.2"): The SSL protocol version to be supported for the ssl transport. Note that it doesn't support comma separated list.
    • security.ssl.provider (Default: "JDK"): The SSL engine provider to use for the ssl transport: JDK (default Java-based SSL engine) or OPENSSL (openSSL-based SSL engine using system libraries). OPENSSL is based on netty-tcnative and comes in two flavours: dynamically linked, which uses your system's openSSL libraries (if compatible) and requires opt/flink-shaded-netty-tcnative-dynamic-*.jar to be copied to lib/; and statically linked, which, due to potential licensing issues with openSSL (see LEGAL-393), cannot be shipped as a pre-built library. However, you can build the required library yourself and put it into lib/: git clone https://github.com/apache/flink-shaded.git && cd flink-shaded && mvn clean package -Pinclude-netty-tcnative-static -pl flink-shaded-netty-tcnative-static
    • security.ssl.rest.authentication-enabled (Default: false): Turns on mutual SSL authentication for external communication via the REST endpoints.
    • security.ssl.rest.enabled (Default: false): Turns on SSL for external communication via the REST endpoints.
    • security.ssl.rest.key-password (Default: none): The secret to decrypt the key in the keystore for Flink's external REST endpoints.
    • security.ssl.rest.keystore (Default: none): The Java keystore file with SSL Key and Certificate, to be used by Flink's external REST endpoints.
    • security.ssl.rest.keystore-password (Default: none): The secret to decrypt the keystore file for Flink's external REST endpoints.
    • security.ssl.rest.truststore (Default: none): The truststore file containing the public CA certificates to verify the peer for Flink's external REST endpoints.
    • security.ssl.rest.truststore-password (Default: none): The password to decrypt the truststore for Flink's external REST endpoints.
    • security.ssl.truststore (Default: none): The truststore file containing the public CA certificates to be used by flink endpoints to verify the peer's certificate.
    • security.ssl.truststore-password (Default: none): The secret to decrypt the truststore.
    • security.ssl.verify-hostname (Default: true): Flag to enable peer's hostname verification during ssl handshake.
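
    A minimal sketch for switching on SSL for the internal endpoints, assuming a keystore and truststore have already been created; all paths and secrets below are placeholders:

        security.ssl.internal.enabled: true
        security.ssl.internal.keystore: /path/to/internal.keystore
        security.ssl.internal.keystore-password: keystore_password
        security.ssl.internal.key-password: key_password
        security.ssl.internal.truststore: /path/to/internal.truststore
        security.ssl.internal.truststore-password: truststore_password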

    Netty Shuffle Environment

    • taskmanager.data.port (Default: 0): The task manager's port used for data exchange operations.
    • taskmanager.data.ssl.enabled (Default: true): Enable SSL support for the taskmanager data transport. This is applicable only when the global flag for internal SSL (security.ssl.internal.enabled) is set to true.
    • taskmanager.network.detailed-metrics (Default: false): Boolean flag to enable/disable more detailed metrics about inbound/outbound network queue lengths.
    • taskmanager.network.memory.buffers-per-channel (Default: 2): Maximum number of network buffers to use for each outgoing/incoming channel (subpartition/input channel). In credit-based flow control mode, this indicates how many credits are exclusive in each input channel. It should be configured to at least 2 for good performance: 1 buffer is for receiving in-flight data in the subpartition and 1 buffer is for parallel serialization.
    • taskmanager.network.memory.floating-buffers-per-gate (Default: 8): Number of extra network buffers to use for each outgoing/incoming gate (result partition/input gate). In credit-based flow control mode, this indicates how many floating credits are shared among all the input channels. The floating buffers are distributed based on backlog (real-time output buffers in the subpartition) feedback, and can help relieve back-pressure caused by unbalanced data distribution among the subpartitions. This value should be increased in case of higher round trip times between nodes and/or larger number of machines in the cluster.
    • taskmanager.network.memory.fraction (Default: 0.1): Fraction of JVM memory to use for network buffers. This determines how many streaming data exchange channels a TaskManager can have at the same time and how well buffered the channels are. If a job is rejected or you get a warning that the system has not enough buffers available, increase this value or the min/max values below. Also note that "taskmanager.network.memory.min" and "taskmanager.network.memory.max" may override this fraction.
    • taskmanager.network.memory.max (Default: "1gb"): Maximum memory size for network buffers.
    • taskmanager.network.memory.min (Default: "64mb"): Minimum memory size for network buffers.
    • taskmanager.network.request-backoff.initial (Default: 100): Minimum backoff in milliseconds for partition requests of input channels.
    • taskmanager.network.request-backoff.max (Default: 10000): Maximum backoff in milliseconds for partition requests of input channels.
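
    For example, to give the network stack a larger share of the JVM memory, the fraction and its bounds can be adjusted along these lines; the numbers are illustrative, not tuning advice:

        taskmanager.network.memory.fraction: 0.15
        taskmanager.network.memory.min: 64mb
        taskmanager.network.memory.max: 1gb
        # per-channel and per-gate buffers for credit-based flow control
        taskmanager.network.memory.buffers-per-channel: 2
        taskmanager.network.memory.floating-buffers-per-gate: 8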

    Network Communication (via Netty)

    These parameters allow for advanced tuning. The default values are sufficient when running concurrent high-throughput jobs on a large cluster.

    • taskmanager.network.netty.client.connectTimeoutSec (Default: 120): The Netty client connection timeout.
    • taskmanager.network.netty.client.numThreads (Default: -1): The number of Netty client threads.
    • taskmanager.network.netty.num-arenas (Default: -1): The number of Netty arenas.
    • taskmanager.network.netty.sendReceiveBufferSize (Default: 0): The Netty send and receive buffer size. This defaults to the system buffer size (cat /proc/sys/net/ipv4/tcp_[rw]mem) and is 4 MiB in modern Linux.
    • taskmanager.network.netty.server.backlog (Default: 0): The netty server connection backlog.
    • taskmanager.network.netty.server.numThreads (Default: -1): The number of Netty server threads.
    • taskmanager.network.netty.transport (Default: "nio"): The Netty transport type, either "nio" or "epoll".

    Web Frontend

    • web.access-control-allow-origin (Default: "*"): Access-Control-Allow-Origin header for all responses from the web-frontend.
    • web.address (Default: none): Address for runtime monitor web-frontend server.
    • web.backpressure.cleanup-interval (Default: 600000): Time, in milliseconds, after which cached stats are cleaned up if not accessed.
    • web.backpressure.delay-between-samples (Default: 50): Delay between stack trace samples to determine back pressure in milliseconds.
    • web.backpressure.num-samples (Default: 100): Number of stack trace samples to take to determine back pressure.
    • web.backpressure.refresh-interval (Default: 60000): Time, in milliseconds, after which available stats are deprecated and need to be refreshed (by resampling).
    • web.checkpoints.history (Default: 10): Number of checkpoints to remember for recent history.
    • web.history (Default: 5): Number of archived jobs for the JobManager.
    • web.log.path (Default: none): Path to the log file (may be in /log for standalone but under log directory when using YARN).
    • web.refresh-interval (Default: 3000): Refresh interval for the web-frontend in milliseconds.
    • web.ssl.enabled (Default: true): Flag indicating whether to override SSL support for the JobManager Web UI.
    • web.submit.enable (Default: true): Flag indicating whether jobs can be uploaded and run from the web-frontend.
    • web.timeout (Default: 10000): Timeout for asynchronous operations by the web monitor in milliseconds.
    • web.tmpdir (Default: System.getProperty("java.io.tmpdir")): Flink web directory which is used by the webmonitor.
    • web.upload.dir (Default: none): Directory for uploading the job jars. If not specified a dynamic directory will be used under the directory specified by JOB_MANAGER_WEB_TMPDIR_KEY.

    File Systems

    • fs.default-scheme (Default: none): The default filesystem scheme, used for paths that do not declare a scheme explicitly. May contain an authority, e.g. host:port in case of a HDFS NameNode.
    • fs.output.always-create-directory (Default: false): File writers running with a parallelism larger than one create a directory for the output file path and put the different result files (one per parallel writer task) into that directory. If this option is set to "true", writers with a parallelism of 1 will also create a directory and place a single result file into it. If the option is set to "false", the writer will create the file directly at the output path, without creating a containing directory.
    • fs.overwrite-files (Default: false): Specifies whether file output writers should overwrite existing files by default. Set to "true" to overwrite by default, "false" otherwise.

    Compiler/Optimizer

    • compiler.delimited-informat.max-line-samples (Default: 10): The maximum number of line samples taken by the compiler for delimited inputs. The samples are used to estimate the number of records. This value can be overridden for a specific input with the input format's parameters.
    • compiler.delimited-informat.max-sample-len (Default: 2097152): The maximal length of a line sample that the compiler takes for delimited inputs. If the length of a single sample exceeds this value (possible because of misconfiguration of the parser), the sampling aborts. This value can be overridden for a specific input with the input format's parameters.
    • compiler.delimited-informat.min-line-samples (Default: 2): The minimum number of line samples taken by the compiler for delimited inputs. The samples are used to estimate the number of records. This value can be overridden for a specific input with the input format's parameters.

    Runtime Algorithms

    • taskmanager.runtime.hashjoin-bloom-filters (Default: false): Flag to activate/deactivate bloom filters in the hybrid hash join implementation. In cases where the hash join needs to spill to disk (datasets larger than the reserved fraction of memory), these bloom filters can greatly reduce the number of spilled records, at the cost of some CPU cycles.
    • taskmanager.runtime.max-fan (Default: 128): The maximal fan-in for external merge joins and fan-out for spilling hash tables. Limits the number of file handles per operator, but may cause intermediate merging/partitioning, if set too small.
    • taskmanager.runtime.sort-spilling-threshold (Default: 0.8): A sort operation starts spilling when this fraction of its memory budget is full.

    Resource Manager

    The configuration keys in this section are independent of the used resource management framework (YARN, Mesos, Standalone, …)

    • containerized.heap-cutoff-min (Default: 600): Minimum amount of heap memory to remove in containers, as a safety margin.
    • containerized.heap-cutoff-ratio (Default: 0.25): Percentage of heap space to remove from containers (YARN / Mesos), to compensate for other JVM memory usage.
    • local.number-resourcemanager (Default: 1): The number of resource managers to start.
    • resourcemanager.job.timeout (Default: "5 minutes"): Timeout for jobs which don't have a job manager as leader assigned.
    • resourcemanager.rpc.port (Default: 0): Defines the network port to connect to for communication with the resource manager. By default, it is the port of the JobManager, because the same ActorSystem is used. It is not possible to use this configuration key to define port ranges.
    • resourcemanager.standalone.start-up-time (Default: -1): Time in milliseconds of the start-up period of a standalone cluster. During this time, the resource manager of the standalone cluster expects new task executors to be registered, and will not fail slot requests that can not be satisfied by any current registered slots. After this time, it will immediately fail pending and newly arriving requests that can not be satisfied by registered slots. If not set, 'slotmanager.request-timeout' will be used by default.
    • resourcemanager.taskmanager-timeout (Default: 30000): The timeout for an idle task manager to be released.

    Shuffle Service

    • shuffle-service-factory.class (Default: "org.apache.flink.runtime.io.network.NettyShuffleServiceFactory"): The full class name of the shuffle service factory implementation to be used by the cluster. The default implementation uses Netty for network communication and local memory as well as disk space to store results on a TaskExecutor.

    YARN

    • yarn.application-attempt-failures-validity-interval (Default: 10000): Time window in milliseconds which defines the number of application attempt failures when restarting the AM. Failures which fall outside of this window are not being considered. Set this value to -1 in order to count globally. See here for more information.
    • yarn.application-attempts (Default: none): Number of ApplicationMaster restarts. Note that the entire Flink cluster will restart and the YARN Client will lose the connection. Also, the JobManager address will change and you'll need to set the JM host:port manually. It is recommended to leave this option at 1.
    • yarn.application-master.port (Default: "0"): With this configuration option, users can specify a port, a range of ports or a list of ports for the Application Master (and JobManager) RPC port. By default we recommend using the default value (0) to let the operating system choose an appropriate port. In particular when multiple AMs are running on the same physical host, fixed port assignments prevent the AM from starting. For example when running Flink on YARN on an environment with a restrictive firewall, this option allows specifying a range of allowed ports.
    • yarn.appmaster.rpc.address (Default: none): The hostname or address where the application master RPC system is listening.
    • yarn.appmaster.rpc.port (Default: -1): The port where the application master RPC system is listening.
    • yarn.appmaster.vcores (Default: 1): The number of virtual cores (vcores) used by the YARN application master.
    • yarn.containers.vcores (Default: -1): The number of virtual cores (vcores) per YARN container. By default, the number of vcores is set to the number of slots per TaskManager, if set, or to 1, otherwise. In order for this parameter to be used your cluster must have CPU scheduling enabled. You can do this by setting the org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.
    • yarn.heartbeat.container-request-interval (Default: 500): Time between heartbeats with the ResourceManager in milliseconds if Flink requests containers. The lower this value is, the faster Flink will get notified about container allocations since requests and allocations are transmitted via heartbeats; at the same time, the more excessive containers might get allocated which will eventually be released but put pressure on Yarn. If you observe too many container allocations on the ResourceManager, then it is recommended to increase this value. See this link for more information.
    • yarn.heartbeat.interval (Default: 5): Time between heartbeats with the ResourceManager in seconds.
    • yarn.maximum-failed-containers (Default: none): Maximum number of containers the system is going to reallocate in case of a failure.
    • yarn.per-job-cluster.include-user-jar (Default: "ORDER"): Defines whether user-jars are included in the system class path for per-job-clusters as well as their positioning in the path. They can be positioned at the beginning ("FIRST"), at the end ("LAST"), or be positioned based on their name ("ORDER").
    • yarn.properties-file.location (Default: none): When a Flink job is submitted to YARN, the JobManager's host and the number of available processing slots is written into a properties file, so that the Flink client is able to pick those details up. This configuration parameter allows changing the default location of that file (for example for environments sharing a Flink installation between users).
    • yarn.tags (Default: none): A comma-separated list of tags to apply to the Flink YARN application.

    Mesos

    • mesos.failover-timeout (Default: 604800): The failover timeout in seconds for the Mesos scheduler, after which running tasks are automatically shut down.
    • mesos.master (Default: none): The Mesos master URL. The value should be in one of the following forms: host:port, zk://host1:port1,host2:port2,…/path, zk://username:password@host1:port1,host2:port2,…/path, or file:///path/to/file.
    • mesos.resourcemanager.artifactserver.port (Default: 0): The config parameter defining the Mesos artifact server port to use. Setting the port to 0 will let the OS choose an available port.
    • mesos.resourcemanager.artifactserver.ssl.enabled (Default: true): Enables SSL for the Flink artifact server. Note that security.ssl.enabled also needs to be set to true to enable encryption.
    • mesos.resourcemanager.framework.name (Default: "Flink"): Mesos framework name.
    • mesos.resourcemanager.framework.principal (Default: none): Mesos framework principal.
    • mesos.resourcemanager.framework.role (Default: "*"): Mesos framework role definition.
    • mesos.resourcemanager.framework.secret (Default: none): Mesos framework secret.
    • mesos.resourcemanager.framework.user (Default: none): Mesos framework user.
    • mesos.resourcemanager.tasks.port-assignments (Default: none): Comma-separated list of configuration keys which represent a configurable port. All port keys will dynamically get a port assigned through Mesos.

    Mesos TaskManager

    • mesos.constraints.hard.hostattribute (Default: none): Constraints for task placement on Mesos based on agent attributes. Takes a comma-separated list of key:value pairs corresponding to the attributes exposed by the target mesos agents. Example: az:eu-west-1a,series:t2
    • mesos.resourcemanager.tasks.bootstrap-cmd (Default: none): A command which is executed before the TaskManager is started.
    • mesos.resourcemanager.tasks.container.docker.force-pull-image (Default: false): Instruct the docker containerizer to forcefully pull the image rather than reuse a cached version.
    • mesos.resourcemanager.tasks.container.docker.parameters (Default: none): Custom parameters to be passed into docker run command when using the docker containerizer. Comma separated list of "key=value" pairs. The "value" may contain '='.
    • mesos.resourcemanager.tasks.container.image.name (Default: none): Image name to use for the container.
    • mesos.resourcemanager.tasks.container.type (Default: "mesos"): Type of the containerization used: "mesos" or "docker".
    • mesos.resourcemanager.tasks.container.volumes (Default: none): A comma separated list of [hostpath:]container_path[:RO|RW]. This allows for mounting additional volumes into your container.
    • mesos.resourcemanager.tasks.cpus (Default: 0.0): CPUs to assign to the Mesos workers.
    • mesos.resourcemanager.tasks.disk (Default: 0): Disk space to assign to the Mesos workers in MB.
    • mesos.resourcemanager.tasks.gpus (Default: 0): GPUs to assign to the Mesos workers.
    • mesos.resourcemanager.tasks.hostname (Default: none): Optional value to define the TaskManager's hostname. The pattern _TASK_ is replaced by the actual id of the Mesos task. This can be used to configure the TaskManager to use Mesos DNS (e.g. _TASK_.flink-service.mesos) for name lookups.
    • mesos.resourcemanager.tasks.mem (Default: 1024): Memory to assign to the Mesos workers in MB.
    • mesos.resourcemanager.tasks.taskmanager-cmd (Default: "$FLINK_HOME/bin/mesos-taskmanager.sh")
    • mesos.resourcemanager.tasks.uris (Default: none): A comma separated list of URIs of custom artifacts to be downloaded into the sandbox of Mesos workers.
    • taskmanager.numberOfTaskSlots (Default: 1): The number of parallel operator or user function instances that a single TaskManager can run. If this value is larger than 1, a single TaskManager takes multiple instances of a function or operator. That way, the TaskManager can utilize multiple CPU cores, but at the same time, the available memory is divided between the different operator or function instances. This value is typically proportional to the number of physical CPU cores that the TaskManager's machine has (e.g., equal to the number of cores, or half the number of cores).

    High Availability (HA)

    • high-availability (Default: "NONE"): Defines high-availability mode used for the cluster execution. To enable high-availability, set this mode to "ZOOKEEPER" or specify FQN of factory class.
    • high-availability.cluster-id (Default: "/default"): The ID of the Flink cluster, used to separate multiple Flink clusters from each other. Needs to be set for standalone clusters but is automatically inferred in YARN and Mesos.
    • high-availability.job.delay (Default: none): The time before a JobManager recovers the current jobs after a failover.
    • high-availability.jobmanager.port (Default: "0"): Optional port (range) used by the job manager in high-availability mode.
    • high-availability.storageDir (Default: none): File system path (URI) where Flink persists metadata in high-availability setups.

    ZooKeeper-based HA Mode

    • high-availability.zookeeper.client.acl (Default: "open"): Defines the ACL (open|creator) to be configured on ZK node. The configuration value can be set to "creator" if the ZooKeeper server configuration has the "authProvider" property mapped to use SASLAuthenticationProvider and the cluster is configured to run in secure mode (Kerberos).
    • high-availability.zookeeper.client.connection-timeout (Default: 15000): Defines the connection timeout for ZooKeeper in ms.
    • high-availability.zookeeper.client.max-retry-attempts (Default: 3): Defines the number of connection retries before the client gives up.
    • high-availability.zookeeper.client.retry-wait (Default: 5000): Defines the pause between consecutive retries in ms.
    • high-availability.zookeeper.client.session-timeout (Default: 60000): Defines the session timeout for the ZooKeeper session in ms.
    • high-availability.zookeeper.path.checkpoint-counter (Default: "/checkpoint-counter"): ZooKeeper root path (ZNode) for checkpoint counters.
    • high-availability.zookeeper.path.checkpoints (Default: "/checkpoints"): ZooKeeper root path (ZNode) for completed checkpoints.
    • high-availability.zookeeper.path.jobgraphs (Default: "/jobgraphs"): ZooKeeper root path (ZNode) for job graphs.
    • high-availability.zookeeper.path.latch (Default: "/leaderlatch"): Defines the znode of the leader latch which is used to elect the leader.
    • high-availability.zookeeper.path.leader (Default: "/leader"): Defines the znode of the leader which contains the URL to the leader and the current leader session ID.
    • high-availability.zookeeper.path.mesos-workers (Default: "/mesos-workers"): The ZooKeeper root path for persisting the Mesos worker information.
    • high-availability.zookeeper.path.root (Default: "/flink"): The root path under which Flink stores its entries in ZooKeeper.
    • high-availability.zookeeper.path.running-registry (Default: "/running_job_registry/")
    • high-availability.zookeeper.quorum (Default: none): The ZooKeeper quorum to use, when running Flink in a high-availability mode with ZooKeeper.
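
    Taken together, a ZooKeeper-based HA setup is usually sketched in conf/flink-conf.yaml roughly as below; the quorum, storage directory and cluster id are placeholders:

        high-availability: ZOOKEEPER
        high-availability.zookeeper.quorum: zk1:2181,zk2:2181,zk3:2181
        high-availability.zookeeper.path.root: /flink
        high-availability.cluster-id: /my-flink-cluster
        high-availability.storageDir: hdfs:///flink/ha/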

    ZooKeeper Security

    • zookeeper.sasl.disable (Default: false)
    • zookeeper.sasl.login-context-name (Default: "Client")
    • zookeeper.sasl.service-name (Default: "zookeeper")

    Kerberos-based Security

    • security.kerberos.login.contexts (Default: none): A comma-separated list of login contexts to provide the Kerberos credentials to (for example, Client,KafkaClient to use the credentials for ZooKeeper authentication and for Kafka authentication).
    • security.kerberos.login.keytab (Default: none): Absolute path to a Kerberos keytab file that contains the user credentials.
    • security.kerberos.login.principal (Default: none): Kerberos principal name associated with the keytab.
    • security.kerberos.login.use-ticket-cache (Default: true): Indicates whether to read from your Kerberos ticket cache.
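
    As a sketch, providing a keytab to the ZooKeeper and Kafka login contexts could look like this; the keytab path and principal are placeholders:

        security.kerberos.login.use-ticket-cache: false
        security.kerberos.login.keytab: /path/to/flink.keytab
        security.kerberos.login.principal: flink-user
        # hand the credentials to ZooKeeper and Kafka authentication
        security.kerberos.login.contexts: Client,KafkaClient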

    Environment

    • env.hadoop.conf.dir (Default: none): Path to the Hadoop configuration directory. It is required to read HDFS and/or YARN configuration. You can also set it via environment variable.
    • env.java.opts (Default: none): Java options to start the JVM of all Flink processes with.
    • env.java.opts.historyserver (Default: none): Java options to start the JVM of the HistoryServer with.
    • env.java.opts.jobmanager (Default: none): Java options to start the JVM of the JobManager with.
    • env.java.opts.taskmanager (Default: none): Java options to start the JVM of the TaskManager with.
    • env.log.dir (Default: none): Defines the directory where the Flink logs are saved. It has to be an absolute path. (Defaults to the log directory under Flink's home)
    • env.log.max (Default: 5): The maximum number of old log files to keep.
    • env.ssh.opts (Default: none): Additional command line options passed to SSH clients when starting or stopping JobManager, TaskManager, and Zookeeper services (start-cluster.sh, stop-cluster.sh, start-zookeeper-quorum.sh, stop-zookeeper-quorum.sh).
    • env.yarn.conf.dir (Default: none): Path to the YARN configuration directory. It is required to run Flink on YARN. You can also set it via environment variable.

    Checkpointing

    • state.backend (Default: none): The state backend to be used to store and checkpoint state.
    • state.backend.async (Default: true): Option whether the state backend should use an asynchronous snapshot method where possible and configurable. Some state backends may not support asynchronous snapshots, or only support asynchronous snapshots, and ignore this option.
    • state.backend.fs.memory-threshold (Default: 1024): The minimum size of state data files. All state chunks smaller than that are stored inline in the root checkpoint metadata file.
    • state.backend.fs.write-buffer-size (Default: 4096): The default size of the write buffer for the checkpoint streams that write to file systems. The actual write buffer size is determined to be the maximum of the value of this option and option 'state.backend.fs.memory-threshold'.
    • state.backend.incremental (Default: false): Option whether the state backend should create incremental checkpoints, if possible. For an incremental checkpoint, only a diff from the previous checkpoint is stored, rather than the complete checkpoint state. Some state backends may not support incremental checkpoints and ignore this option.
    • state.backend.local-recovery (Default: false): This option configures local recovery for this state backend. By default, local recovery is deactivated. Local recovery currently only covers keyed state backends. Currently, MemoryStateBackend does not support local recovery and ignores this option.
    • state.checkpoints.dir (Default: none): The default directory used for storing the data files and meta data of checkpoints in a Flink supported filesystem. The storage path must be accessible from all participating processes/nodes (i.e. all TaskManagers and JobManagers).
    • state.checkpoints.num-retained (Default: 1): The maximum number of completed checkpoints to retain.
    • state.savepoints.dir (Default: none): The default directory for savepoints. Used by the state backends that write savepoints to file systems (MemoryStateBackend, FsStateBackend, RocksDBStateBackend).
    • taskmanager.state.local.root-dirs (Default: none): The config parameter defining the root directories for storing file-based state for local recovery. Local recovery currently only covers keyed state backends. Currently, MemoryStateBackend does not support local recovery and ignores this option.
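
    A typical combination of these options, here with RocksDB and HDFS directories as placeholders; the values are illustrative only:

        state.backend: rocksdb
        state.backend.incremental: true
        state.checkpoints.dir: hdfs:///flink/checkpoints
        state.savepoints.dir: hdfs:///flink/savepoints
        state.checkpoints.num-retained: 1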

    RocksDB State Backend

    KeyDefaultDescription
    ##### state.backend.rocksdb.checkpoint.transfer.thread.num1The number of threads (per stateful operator) used to transfer (download and upload) files in RocksDBStateBackend.
    ##### state.backend.rocksdb.localdir(none)The local directory (on the TaskManager) where RocksDB puts its files.
    ##### state.backend.rocksdb.options-factory"org.apache.flink.contrib.streaming.state.DefaultConfigurableOptionsFactory"The options factory class for RocksDB to create DBOptions and ColumnFamilyOptions. The default options factory is org.apache.flink.contrib.streaming.state.DefaultConfigurableOptionsFactory, and it would read the configured options which provided in 'RocksDBConfigurableOptions'.
    ##### state.backend.rocksdb.predefined-options"DEFAULT"The predefined settings for RocksDB DBOptions and ColumnFamilyOptions by Flink community. Current supported candidate predefined-options are DEFAULT, SPINNING_DISK_OPTIMIZED, SPINNING_DISK_OPTIMIZED_HIGH_MEM or FLASH_SSD_OPTIMIZED. Note that user customized options and options from the OptionsFactory are applied on top of these predefined ones.
    ##### state.backend.rocksdb.timer-service.factory"HEAP"This determines the factory for timer service state implementation. Options are either HEAP (heap-based, default) or ROCKSDB for an implementation based on RocksDB.
    ##### state.backend.rocksdb.ttl.compaction.filter.enabledfalseThis determines if the compaction filter to clean up state with TTL is enabled for the backend. Note: users can still decide in the state TTL configuration in the state descriptor whether the filter is active for a particular state or not.
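
    A sketch of how these keys combine for a RocksDB-backed job; the local directory and the predefined-options choice are illustrative assumptions, not recommendations:

    ```yaml
    # Example RocksDB state backend configuration (values are placeholders).
    state.backend: rocksdb
    state.backend.incremental: true
    state.backend.rocksdb.localdir: /data/flink/rocksdb
    state.backend.rocksdb.predefined-options: SPINNING_DISK_OPTIMIZED
    state.backend.rocksdb.timer-service.factory: ROCKSDB
    ```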

    RocksDB Configurable Options

    Specific RocksDB configurable options, provided by Flink, used to create a corresponding ConfigurableOptionsFactory. The created factory is used as the default OptionsFactory in RocksDBStateBackend unless the user defines an OptionsFactory and sets it via RocksDBStateBackend.setOptions(optionsFactory).

    KeyDefaultDescription
    ##### state.backend.rocksdb.block.blocksize(none)The approximate size (in bytes) of user data packed per block. RocksDB's default block size is '4KB'.
    ##### state.backend.rocksdb.block.cache-size(none)The amount of cache for data blocks in RocksDB. RocksDB's default block cache size is '8MB'.
    ##### state.backend.rocksdb.compaction.level.max-size-level-base(none)The upper bound of the total size of level base files in bytes. RocksDB's default is '10MB'.
    ##### state.backend.rocksdb.compaction.level.target-file-size-base(none)The target file size for compaction, which determines a level-1 file size. RocksDB's default is '2MB'.
    ##### state.backend.rocksdb.compaction.level.use-dynamic-size(none)If true, RocksDB will pick the target size of each level dynamically. Starting from an empty DB, RocksDB makes the last level the base level, which means merging L0 data into the last level, until it exceeds max_bytes_for_level_base, and then repeats this process for the second-to-last level and so on. RocksDB's default is 'false'. For more information, please refer to RocksDB's documentation.
    ##### state.backend.rocksdb.compaction.style(none)The specified compaction style for the DB. Candidate compaction styles are LEVEL, FIFO or UNIVERSAL; RocksDB chooses 'LEVEL' as the default style.
    ##### state.backend.rocksdb.files.open(none)The maximum number of open files (per TaskManager) that can be used by the DB; '-1' means no limit. RocksDB's default is '5000'.
    ##### state.backend.rocksdb.thread.num(none)The maximum number of concurrent background flush and compaction jobs (per TaskManager). RocksDB's default is '1'.
    ##### state.backend.rocksdb.writebuffer.count(none)The maximum number of write buffers that are built up in memory. RocksDB's default is '2'.
    ##### state.backend.rocksdb.writebuffer.number-to-merge(none)The minimum number of write buffers that will be merged together before writing to storage. RocksDB's default is '1'.
    ##### state.backend.rocksdb.writebuffer.size(none)The amount of data built up in memory (backed by an unsorted log on disk) before converting to sorted on-disk files. RocksDB's default write buffer size is '4MB'.
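
    For illustration, the configurable options could be tuned as below; the sizes and thread counts are arbitrary example values (memory sizes use Flink's usual notation such as '256mb'), not tuning advice:

    ```yaml
    # Hypothetical RocksDB tuning via the configurable options above.
    state.backend.rocksdb.block.cache-size: 256mb
    state.backend.rocksdb.thread.num: 4
    state.backend.rocksdb.writebuffer.size: 64mb
    state.backend.rocksdb.writebuffer.count: 4
    state.backend.rocksdb.files.open: -1
    ```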

    Queryable State

    KeyDefaultDescription
    ##### queryable-state.client.network-threads0Number of network (Netty's event loop) Threads for queryable state client.
    ##### queryable-state.enablefalseOption whether the queryable state proxy and server should be enabled where possible and configurable.
    ##### queryable-state.proxy.network-threads0Number of network (Netty's event loop) Threads for queryable state proxy.
    ##### queryable-state.proxy.ports"9069"The port range of the queryable state proxy. The specified range can be a single port: "9123", a range of ports: "50100-50200", or a list of ranges and ports: "50100-50200,50300-50400,51234".
    ##### queryable-state.proxy.query-threads0Number of query Threads for queryable state proxy. Uses the number of slots if set to 0.
    ##### queryable-state.server.network-threads0Number of network (Netty's event loop) Threads for queryable state server.
    ##### queryable-state.server.ports"9067"The port range of the queryable state server. The specified range can be a single port: "9123", a range of ports: "50100-50200", or a list of ranges and ports: "50100-50200,50300-50400,51234".
    ##### queryable-state.server.query-threads0Number of query Threads for queryable state server. Uses the number of slots if set to 0.
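
    Enabling queryable state usually needs only a few of these keys; the port ranges below are examples:

    ```yaml
    # Enable the queryable state proxy and server (example port ranges).
    queryable-state.enable: true
    queryable-state.proxy.ports: "50100-50120"
    queryable-state.server.ports: "50200-50220"
    ```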

    Metrics

    KeyDefaultDescription
    ##### metrics.fetcher.update-interval10000Update interval for the metric fetcher used by the web UI in milliseconds. Decrease this value for faster updating metrics. Increase this value if the metric fetcher causes too much load. Setting this value to 0 disables the metric fetching completely.
    ##### metrics.internal.query-service.port"0"The port range used for Flink's internal metric query service. Accepts a list of ports (“50100,50101”), ranges (“50100-50200”) or a combination of both. It is recommended to set a range of ports to avoid collisions when multiple Flink components are running on the same machine. By default Flink will pick a random port.
    ##### metrics.internal.query-service.thread-priority1The thread priority used for Flink's internal metric query service. The thread is created by Akka's thread pool executor. The range of the priority is from 1 (MIN_PRIORITY) to 10 (MAX_PRIORITY). Warning, increasing this value may bring the main Flink components down.
    ##### metrics.latency.granularity"operator"Defines the granularity of latency metrics. Accepted values are: single - track latency without differentiating between sources and subtasks; operator - track latency while differentiating between sources, but not subtasks; subtask - track latency while differentiating between sources and subtasks.
    ##### metrics.latency.history-size128Defines the number of measured latencies to maintain at each operator.
    ##### metrics.latency.interval0Defines the interval at which latency tracking marks are emitted from the sources. Disables latency tracking if set to 0 or a negative value. Enabling this feature can significantly impact the performance of the cluster.
    ##### metrics.reporter.<name>.<parameter>(none)Configures the parameter <parameter> for the reporter named <name>.
    ##### metrics.reporter.<name>.class(none)The reporter class to use for the reporter named <name>.
    ##### metrics.reporter.<name>.interval(none)The reporter interval to use for the reporter named <name>.
    ##### metrics.reporters(none)An optional list of reporter names. If configured, only reporters whose name matches any of the names in the list will be started. Otherwise, all reporters that could be found in the configuration will be started.
    ##### metrics.scope.delimiter"."Delimiter used to assemble the metric identifier.
    ##### metrics.scope.jm"<host>.jobmanager"Defines the scope format string that is applied to all metrics scoped to a JobManager.
    ##### metrics.scope.jm.job"<host>.jobmanager.<job_name>"Defines the scope format string that is applied to all metrics scoped to a job on a JobManager.
    ##### metrics.scope.operator"<host>.taskmanager.<tm_id>.<job_name>.<operator_name>.<subtask_index>"Defines the scope format string that is applied to all metrics scoped to an operator.
    ##### metrics.scope.task"<host>.taskmanager.<tm_id>.<job_name>.<task_name>.<subtask_index>"Defines the scope format string that is applied to all metrics scoped to a task.
    ##### metrics.scope.tm"<host>.taskmanager.<tm_id>"Defines the scope format string that is applied to all metrics scoped to a TaskManager.
    ##### metrics.scope.tm.job"<host>.taskmanager.<tm_id>.<job_name>"Defines the scope format string that is applied to all metrics scoped to a job on a TaskManager.
    ##### metrics.system-resourcefalseFlag indicating whether Flink should report system resource metrics such as machine's CPU, memory or network usage.
    ##### metrics.system-resource-probing-interval5000Interval between probing of system resource metrics specified in milliseconds. Has an effect only when 'metrics.system-resource' is enabled.
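
    As a sketch, a reporter can be wired up as follows; the reporter name "my_jmx" is arbitrary, and the port entry is a reporter-specific parameter of the JMX reporter shipped with Flink, passed via metrics.reporter.<name>.<parameter>:

    ```yaml
    # Example metrics setup with a single JMX reporter.
    metrics.reporters: my_jmx
    metrics.reporter.my_jmx.class: org.apache.flink.metrics.jmx.JMXReporter
    metrics.reporter.my_jmx.port: 8789-8799
    metrics.latency.interval: 30000
    ```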

    RocksDB Native Metrics

    Certain RocksDB native metrics may be forwarded to Flink’s metrics reporter. All native metrics are scoped to operators and then further broken down by column family; values are reported as unsigned longs.

    Note: Enabling native metrics may cause degraded performance and should be set carefully.

    KeyDefaultDescription
    ##### state.backend.rocksdb.metrics.actual-delayed-write-ratefalseMonitor the current actual delayed write rate. 0 means no delay.
    ##### state.backend.rocksdb.metrics.background-errorsfalseMonitor the number of background errors in RocksDB.
    ##### state.backend.rocksdb.metrics.compaction-pendingfalseTrack pending compactions in RocksDB. Returns 1 if a compaction is pending, 0 otherwise.
    ##### state.backend.rocksdb.metrics.cur-size-active-mem-tablefalseMonitor the approximate size of the active memtable in bytes.
    ##### state.backend.rocksdb.metrics.cur-size-all-mem-tablesfalseMonitor the approximate size of the active and unflushed immutable memtables in bytes.
    ##### state.backend.rocksdb.metrics.estimate-live-data-sizefalseEstimate of the amount of live data in bytes.
    ##### state.backend.rocksdb.metrics.estimate-num-keysfalseEstimate the number of keys in RocksDB.
    ##### state.backend.rocksdb.metrics.estimate-pending-compaction-bytesfalseEstimated total number of bytes compaction needs to rewrite to get all levels down to under target size. Not valid for other compactions than level-based.
    ##### state.backend.rocksdb.metrics.estimate-table-readers-memfalseEstimate the memory used for reading SST tables, excluding memory used in block cache (e.g., filter and index blocks) in bytes.
    ##### state.backend.rocksdb.metrics.mem-table-flush-pendingfalseMonitor the number of pending memtable flushes in RocksDB.
    ##### state.backend.rocksdb.metrics.num-deletes-active-mem-tablefalseMonitor the total number of delete entries in the active memtable.
    ##### state.backend.rocksdb.metrics.num-deletes-imm-mem-tablesfalseMonitor the total number of delete entries in the unflushed immutable memtables.
    ##### state.backend.rocksdb.metrics.num-entries-active-mem-tablefalseMonitor the total number of entries in the active memtable.
    ##### state.backend.rocksdb.metrics.num-entries-imm-mem-tablesfalseMonitor the total number of entries in the unflushed immutable memtables.
    ##### state.backend.rocksdb.metrics.num-immutable-mem-tablefalseMonitor the number of immutable memtables in RocksDB.
    ##### state.backend.rocksdb.metrics.num-live-versionsfalseMonitor number of live versions. Version is an internal data structure. See RocksDB file version_set.h for details. More live versions often mean more SST files are held from being deleted, by iterators or unfinished compactions.
    ##### state.backend.rocksdb.metrics.num-running-compactionsfalseMonitor the number of currently running compactions.
    ##### state.backend.rocksdb.metrics.num-running-flushesfalseMonitor the number of currently running flushes.
    ##### state.backend.rocksdb.metrics.num-snapshotsfalseMonitor the number of unreleased snapshots of the database.
    ##### state.backend.rocksdb.metrics.size-all-mem-tablesfalseMonitor the approximate size of the active, unflushed immutable, and pinned immutable memtables in bytes.
    ##### state.backend.rocksdb.metrics.total-sst-files-sizefalseMonitor the total size (bytes) of all SST files. WARNING: may slow down online queries if there are too many files.
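
    Each native metric is switched on individually; a small, arbitrary selection might look like this (keeping the performance warning above in mind):

    ```yaml
    # Opt into a few RocksDB native metrics.
    state.backend.rocksdb.metrics.estimate-num-keys: true
    state.backend.rocksdb.metrics.cur-size-all-mem-tables: true
    state.backend.rocksdb.metrics.num-running-compactions: true
    ```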

    History Server

    You have to configure jobmanager.archive.fs.dir in order to archive terminated jobs and add it to the list of monitored directories via historyserver.archive.fs.dir if you want to display them via the HistoryServer’s web frontend.

    • jobmanager.archive.fs.dir: Directory to upload information about terminated jobs to. You have to add this directory to the list of monitored directories of the history server via historyserver.archive.fs.dir.
    KeyDefaultDescription
    ##### historyserver.archive.fs.dir(none)Comma separated list of directories to fetch archived jobs from. The history server will monitor these directories for archived jobs. You can configure the JobManager to archive jobs to a directory via jobmanager.archive.fs.dir.
    ##### historyserver.archive.fs.refresh-interval10000Interval in milliseconds for refreshing the archived job directories.
    ##### historyserver.web.address(none)Address of the HistoryServer's web interface.
    ##### historyserver.web.port8082Port of the HistoryServer's web interface.
    ##### historyserver.web.refresh-interval10000The refresh interval for the HistoryServer web-frontend in milliseconds.
    ##### historyserver.web.ssl.enabledfalseEnable HTTPs access to the HistoryServer web frontend. This is applicable only when the global SSL flag security.ssl.enabled is set to true.
    ##### historyserver.web.tmpdir(none)This configuration parameter allows defining the Flink web directory to be used by the history server web interface. The web interface will copy its static files into the directory.
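
    A typical pairing of the archiving and monitoring directories might look like this; the HDFS path is a placeholder:

    ```yaml
    # JobManager archives finished jobs where the HistoryServer looks for them.
    jobmanager.archive.fs.dir: hdfs:///flink/completed-jobs
    historyserver.archive.fs.dir: hdfs:///flink/completed-jobs
    historyserver.web.port: 8082
    historyserver.archive.fs.refresh-interval: 10000
    ```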

    Legacy

    • mode: Execution mode of Flink. Possible values are legacy and new. In order to start the legacy components, you have to specify legacy (DEFAULT: new).

    Background

    Configuring the Network Buffers

    If you ever see the Exception java.io.IOException: Insufficient number of network buffers, you need to adapt the amount of memory used for network buffers in order for your program to run on your task managers.

    Network buffers are a critical resource for the communication layers. They are used to buffer records before transmission over a network, and to buffer incoming data before dissecting it into records and handing them to the application. A sufficient number of network buffers is critical to achieve a good throughput.

    Since Flink 1.3, you may follow the idiom "more is better" without any penalty on the latency (we prevent excessive buffering in each outgoing and incoming channel, i.e. buffer bloat, by limiting the actual number of buffers used by each channel).

    In general, configure the task manager to have enough buffers that each logical network connection you expect to be open at the same time has a dedicated buffer. A logical network connection exists for each point-to-point exchange of data over the network, which typically happens at repartitioning or broadcasting steps (shuffle phase). In those, each parallel task inside the TaskManager has to be able to talk to all other parallel tasks.

    Note: Since Flink 1.5, network buffers will always be allocated off-heap, i.e. outside of the JVM heap, irrespective of the value of taskmanager.memory.off-heap. This way, we can pass these buffers directly to the underlying network stack layers.

    Setting Memory Fractions

    Previously, the number of network buffers was set manually, which became a quite error-prone task (see below). Since Flink 1.3, it is possible to define a fraction of memory that is used for network buffers with the following configuration parameters (a sample configuration follows the list):

    • taskmanager.network.memory.fraction: Fraction of JVM memory to use for network buffers (DEFAULT: 0.1),
    • taskmanager.network.memory.min: Minimum memory size for network buffers (DEFAULT: 64MB),
    • taskmanager.network.memory.max: Maximum memory size for network buffers (DEFAULT: 1GB), and
    • taskmanager.memory.segment-size: Size of memory buffers used by the memory manager and the network stack in bytes (DEFAULT: 32KB).
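
    A sketch of such a fraction-based setup, with purely illustrative values (memory sizes use Flink's memory notation):

    ```yaml
    # Use 15% of JVM memory for network buffers, bounded between 128 MB and 2 GB.
    taskmanager.network.memory.fraction: 0.15
    taskmanager.network.memory.min: 128mb
    taskmanager.network.memory.max: 2gb
    taskmanager.memory.segment-size: 32kb
    ```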

    Setting the Number of Network Buffers directly

    Note: This way of configuring the amount of memory used for network buffers is deprecated. Please consider using the method above by defining a fraction of memory to use.

    The required number of buffers on a task manager is total-degree-of-parallelism (number of targets) * intra-node-parallelism (number of sources in one task manager) * n, with n being a constant that defines how many repartitioning/broadcasting steps you expect to be active at the same time. Since the intra-node-parallelism is typically the number of cores, and more than 4 repartitioning or broadcasting channels are rarely active in parallel, it frequently boils down to

    #slots-per-TM^2 * #TMs * 4

    where #slots-per-TM is the number of slots per TaskManager and #TMs is the total number of task managers.

    To support, for example, a cluster of 20 8-slot machines, you should use roughly 5000 network buffers for optimal throughput.

    Each network buffer has by default a size of 32 KiBytes. In the example above, the system would thus allocate roughly 160 MiBytes for network buffers.

    The number and size of network buffers can be configured with the following parameters (a sample configuration follows the list):

    • taskmanager.network.numberOfBuffers, and
    • taskmanager.memory.segment-size.
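
    Applied to the 20-machine, 8-slot example above, a direct (and deprecated) configuration might look like this:

    ```yaml
    # 8^2 slots * 20 TMs * 4 = 5120 buffers of 32 KiB each (~160 MiB per TaskManager).
    taskmanager.network.numberOfBuffers: 5120
    taskmanager.memory.segment-size: 32kb
    ```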

    Configuring Temporary I/O Directories

    Although Flink aims to process as much data in main memory as possible, it is not uncommon that more data needs to be processed than memory is available. Flink’s runtime is designed to write temporary data to disk to handle these situations.

    The io.tmp.dirs parameter specifies a list of directories into which Flink writes temporary files. The paths of the directories need to be separated by ‘:’ (colon character). Flink will concurrently write (or read) one temporary file to (from) each configured directory. This way, temporary I/O can be evenly distributed over multiple independent I/O devices such as hard disks to improve performance. To leverage fast I/O devices (e.g., SSD, RAID, NAS), it is possible to specify a directory multiple times.

    If the io.tmp.dirs parameter is not explicitly specified, Flink writes temporary data to the temporary directory of the operating system, such as /tmp in Linux systems.
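
    For instance, to spread temporary files over two independent disks (placeholder paths):

    ```yaml
    # Colon-separated list of temporary directories on different disks.
    io.tmp.dirs: /disk1/flink-tmp:/disk2/flink-tmp
    ```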

    Configuring TaskManager processing slots

    Flink executes a program in parallel by splitting it into subtasks and scheduling these subtasks to processing slots.

    Each Flink TaskManager provides processing slots in the cluster. The number of slots is typically proportional to the number of available CPU cores of each TaskManager. As a general recommendation, the number of available CPU cores is a good default for taskmanager.numberOfTaskSlots.

    When starting a Flink application, users can supply the default number of slots to use for that job; on the command line this value is set with -p (for parallelism). In addition, it is possible to set the number of slots in the programming APIs for the whole application and for individual operators.
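
    Putting this together for a hypothetical 8-core machine:

    ```yaml
    # One slot per core, and a matching default parallelism for jobs
    # that do not set -p explicitly.
    taskmanager.numberOfTaskSlots: 8
    parallelism.default: 8
    ```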
