• Command-Line Interface
    • Examples
      • Job Submission Examples
      • Job Management Examples
      • Savepoints
        • Trigger a Savepoint
        • Trigger a Savepoint with YARN
        • Stop
        • Cancel with a savepoint (deprecated)
        • Restore a savepoint
        • Dispose a savepoint
    • Usage

    Command-Line Interface

    Flink provides a Command-Line Interface (CLI) to run programs that are packaged as JAR files and to control their execution. The CLI is part of any Flink setup and is available in local single-node setups as well as in distributed setups. It is located under <flink-home>/bin/flink and connects by default to the running Flink master (JobManager) that was started from the same installation directory.

    The command line can be used to

    • submit jobs for execution,
    • cancel a running job,
    • provide information about a job,
    • list running and waiting jobs, and
    • trigger and dispose savepoints.

    A prerequisite to using the command line interface is that the Flink master (JobManager) has been started (via <flink-home>/bin/start-cluster.sh) or that a YARN environment is available.
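
    For a local setup, a minimal sketch of this prerequisite, using the scripts named above and assuming you are in <flink-home>:

        # Start a local single-node Flink cluster
        ./bin/start-cluster.sh

        # Verify that the CLI can reach the JobManager by listing jobs
        ./bin/flink list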


    Examples

    Job Submission Examples


    These examples show how to submit a job using the CLI script.

    • Run example program with no arguments:
        ./bin/flink run ./examples/batch/WordCount.jar
    • Run example program with arguments for input and result files:
        ./bin/flink run ./examples/batch/WordCount.jar \
            --input file:///home/user/hamlet.txt --output file:///home/user/wordcount_out
    • Run example program with parallelism 16 and arguments for input and result files:
        ./bin/flink run -p 16 ./examples/batch/WordCount.jar \
            --input file:///home/user/hamlet.txt --output file:///home/user/wordcount_out
    • Run example program with flink log output disabled:
        ./bin/flink run -q ./examples/batch/WordCount.jar
    • Run example program in detached mode:
        ./bin/flink run -d ./examples/batch/WordCount.jar
    • Run example program on a specific JobManager:
        ./bin/flink run -m myJMHost:8081 \
            ./examples/batch/WordCount.jar \
            --input file:///home/user/hamlet.txt --output file:///home/user/wordcount_out
    • Run example program with a specific class as an entry point:
        ./bin/flink run -c org.apache.flink.examples.java.wordcount.WordCount \
            ./examples/batch/WordCount.jar \
            --input file:///home/user/hamlet.txt --output file:///home/user/wordcount_out
    • Run example program using a per-job YARN cluster with 2 TaskManagers:
        ./bin/flink run -m yarn-cluster -yn 2 \
            ./examples/batch/WordCount.jar \
            --input hdfs:///user/hamlet.txt --output hdfs:///user/wordcount_out
    • Submit a Python Table job:
        ./bin/flink run -py WordCount.py
    • Submit a Python Table job with multiple dependencies:
        ./bin/flink run -py examples/python/table/batch/word_count.py \
            -pyfs file:///user.txt,hdfs:///$namenode_address/username.txt
    • Submit a Python Table job with multiple dependencies, with the main entry module of the Python job specified via the pym option:
        ./bin/flink run -pym batch.word_count -pyfs examples/python/table/batch
    • Submit a Python Table job with parallelism 16:
        ./bin/flink run -p 16 -py examples/python/table/batch/word_count.py
    • Submit a Python Table job with flink log output disabled:
        ./bin/flink run -q -py examples/python/table/batch/word_count.py
    • Submit a Python Table job in detached mode:
        ./bin/flink run -d -py examples/python/table/batch/word_count.py
    • Submit a Python Table job on a specific JobManager:
        ./bin/flink run -m myJMHost:8081 \
            -py examples/python/table/batch/word_count.py
    • Submit a Python Table job on a per-job YARN cluster with 2 TaskManagers:
        ./bin/flink run -m yarn-cluster -yn 2 \
            -py examples/python/table/batch/word_count.py

    Job Management Examples


    • Display the optimized execution plan for the WordCount example program as JSON:
        ./bin/flink info ./examples/batch/WordCount.jar \
            --input file:///home/user/hamlet.txt --output file:///home/user/wordcount_out
    • List scheduled and running jobs (including their JobIDs):
        ./bin/flink list
    • List scheduled jobs (including their JobIDs):
        ./bin/flink list -s
    • List running jobs (including their JobIDs):
        ./bin/flink list -r
    • List all existing jobs (including their JobIDs):
        ./bin/flink list -a
    • List running Flink jobs inside Flink YARN session:
        ./bin/flink list -m yarn-cluster -yid <yarnApplicationID> -r
    • Cancel a job:
        ./bin/flink cancel <jobID>
    • Cancel a job with a savepoint (deprecated; use “stop” instead):
        ./bin/flink cancel -s [targetDirectory] <jobID>
    • Gracefully stop a job with a savepoint (streaming jobs only):
        ./bin/flink stop [-p targetDirectory] [-d] <jobID>

    Savepoints

    Savepoints are controlled via the command line client:

    Trigger a Savepoint

        ./bin/flink savepoint <jobId> [savepointDirectory]

    This will trigger a savepoint for the job with ID jobId and return the path of the created savepoint. You need this path to restore and dispose savepoints.

    Furthermore, you can optionally specify a target file system directory to store the savepoint in. The directory needs to be accessible by the JobManager.

    If you don’t specify a target directory, you need to have configured a default directory. Otherwise, triggering the savepoint will fail.
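
    One way to provide such a default is the state.savepoints.dir setting referenced in the Usage section below; a minimal sketch, assuming the standard configuration file and an example HDFS path:

        # conf/flink-conf.yaml -- default target directory for savepoints
        state.savepoints.dir: hdfs:///flink/savepoints

    With this in place, ./bin/flink savepoint <jobId> can be run without an explicit savepointDirectory.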

    Trigger a Savepoint with YARN

        ./bin/flink savepoint <jobId> [savepointDirectory] -yid <yarnAppId>

    This will trigger a savepoint for the job with ID jobId, running in the YARN application with ID yarnAppId, and return the path of the created savepoint.

    Everything else is the same as described in the above Trigger a Savepoint section.
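
    For illustration, a hypothetical invocation (the job ID and YARN application ID below are placeholders, not real values):

        # Savepoint a job running in a YARN session; both IDs are hypothetical
        ./bin/flink savepoint 5e20cb6b0f357591171dfcca2eea09de \
            hdfs:///flink/savepoints -yid application_1580000000000_0001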

    Stop

    Use the stop command to gracefully stop a running streaming job with a savepoint.

        ./bin/flink stop [-p targetDirectory] [-d] <jobID>

    A “stop” call is a more graceful way of stopping a running streaming job, as the “stop” signal flows from source to sink. When the user requests to stop a job, all sources will be requested to send the last checkpoint barrier that will trigger a savepoint, and after the successful completion of that savepoint, they will finish by calling their cancel() method. If the -d flag is specified, a MAX_WATERMARK will be emitted before the last checkpoint barrier. This causes all registered event-time timers to fire, thus flushing out any state that is waiting for a specific watermark, e.g. windows. The job will keep running until all sources have properly shut down, which allows the job to finish processing all in-flight data.
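
    As a concrete sketch (the job ID and target directory below are hypothetical):

        # Drain event time (-d) and write the savepoint to an explicit target
        ./bin/flink stop -p hdfs:///flink/savepoints -d 5e20cb6b0f357591171dfcca2eea09de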

    Cancel with a savepoint (deprecated)

    You can atomically trigger a savepoint and cancel a job.

        ./bin/flink cancel -s [savepointDirectory] <jobID>

    If no savepoint directory is specified, a default savepoint directory needs to be configured for the Flink installation (see Savepoints).

    The job will only be cancelled if the savepoint succeeds.

    Note: Cancelling a job with savepoint is deprecated. Use "stop" instead.

    Restore a savepoint

        ./bin/flink run -s <savepointPath> ...

    The run command has a savepoint flag (-s) for submitting a job that restores its state from a savepoint. The savepoint path is returned by the savepoint trigger command.

    By default, we try to match all savepoint state to the job being submitted. If you want to allow skipping savepoint state that cannot be restored with the new job, you can set the allowNonRestoredState flag. You need to allow this if an operator that was part of the program when the savepoint was triggered has been removed from your program and you still want to use the savepoint.

        ./bin/flink run -s <savepointPath> -n ...

    This is useful if your program dropped an operator that was part of the savepoint.
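
    Putting the two flags together, a minimal sketch that reuses the example savepoint path and example JAR from elsewhere on this page:

        # Resubmit from a savepoint, skipping state of operators that no longer exist
        ./bin/flink run -s hdfs:///flink/savepoint-1537 -n ./examples/batch/WordCount.jar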

    Dispose a savepoint

        ./bin/flink savepoint -d <savepointPath>

    Disposes the savepoint at the given path. The savepoint path is returned by the savepoint trigger command.

    If you use custom state instances (for example custom reducing state or RocksDB state), you have to specify the path to the program JAR with which the savepoint was triggered in order to dispose the savepoint with the user code class loader:

        ./bin/flink savepoint -d <savepointPath> -j <jarFile>

    Otherwise, you will run into a ClassNotFoundException.
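
    As a hypothetical example (both paths below are placeholders):

        # Dispose a savepoint using the user code class loader of the given JAR
        ./bin/flink savepoint -d hdfs:///flink/savepoint-1537 -j ./target/my-program.jar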

    Usage

    The command line syntax is as follows:

    ./flink <ACTION> [OPTIONS] [ARGUMENTS]

    The following actions are available:

    Action "run" compiles and runs a program.

      Syntax: run [OPTIONS] <jar-file> <arguments>
      "run" action options:
         -c,--class <classname>               Class with the program entry point
                                              ("main()" method or "getPlan()"
                                              method). Only needed if the JAR file
                                              does not specify the class in its
                                              manifest.
         -C,--classpath <url>                 Adds a URL to each user code
                                              classloader on all nodes in the
                                              cluster. The paths must specify a
                                              protocol (e.g. file://) and be
                                              accessible on all nodes (e.g. by means
                                              of an NFS share). You can use this
                                              option multiple times for specifying
                                              more than one URL. The protocol must
                                              be supported by the {@link
                                              java.net.URLClassLoader}.
         -d,--detached                        If present, runs the job in detached
                                              mode
         -n,--allowNonRestoredState           Allow to skip savepoint state that
                                              cannot be restored. You need to allow
                                              this if you removed an operator from
                                              your program that was part of the
                                              program when the savepoint was
                                              triggered.
         -p,--parallelism <parallelism>       The parallelism with which to run the
                                              program. Optional flag to override the
                                              default value specified in the
                                              configuration.
         -py,--python <python-file>           Specifies the entry point of the
                                              Python job. Dependent resource files
                                              can be specified via `--pyFiles`.
         -pyfs,--pyFiles <python-files>       Specifies custom Python files that the
                                              Python job depends on. Multiple files
                                              can be separated by commas (,). Common
                                              Python resource files such as
                                              .py/.egg/.zip are supported. (e.g.
                                              --pyFiles file:///tmp/myresource.zip
                                              ,hdfs:///$namenode_address/myresource2.zip)
         -pym,--pyModule <python-module>      Specifies the module entry point of
                                              the Python program. This option must
                                              be used together with `--pyFiles`.
         -q,--sysoutLogging                   If present, suppress logging output to
                                              standard out.
         -s,--fromSavepoint <savepointPath>   Path to a savepoint to restore the job
                                              from (for example
                                              hdfs:///flink/savepoint-1537).
         -sae,--shutdownOnAttachedExit        If the job is submitted in attached
                                              mode, perform a best-effort cluster
                                              shutdown when the CLI is terminated
                                              abruptly, e.g., in response to a user
                                              interrupt, such as typing Ctrl + C.
      Options for yarn-cluster mode:
         -d,--detached                        If present, runs the job in detached
                                              mode
         -m,--jobmanager <arg>                Address of the JobManager (master) to
                                              which to connect. Use this flag to
                                              connect to a different JobManager than
                                              the one specified in the
                                              configuration.
         -sae,--shutdownOnAttachedExit        If the job is submitted in attached
                                              mode, perform a best-effort cluster
                                              shutdown when the CLI is terminated
                                              abruptly, e.g., in response to a user
                                              interrupt, such as typing Ctrl + C.
         -yat,--yarnapplicationType <arg>     Set a custom application type for the
                                              application on YARN
         -yD <property=value>                 Use value for given property
         -yd,--yarndetached                   If present, runs the job in detached
                                              mode (deprecated; use non-YARN
                                              specific option instead)
         -yh,--yarnhelp                       Help for the Yarn session CLI.
         -yid,--yarnapplicationId <arg>       Attach to running YARN session
         -yj,--yarnjar <arg>                  Path to Flink jar file
         -yjm,--yarnjobManagerMemory <arg>    Memory for JobManager Container with
                                              optional unit (default: MB)
         -yn,--yarncontainer <arg>            Number of YARN containers to allocate
                                              (= number of TaskManagers)
         -ynm,--yarnname <arg>                Set a custom name for the application
                                              on YARN
         -yq,--yarnquery                      Display available YARN resources
                                              (memory, cores)
         -yqu,--yarnqueue <arg>               Specify YARN queue.
         -ys,--yarnslots <arg>                Number of slots per TaskManager
         -yst,--yarnstreaming                 Start Flink in streaming mode
         -yt,--yarnship <arg>                 Ship files in the specified directory
                                              (t for transfer), multiple options are
                                              supported.
         -ytm,--yarntaskManagerMemory <arg>   Memory per TaskManager Container with
                                              optional unit (default: MB)
         -yz,--yarnzookeeperNamespace <arg>   Namespace to create the Zookeeper
                                              sub-paths for high availability mode
         -ynl,--yarnnodeLabel <arg>           Specify YARN node label for the YARN
                                              application
         -z,--zookeeperNamespace <arg>        Namespace to create the Zookeeper
                                              sub-paths for high availability mode
      Options for default mode:
         -m,--jobmanager <arg>                Address of the JobManager (master) to
                                              which to connect. Use this flag to
                                              connect to a different JobManager than
                                              the one specified in the
                                              configuration.
         -z,--zookeeperNamespace <arg>        Namespace to create the Zookeeper
                                              sub-paths for high availability mode

    Action "info" shows the optimized execution plan of the program (JSON).

      Syntax: info [OPTIONS] <jar-file> <arguments>
      "info" action options:
         -c,--class <classname>               Class with the program entry point
                                              ("main()" method or "getPlan()"
                                              method). Only needed if the JAR file
                                              does not specify the class in its
                                              manifest.
         -p,--parallelism <parallelism>       The parallelism with which to run the
                                              program. Optional flag to override the
                                              default value specified in the
                                              configuration.

    Action "list" lists running and scheduled programs.

      Syntax: list [OPTIONS]
      "list" action options:
         -r,--running                         Show only running programs and their
                                              JobIDs
         -s,--scheduled                       Show only scheduled programs and their
                                              JobIDs
      Options for yarn-cluster mode:
         -m,--jobmanager <arg>                Address of the JobManager (master) to
                                              which to connect. Use this flag to
                                              connect to a different JobManager than
                                              the one specified in the
                                              configuration.
         -yid,--yarnapplicationId <arg>       Attach to running YARN session
         -z,--zookeeperNamespace <arg>        Namespace to create the Zookeeper
                                              sub-paths for high availability mode
      Options for default mode:
         -m,--jobmanager <arg>                Address of the JobManager (master) to
                                              which to connect. Use this flag to
                                              connect to a different JobManager than
                                              the one specified in the
                                              configuration.
         -z,--zookeeperNamespace <arg>        Namespace to create the Zookeeper
                                              sub-paths for high availability mode

    Action "stop" stops a running program with a savepoint (streaming jobs only).

      Syntax: stop [OPTIONS] <Job ID>
      "stop" action options:
         -d,--drain                           Send MAX_WATERMARK before taking the
                                              savepoint and stopping the pipeline.
         -p,--savepointPath <savepointPath>   Path to the savepoint (for example
                                              hdfs:///flink/savepoint-1537). If no
                                              directory is specified, the configured
                                              default will be used
                                              ("state.savepoints.dir").
      Options for yarn-cluster mode:
         -m,--jobmanager <arg>                Address of the JobManager (master) to
                                              which to connect. Use this flag to
                                              connect to a different JobManager than
                                              the one specified in the
                                              configuration.
         -yid,--yarnapplicationId <arg>       Attach to running YARN session
         -z,--zookeeperNamespace <arg>        Namespace to create the Zookeeper
                                              sub-paths for high availability mode
      Options for default mode:
         -m,--jobmanager <arg>                Address of the JobManager (master) to
                                              which to connect. Use this flag to
                                              connect to a different JobManager than
                                              the one specified in the
                                              configuration.
         -z,--zookeeperNamespace <arg>        Namespace to create the Zookeeper
                                              sub-paths for high availability mode

    Action "cancel" cancels a running program.

      Syntax: cancel [OPTIONS] <Job ID>
      "cancel" action options:
         -s,--withSavepoint <targetDirectory> **DEPRECATION WARNING**: Cancelling a
                                              job with savepoint is deprecated. Use
                                              "stop" instead. Trigger savepoint and
                                              cancel job. The target directory is
                                              optional. If no directory is
                                              specified, the configured default
                                              directory (state.savepoints.dir) is
                                              used.
      Options for yarn-cluster mode:
         -m,--jobmanager <arg>                Address of the JobManager (master) to
                                              which to connect. Use this flag to
                                              connect to a different JobManager than
                                              the one specified in the
                                              configuration.
         -yid,--yarnapplicationId <arg>       Attach to running YARN session
         -z,--zookeeperNamespace <arg>        Namespace to create the Zookeeper
                                              sub-paths for high availability mode
      Options for default mode:
         -m,--jobmanager <arg>                Address of the JobManager (master) to
                                              which to connect. Use this flag to
                                              connect to a different JobManager than
                                              the one specified in the
                                              configuration.
         -z,--zookeeperNamespace <arg>        Namespace to create the Zookeeper
                                              sub-paths for high availability mode

    Action "savepoint" triggers savepoints for a running job or disposes existing ones.

      Syntax: savepoint [OPTIONS] <Job ID> [<target directory>]
      "savepoint" action options:
         -d,--dispose <arg>                   Path of savepoint to dispose.
         -j,--jarfile <jarfile>               Flink program JAR file.
      Options for yarn-cluster mode:
         -m,--jobmanager <arg>                Address of the JobManager (master) to
                                              which to connect. Use this flag to
                                              connect to a different JobManager than
                                              the one specified in the
                                              configuration.
         -yid,--yarnapplicationId <arg>       Attach to running YARN session
         -z,--zookeeperNamespace <arg>        Namespace to create the Zookeeper
                                              sub-paths for high availability mode
      Options for default mode:
         -m,--jobmanager <arg>                Address of the JobManager (master) to
                                              which to connect. Use this flag to
                                              connect to a different JobManager than
                                              the one specified in the
                                              configuration.
         -z,--zookeeperNamespace <arg>        Namespace to create the Zookeeper
                                              sub-paths for high availability mode