
Jupyter Notebook Basics (6): Command-Line Help for jupyter notebook

The current version of Jupyter Notebook is 6.3.0.

The jupyter command

The jupyter command is the namespace for all commands in the Jupyter project; running jupyter on its own does nothing useful.

The jupyter command has the form: jupyter <subcommand> [options]

The subcommands of jupyter are: bundlerextension console kernel kernelspec lab labextension labhub migrate nbclassic nbconvert nbextension notebook qtconsole run script server serverextension troubleshoot trust

Each subcommand has a corresponding jupyter-<subcommand> executable in the Scripts directory of the Python interpreter; for example, the notebook subcommand corresponds to the file jupyter-notebook.exe.

The options of the jupyter command are:

  • -h, --help: show the help message.

  • --version: show the versions of all installed Jupyter components.

  • --config-dir: print the configuration directory path.

  • --data-dir: print the data directory path.

  • --runtime-dir: print the runtime directory path.

  • --paths: print all Jupyter directories and search paths.

  • --json: print all Jupyter directories and search paths as JSON.
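As a quick sketch, the informational options can be run directly from a shell; the exact paths printed depend on your platform and installation:

```shell
# Print the configuration, data, and runtime directories individually
jupyter --config-dir
jupyter --data-dir
jupyter --runtime-dir

# Print all Jupyter directories and search paths, then the same as JSON
jupyter --paths
jupyter --paths --json
```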

The jupyter notebook command

  • jupyter notebook [filename.ipynb]: starts a Jupyter Notebook server, by default on port 8888. If 8888 is already in use (for example, another Jupyter Notebook server is running), the port number is incremented until a free port is found. With the optional filename.ipynb argument, filename.ipynb is opened in the editor when the server starts.
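For example (analysis.ipynb is a hypothetical filename used purely for illustration):

```shell
# Start the server in the current directory; a browser opens on port 8888
jupyter notebook

# Start the server and open analysis.ipynb in the editor
jupyter notebook analysis.ipynb

# Start on an explicit port without opening a browser
jupyter notebook --port 9999 --no-browser
```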

jupyter notebook command options

Output of jupyter notebook --help:

Usage: jupyter notebook [options]

  • --version: show the Jupyter Notebook version.
  • --debug: set the log level to logging.DEBUG (the most verbose output).
    Equivalent to the configurable-class option [--Application.log_level=10]
  • --generate-config: generate the default configuration file.
    Equivalent to the configurable-class option [--JupyterApp.generate_config=True]
  • -y: answer yes to all prompts.
    Equivalent to the configurable-class option [--JupyterApp.answer_yes=True]
  • --no-browser: start Jupyter Notebook without opening the web client.
    Equivalent to the configurable-class option [--NotebookApp.open_browser=False]
  • --pylab: disabled. Use the magic commands %pylab or %matplotlib to enable matplotlib support in the notebook; current notebooks enable matplotlib support by default.
    Equivalent to the configurable-class option [--NotebookApp.pylab=warn]
  • --no-mathjax: disable MathJax. MathJax is a JavaScript library for rendering math/LaTeX formulas; it is quite large and may affect performance. When disabled, formulas are not rendered.
    Equivalent to the configurable-class option [--NotebookApp.enable_mathjax=False]
  • --allow-root: allow the notebook to run as the root user.
    Equivalent to the configurable-class option [--NotebookApp.allow_root=True]
  • --autoreload: automatically reload the web client and re-import Python packages when source files change.
    Equivalent to the configurable-class option [--NotebookApp.autoreload=True]
  • --script: deprecated.
    Equivalent to the configurable-class option [--FileContentsManager.save_script=True]
  • --no-script: deprecated.
    Equivalent to the configurable-class option [--FileContentsManager.save_script=False]
  • --log-level=: set the log level by integer value or name. Choices are [0, 10, 20, 30, 40, 50, 'DEBUG', 'INFO', 'WARN', 'ERROR', 'CRITICAL']; the default is 30.
    Equivalent to the configurable-class option [--Application.log_level]
  • --config=: the full path of a configuration file. Default: ''
    Equivalent to the configurable-class option [--JupyterApp.config_file]
  • --ip=: the IP address the Jupyter Notebook server listens on. Default: 'localhost'
    Equivalent to the configurable-class option [--NotebookApp.ip]
  • --port=: the port the Jupyter Notebook server listens on (environment variable JUPYTER_PORT). Default: 8888
    Equivalent to the configurable-class option [--NotebookApp.port]
  • --port-retries=: the number of additional ports to try when the default port is occupied (environment variable JUPYTER_PORT_RETRIES). Default: 50
    Equivalent to the configurable-class option [--NotebookApp.port_retries]
  • --sock=: the UNIX socket the Jupyter Notebook server listens on. Default: ''
    Equivalent to the configurable-class option [--NotebookApp.sock]
  • --sock-mode=: the permissions mode used when creating the UNIX socket. Default: '0600'
    Equivalent to the configurable-class option [--NotebookApp.sock_mode]
  • --transport=: the transport mode; one of ['tcp', 'ipc'] (case-insensitive). Default: 'tcp'
    Equivalent to the configurable-class option [--KernelManager.transport]
  • --keyfile=: the full path to the private key file for SSL/TLS. Default: ''
    Equivalent to the configurable-class option [--NotebookApp.keyfile]
  • --certfile=: the full path to the certificate file for SSL/TLS. Default: ''
    Equivalent to the configurable-class option [--NotebookApp.certfile]
  • --client-ca=: the full path to the CA certificate for SSL/TLS client authentication. Default: ''
    Equivalent to the configurable-class option [--NotebookApp.client_ca]
  • --notebook-dir=: the working directory of the Jupyter Notebook server. Default: '', i.e. the current directory of the command line.
    Equivalent to the configurable-class option [--NotebookApp.notebook_dir]
  • --browser=: the browser used to open the Jupyter Notebook web client. If unspecified, the default browser is chosen via the webbrowser standard library, which honors the BROWSER environment variable. Default: ''
    Equivalent to the configurable-class option [--NotebookApp.browser]
  • --pylab=: disabled. Use the magic commands %pylab or %matplotlib to enable matplotlib support in the notebook. Default: 'disabled'
    Equivalent to the configurable-class option [--NotebookApp.pylab]
  • --gateway-url=: the gateway URL; the Jupyter Notebook server will act as a proxy for it (environment variable JUPYTER_GATEWAY_URL). Default: None
    Equivalent to the configurable-class option [--GatewayClient.url]

jupyter notebook subcommands

The jupyter notebook command also has three subcommands.

  • jupyter notebook list: list the running Jupyter Notebook servers.
  • jupyter notebook stop [portid]: stop a running Jupyter Notebook server. By default the server on port 8888 is stopped; the optional portid argument stops the server running on the given port, e.g. jupyter notebook stop 8889.
  • jupyter notebook password: set a password for the Jupyter Notebook server. Once a password is set, the web client requires password authentication after startup.
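A typical session with the three subcommands might look like this (port 8889 is an example value):

```shell
# List running servers with their URLs and tokens
jupyter notebook list

# Stop the server listening on port 8889
jupyter notebook stop 8889

# Interactively set a password (its hash is stored in the config directory)
jupyter notebook password
```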

jupyter notebook list command options

Output of jupyter notebook list --help:

Usage: jupyter notebook list [options]

  • --jsonlist: output the results as JSON.
    Equivalent to the configurable-class option [--NbserverListApp.jsonlist=True]
  • --json: output the results as JSON, one server per line.
    Equivalent to the configurable-class option [--NbserverListApp.json=True]
  • --log-level=: set the log level by integer value or name. Choices are [0, 10, 20, 30, 40, 50, 'DEBUG', 'INFO', 'WARN', 'ERROR', 'CRITICAL']; the default is 30.
    Equivalent to the configurable-class option [--Application.log_level]
  • --config=: the full path of a configuration file. Default: ''
    Equivalent to the configurable-class option [--JupyterApp.config_file]

jupyter notebook stop command options

Output of jupyter notebook stop --help:

Usage: jupyter notebook stop [options]

  • --debug: set the log level to logging.DEBUG (the most verbose output).
    Equivalent to the configurable-class option [--Application.log_level=10]
  • --generate-config: generate the default configuration file.
    Equivalent to the configurable-class option [--JupyterApp.generate_config=True]
  • -y: answer yes to all prompts.
    Equivalent to the configurable-class option [--JupyterApp.answer_yes=True]
  • --log-level=: set the log level by integer value or name. Choices are [0, 10, 20, 30, 40, 50, 'DEBUG', 'INFO', 'WARN', 'ERROR', 'CRITICAL']; the default is 30.
    Equivalent to the configurable-class option [--Application.log_level]
  • --config=: the full path of a configuration file. Default: ''
    Equivalent to the configurable-class option [--JupyterApp.config_file]

jupyter notebook password command options

Output of jupyter notebook password --help:

Usage: jupyter notebook password [options]

  • --debug: set the log level to logging.DEBUG (the most verbose output).
    Equivalent to the configurable-class option [--Application.log_level=10]
  • --generate-config: generate the default configuration file.
    Equivalent to the configurable-class option [--JupyterApp.generate_config=True]
  • -y: answer yes to all prompts.
    Equivalent to the configurable-class option [--JupyterApp.answer_yes=True]
  • --log-level=: set the log level by integer value or name. Choices are [0, 10, 20, 30, 40, 50, 'DEBUG', 'INFO', 'WARN', 'ERROR', 'CRITICAL']; the default is 30.
    Equivalent to the configurable-class option [--Application.log_level]
  • --config=: the full path of a configuration file. Default: ''
    Equivalent to the configurable-class option [--JupyterApp.config_file]
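jupyter notebook password stores only a salted hash of the password, in the type:salt:hashed-password form documented under the server's password option below. The sketch below illustrates that legacy salted-hash scheme using only the standard library; make_passwd and check_passwd are hypothetical helpers written for this illustration, not the notebook.auth API (which provides passwd() and passwd_check()):

```python
import hashlib
import secrets

def make_passwd(passphrase, algorithm="sha1", salt_len=12):
    """Hash a passphrase as 'algorithm:salt:hexdigest' (legacy scheme sketch)."""
    salt = secrets.token_hex(salt_len // 2)  # salt_len hex characters
    h = hashlib.new(algorithm)
    h.update(passphrase.encode("utf-8") + salt.encode("ascii"))
    return ":".join((algorithm, salt, h.hexdigest()))

def check_passwd(passphrase, hashed):
    """Verify a passphrase against a stored 'algorithm:salt:hash' value."""
    algorithm, salt, digest = hashed.split(":")
    h = hashlib.new(algorithm)
    h.update(passphrase.encode("utf-8") + salt.encode("ascii"))
    # Constant-time comparison to avoid timing side channels
    return secrets.compare_digest(h.hexdigest(), digest)

stored = make_passwd("s3cret")
print(stored.split(":")[0])            # sha1
print(check_passwd("s3cret", stored))  # True
print(check_passwd("wrong", stored))   # False
```

Newer notebook releases default to a stronger hash, so treat the sha1 choice here as illustrative only.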

jupyter notebook configurable-class options

Both the jupyter notebook command and its subcommands can be configured further with command options.
The format is: command [options]
The --help (or -h) option shows the command-line help.
The --help-all option shows the full help for the configurable-class options.
The command-line options above are aliases for configurable-class options.
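For instance, an alias option and its full configurable-class form are interchangeable on the command line:

```shell
# These two invocations are equivalent
jupyter notebook --no-browser
jupyter notebook --NotebookApp.open_browser=False

# Full help, including every configurable-class option
jupyter notebook --help-all
```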

application(singletonconfigurable) options
------------------------------------------
--application.log_datefmt=
    the date format used by logging formatters for %(asctime)s
    default: '%Y-%m-%d %H:%M:%S'
--application.log_format=
    the logging format template
    default: '[%(name)s]%(highlevel)s %(message)s'
--application.log_level=
    set the log level by value or name.
    choices: any of [0, 10, 20, 30, 40, 50, 'debug', 'info', 'warn', 'error', 'critical']
    default: 30
--application.show_config=
    instead of starting the application, dump configuration to stdout
    default: false
--application.show_config_json=
    instead of starting the application, dump configuration to stdout (as json)
    default: false
jupyterapp(application) options
-------------------------------
--jupyterapp.answer_yes=
    answer yes to any prompts.
    default: false
--jupyterapp.config_file=
    full path of a config file.
    default: ''
--jupyterapp.config_file_name=
    specify a config file to load.
    default: ''
--jupyterapp.generate_config=
    generate default config file.
    default: false
--jupyterapp.log_datefmt=
    the date format used by logging formatters for %(asctime)s
    default: '%Y-%m-%d %H:%M:%S'
--jupyterapp.log_format=
    the logging format template
    default: '[%(name)s]%(highlevel)s %(message)s'
--jupyterapp.log_level=
    set the log level by value or name.
    choices: any of [0, 10, 20, 30, 40, 50, 'debug', 'info', 'warn', 'error', 'critical']
    default: 30
--jupyterapp.show_config=
    instead of starting the application, dump configuration to stdout
    default: false
--jupyterapp.show_config_json=
    instead of starting the application, dump configuration to stdout (as json)
    default: false
notebookapp(jupyterapp) options
-------------------------------
--notebookapp.allow_credentials=
    set the access-control-allow-credentials: true header
    default: false
--notebookapp.allow_origin=
    set the access-control-allow-origin header
    use '*' to allow any origin to access your server.
    takes precedence over allow_origin_pat.
    default: ''
--notebookapp.allow_origin_pat=
    use a regular expression for the access-control-allow-origin header
    requests from an origin matching the expression will get replies with:
        access-control-allow-origin: origin
    where `origin` is the origin of the request.
    ignored if allow_origin is set.
    default: ''
--notebookapp.allow_password_change=
    allow password to be changed at login for the notebook server.
    while logging in with a token, the notebook server ui will give the
    opportunity to the user to enter a new password at the same time that will
    replace the token login mechanism.
    this can be set to false to prevent changing password from the ui/api.
    default: true
--notebookapp.allow_remote_access=
    allow requests where the host header doesn't point to a local server
    by default, requests get a 403 forbidden response if the 'host' header shows
    that the browser thinks it's on a non-local domain. setting this option to
    true disables this check.
    this protects against 'dns rebinding' attacks, where a remote web server
    serves you a page and then changes its dns to send later requests to a local
    ip, bypassing same-origin checks.
    local ip addresses (such as 127.0.0.1 and ::1) are allowed as local, along
    with hostnames configured in local_hostnames.
    default: false
--notebookapp.allow_root=
    whether to allow the user to run the notebook as root.
    default: false
--notebookapp.answer_yes=
    answer yes to any prompts.
    default: false
--notebookapp.authenticate_prometheus=
    " require authentication to access prometheus metrics.
    default: true
--notebookapp.autoreload=
    reload the webapp when changes are made to any python src files.
    default: false
--notebookapp.base_project_url=
    deprecated use base_url
    default: '/'
--notebookapp.base_url=
    the base url for the notebook server.
    leading and trailing slashes can be omitted, and will automatically be
    added.
    default: '/'
--notebookapp.browser=
    specify what command to use to invoke a web browser when opening the
    notebook. if not specified, the default browser will be determined by the
    `webbrowser` standard library module, which allows setting of the browser
    environment variable to override it.
    default: ''
--notebookapp.certfile=
    the full path to an ssl/tls certificate file.
    default: ''
--notebookapp.client_ca=
    the full path to a certificate authority certificate for ssl/tls client
    authentication.
    default: ''
--notebookapp.config_file=
    full path of a config file.
    default: ''
--notebookapp.config_file_name=
    specify a config file to load.
    default: ''
--notebookapp.config_manager_class=
    the config manager class to use
    default: 'notebook.services.config.manager.configmanager'
--notebookapp.contents_manager_class=
    the notebook manager class to use.
    default: 'notebook.services.contents.largefilemanager.largefilemanager'
--notebookapp.cookie_options==...
    extra keyword arguments to pass to `set_secure_cookie`. see tornado's
    set_secure_cookie docs for details.
    default: {}
--notebookapp.cookie_secret=
    the random bytes used to secure cookies. by default this is a new random
    number every time you start the notebook. set it to a value in a config file
    to enable logins to persist across server sessions.
    note: cookie secrets should be kept private, do not share config files with
    cookie_secret stored in plaintext (you can read the value from a file).
    default: b''
--notebookapp.cookie_secret_file=
    the file where the cookie secret is stored.
    default: ''
--notebookapp.custom_display_url=
    override url shown to users.
    replace actual url, including protocol, address, port and base url, with the
    given value when displaying url to the users. do not change the actual
    connection url. if authentication token is enabled, the token is added to
    the custom url automatically.
    this option is intended to be used when the url to display to the user
    cannot be determined reliably by the jupyter notebook server (proxified or
    containerized setups for example).
    default: ''
--notebookapp.default_url=
    the default url to redirect to from `/`
    default: '/tree'
--notebookapp.disable_check_xsrf=
    disable cross-site-request-forgery protection
    jupyter notebook 4.3.1 introduces protection from cross-site request
    forgeries, requiring api requests to either:
    - originate from pages served by this server (validated with xsrf cookie and
    token), or - authenticate with a token
    some anonymous compute resources still desire the ability to run code,
    completely without authentication. these services can disable all
    authentication and security checks, with the full knowledge of what that
    implies.
    default: false
--notebookapp.enable_mathjax=
    whether to enable mathjax for typesetting math/tex
    mathjax is the javascript library jupyter uses to render math/latex. it is
    very large, so you may want to disable it if you have a slow internet
    connection, or for offline use of the notebook.
    when disabled, equations etc. will appear as their untransformed tex source.
    default: true
--notebookapp.extra_nbextensions_path=...
    extra paths to look for javascript notebook extensions
    default: []
--notebookapp.extra_services=...
    handlers that should be loaded at higher priority than the default services
    default: []
--notebookapp.extra_static_paths=...
    extra paths to search for serving static files.
    this allows adding javascript/css to be available from the notebook server
    machine, or overriding individual files in the ipython
    default: []
--notebookapp.extra_template_paths=...
    extra paths to search for serving jinja templates.
    can be used to override templates from notebook.templates.
    default: []
--notebookapp.file_to_run=
    default: ''
--notebookapp.generate_config=
    generate default config file.
    default: false
--notebookapp.get_secure_cookie_kwargs==...
    extra keyword arguments to pass to `get_secure_cookie`. see tornado's
    get_secure_cookie docs for details.
    default: {}
--notebookapp.ignore_minified_js=
    deprecated: use minified js file or not, mainly use during dev to avoid js
    recompilation
    default: false
--notebookapp.iopub_data_rate_limit=
    (bytes/sec) maximum rate at which stream output can be sent on iopub before
    they are limited.
    default: 1000000
--notebookapp.iopub_msg_rate_limit=
    (msgs/sec) maximum rate at which messages can be sent on iopub before they
    are limited.
    default: 1000
--notebookapp.ip=
    the ip address the notebook server will listen on.
    default: 'localhost'
--notebookapp.jinja_environment_options==...
    supply extra arguments that will be passed to jinja environment.
    default: {}
--notebookapp.jinja_template_vars==...
    extra variables to supply to jinja templates when rendering.
    default: {}
--notebookapp.kernel_manager_class=
    the kernel manager class to use.
    default: 'notebook.services.kernels.kernelmanager.mappingkernelmanager'
--notebookapp.kernel_spec_manager_class=
    the kernel spec manager class to use. should be a subclass of
    `jupyter_client.kernelspec.kernelspecmanager`.
    the api of kernelspecmanager is provisional and might change without warning
    between this version of jupyter and the next stable one.
    default: 'jupyter_client.kernelspec.kernelspecmanager'
--notebookapp.keyfile=
    the full path to a private key file for usage with ssl/tls.
    default: ''
--notebookapp.local_hostnames=...
    hostnames to allow as local when allow_remote_access is false.
    local ip addresses (such as 127.0.0.1 and ::1) are automatically accepted as
    local as well.
    default: ['localhost']
--notebookapp.log_datefmt=
    the date format used by logging formatters for %(asctime)s
    default: '%Y-%m-%d %H:%M:%S'
--notebookapp.log_format=
    the logging format template
    default: '[%(name)s]%(highlevel)s %(message)s'
--notebookapp.log_json=
    set to true to enable json formatted logs. run "pip install notebook[json-
    logging]" to install the required dependent packages. can also be set using
    the environment variable jupyter_enable_json_logging=true.
    default: false
--notebookapp.log_level=
    set the log level by value or name.
    choices: any of [0, 10, 20, 30, 40, 50, 'debug', 'info', 'warn', 'error', 'critical']
    default: 30
--notebookapp.login_handler_class=
    the login handler class to use.
    default: 'notebook.auth.login.loginhandler'
--notebookapp.logout_handler_class=
    the logout handler class to use.
    default: 'notebook.auth.logout.logouthandler'
--notebookapp.mathjax_config=
    the mathjax.js configuration file that is to be used.
    default: 'tex-ams-mml_htmlormml-full,safe'
--notebookapp.mathjax_url=
    a custom url for mathjax.js. should be in the form of a case-sensitive url
    to mathjax, for example:  /static/components/mathjax/mathjax.js
    default: ''
--notebookapp.max_body_size=
    sets the maximum allowed size of the client request body, specified in the
    content-length request header field. if the size in a request exceeds the
    configured value, a malformed http message is returned to the client.
    note: max_body_size is applied even in streaming mode.
    default: 536870912
--notebookapp.max_buffer_size=
    gets or sets the maximum amount of memory, in bytes, that is allocated for
    use by the buffer manager.
    default: 536870912
--notebookapp.min_open_files_limit=
    gets or sets a lower bound on the open file handles process resource limit.
    this may need to be increased if you run into an oserror: [errno 24] too
    many open files. this is not applicable when running on windows.
    default: 0
--notebookapp.nbserver_extensions==...
    dict of python modules to load as notebook server extensions. entry values
    can be used to enable and disable the loading of the extensions. the
    extensions will be loaded in alphabetical order.
    default: {}
--notebookapp.notebook_dir=
    the directory to use for notebooks and kernels.
    default: ''
--notebookapp.open_browser=
    whether to open in a browser after starting. the specific browser used is
    platform dependent and determined by the python standard library
    `webbrowser` module, unless it is overridden using the --browser
    (notebookapp.browser) configuration option.
    default: true
--notebookapp.password=
    hashed password to use for web authentication.
    to generate, type in a python/ipython shell:
      from notebook.auth import passwd; passwd()
    the string should be of the form type:salt:hashed-password.
    default: ''
--notebookapp.password_required=
    forces users to use a password for the notebook server. this is useful in a
    multi user environment, for instance when everybody in the lan can access
    each other's machine through ssh.
    in such a case, serving the notebook server on localhost is not secure since
    any user can connect to the notebook server via ssh.
    default: false
--notebookapp.port=
    the port the notebook server will listen on (env: jupyter_port).
    default: 8888
--notebookapp.port_retries=
    the number of additional ports to try if the specified port is not available
    (env: jupyter_port_retries).
    default: 50
--notebookapp.pylab=
    disabled: use %pylab or %matplotlib in the notebook to enable matplotlib.
    default: 'disabled'
--notebookapp.quit_button=
    if true, display a button in the dashboard to quit (shutdown the notebook
    server).
    default: true
--notebookapp.rate_limit_window=
    (sec) time window used to check the message and data rate limits.
    default: 3
--notebookapp.reraise_server_extension_failures=
    reraise exceptions encountered loading server extensions?
    default: false
--notebookapp.server_extensions=...
    deprecated use the nbserver_extensions dict instead
    default: []
--notebookapp.session_manager_class=
    the session manager class to use.
    default: 'notebook.services.sessions.sessionmanager.sessionmanager'
--notebookapp.show_config=
    instead of starting the application, dump configuration to stdout
    default: false
--notebookapp.show_config_json=
    instead of starting the application, dump configuration to stdout (as json)
    default: false
--notebookapp.shutdown_no_activity_timeout=
    shut down the server after n seconds with no kernels or terminals running
    and no activity. this can be used together with culling idle kernels
    (mappingkernelmanager.cull_idle_timeout) to shutdown the notebook server
    when it's not in use. this is not precisely timed: it may shut down up to a
    minute later. 0 (the default) disables this automatic shutdown.
    default: 0
--notebookapp.sock=
    the unix socket the notebook server will listen on.
    default: ''
--notebookapp.sock_mode=
    the permissions mode for unix socket creation (default: 0600).
    default: '0600'
--notebookapp.ssl_options==...
    supply ssl options for the tornado httpserver. see the tornado docs for
    details.
    default: {}
--notebookapp.terminado_settings==...
    supply overrides for terminado. currently only supports "shell_command". on
    unix, if "shell_command" is not provided, a non-login shell is launched by
    default when the notebook server is connected to a terminal, a login shell
    otherwise.
    default: {
  }
--notebookapp.terminals_enabled=
    set to false to disable terminals.
    this does *not* make the notebook server more secure by itself. anything the
    user can do in a terminal, they can also do in a notebook.
    terminals may also be automatically disabled if the terminado package is not
    available.
    default: true
--notebookapp.token=
    token used for authenticating first-time connections to the server.
    the token can be read from the file referenced by jupyter_token_file or set
    directly with the jupyter_token environment variable.
    when no password is enabled, the default is to generate a new, random token.
    setting to an empty string disables authentication altogether, which is not
    recommended.
    default: ''
--notebookapp.tornado_settings==...
    supply overrides for the tornado.web.application that the jupyter notebook
    uses.
    default: {
  }
--notebookapp.trust_xheaders=
    whether or not to trust x-scheme/x-forwarded-proto and x-real-
    ip/x-forwarded-for headers sent by the upstream reverse proxy. necessary if
    the proxy handles ssl
    default: false
--notebookapp.use_redirect_file=
    disable launching browser by redirect file
    for versions of notebook > 5.7.2, a security feature was added that
    prevented the authentication token used to launch the browser from being
    visible. this feature makes it difficult for other users on a multi-user
    system to run code in your jupyter session as you.
    however, in some environments (like windows subsystem for linux (wsl) and
    chromebooks), launching a browser using a redirect file can lead to the
    browser failing to load. this is because of the difference in file
    structures/paths between the runtime and the browser.
    setting this to false will disable this behavior, allowing the
    browser to launch by using a url and visible token (as before).
    default: true
--notebookapp.webapp_settings==...
    deprecated, use tornado_settings
    default: {
  }
--notebookapp.webbrowser_open_new=
    specify where to open the notebook on startup. this is the `new` argument
    passed to the standard library method `webbrowser.open`. the behaviour is
    not guaranteed, but depends on browser support. valid values are:
     - 2 opens a new tab,
     - 1 opens a new window,
     - 0 opens in an existing window.
    see the `webbrowser.open` documentation for details.
    default: 2
--notebookapp.websocket_compression_options=
    set the tornado compression options for websocket connections.
    this value will be returned from
    :meth:`websockethandler.get_compression_options`. none (default) will
    disable compression. a dict (even an empty one) will enable compression.
    see the tornado docs for websockethandler.get_compression_options for
    details.
    default: none
--notebookapp.websocket_url=
    the base url for websockets, if it differs from the http server (hint: it
    almost certainly doesn't).
    should be in the form of an http origin: ws[s]://hostname[:port]
    default: ''
connectionfilemixin(loggingconfigurable) options
------------------------------------------------
--connectionfilemixin.connection_file=
    json file in which to store connection info [default: kernel-.json]
    this file will contain the ip, ports, and authentication key needed to
    connect clients to this kernel. by default, this file will be created in the
    security dir of the current profile, but can be specified by absolute path.
    default: ''
--connectionfilemixin.control_port=
    set the control (router) port [default: random]
    default: 0
--connectionfilemixin.hb_port=
    set the heartbeat port [default: random]
    default: 0
--connectionfilemixin.iopub_port=
    set the iopub (pub) port [default: random]
    default: 0
--connectionfilemixin.ip=
    set the kernel's ip address [default localhost]. if the ip address is
    something other than localhost, then consoles on other machines will be able
    to connect to the kernel, so be careful!
    default: ''
--connectionfilemixin.shell_port=
    set the shell (router) port [default: random]
    default: 0
--connectionfilemixin.stdin_port=
    set the stdin (router) port [default: random]
    default: 0
--connectionfilemixin.transport=
    choices: any of ['tcp', 'ipc'] (case-insensitive)
    default: 'tcp'
kernelmanager(connectionfilemixin) options
------------------------------------------
--kernelmanager.autorestart=
    should we autorestart the kernel if it dies.
    default: true
--kernelmanager.connection_file=
    json file in which to store connection info [default: kernel-.json]
    this file will contain the ip, ports, and authentication key needed to
    connect clients to this kernel. by default, this file will be created in the
    security dir of the current profile, but can be specified by absolute path.
    default: ''
--kernelmanager.control_port=
    set the control (router) port [default: random]
    default: 0
--kernelmanager.hb_port=
    set the heartbeat port [default: random]
    default: 0
--kernelmanager.iopub_port=
    set the iopub (pub) port [default: random]
    default: 0
--kernelmanager.ip=
    set the kernel's ip address [default localhost]. if the ip address is
    something other than localhost, then consoles on other machines will be able
    to connect to the kernel, so be careful!
    default: ''
--kernelmanager.kernel_cmd=...
    deprecated: use kernel_name instead.
    the popen command to launch the kernel. override this if you have a custom
    kernel. if kernel_cmd is specified in a configuration file, jupyter does not
    pass any arguments to the kernel, because it cannot make any assumptions
    about the arguments that the kernel understands. in particular, this means
    that the kernel does not receive the option --debug if it is given on the
    jupyter command line.
    default: []
--kernelmanager.shell_port=
    set the shell (router) port [default: random]
    default: 0
--kernelmanager.shutdown_wait_time=
    time to wait for a kernel to terminate before killing it, in seconds. when a
    shutdown request is initiated, the kernel will immediately be sent an
    interrupt (sigint), followed by a shutdown_request message; after 1/2 of
    `shutdown_wait_time` it will be sent a terminate (sigterm) request, and
    finally at the end of `shutdown_wait_time` it will be killed (sigkill).
    terminate and kill may be equivalent on windows.
    default: 5.0
--kernelmanager.stdin_port=
    set the stdin (router) port [default: random]
    default: 0
--kernelmanager.transport=
    choices: any of ['tcp', 'ipc'] (case-insensitive)
    default: 'tcp'
session(configurable) options
-----------------------------
--session.buffer_threshold=
    threshold (in bytes) beyond which an object's buffer should be extracted to
    avoid pickling.
    default: 1024
--session.check_pid=
    whether to check pid to protect against calls after fork.
    this check can be disabled if fork-safety is handled elsewhere.
    default: true
--session.copy_threshold=
    threshold (in bytes) beyond which a buffer should be sent without copying.
    default: 65536
--session.debug=
    debug output in the session
    default: false
--session.digest_history_size=
    the maximum number of digests to remember.
    the digest history will be culled when it exceeds this value.
    default: 65536
--session.item_threshold=
    the maximum number of items for a container to be introspected for custom
    serialization. containers larger than this are pickled outright.
    default: 64
--session.key=
    execution key, for signing messages.
    default: b''
--session.keyfile=
    path to file containing execution key.
    default: ''
--session.metadata==...
    metadata dictionary, which serves as the default top-level metadata dict for
    each message.
    default: {
  }
--session.packer=
    the name of the packer for serializing messages. should be one of 'json',
    'pickle', or an import name for a custom callable serializer.
    default: 'json'
--session.session=
    the uuid identifying this session.
    default: ''
--session.signature_scheme=
    the digest scheme used to construct the message signatures. must have the
    form 'hmac-hash'.
    default: 'hmac-sha256'
--session.unpacker=
    the name of the unpacker for unserializing messages. only used with custom
    functions for `packer`.
    default: 'json'
--session.username=
    username for the session. default is your system username.
    default: 'username'
multikernelmanager(loggingconfigurable) options
-----------------------------------------------
--multikernelmanager.default_kernel_name=
    the name of the default kernel to start
    default: 'python3'
--multikernelmanager.kernel_manager_class=
    the kernel manager class.  this is configurable to allow subclassing of the
    kernelmanager for customized behavior.
    default: 'jupyter_client.ioloop.ioloopkernelmanager'
--multikernelmanager.shared_context=
    share a single zmq.context to talk to all my kernels
    default: true
mappingkernelmanager(multikernelmanager) options
------------------------------------------------
--mappingkernelmanager.allowed_message_types=...
    white list of allowed kernel message types. when the list is empty, all
    message types are allowed.
    default: []
--mappingkernelmanager.buffer_offline_messages=
    whether messages from kernels whose frontends have disconnected should be
    buffered in-memory. when true (default), messages are buffered and replayed
    on reconnect, avoiding lost messages due to interrupted connectivity.
    disable if long-running kernels will produce too much output while no
    frontends are connected.
    default: true
--mappingkernelmanager.cull_busy=
    whether to consider culling kernels which are busy. only effective if
    cull_idle_timeout > 0.
    default: false
--mappingkernelmanager.cull_connected=
    whether to consider culling kernels which have one or more connections. only
    effective if cull_idle_timeout > 0.
    default: false
--mappingkernelmanager.cull_idle_timeout=
    timeout (in seconds) after which a kernel is considered idle and ready to be
    culled. values of 0 or lower disable culling. very short timeouts may result
    in kernels being culled for users with poor network connections.
    default: 0
--mappingkernelmanager.cull_interval=
    the interval (in seconds) on which to check for idle kernels exceeding the
    cull timeout value.
    default: 300
--MappingKernelManager.default_kernel_name=
    The name of the default kernel to start
    Default: 'python3'
--MappingKernelManager.kernel_info_timeout=
    Timeout for giving up on a kernel (in seconds). On starting and restarting
    kernels, we check whether the kernel is running and responsive by sending
    kernel_info_requests. This sets the timeout in seconds for how long the
    kernel can take before being presumed dead. This affects the
    MappingKernelManager (which handles kernel restarts) and the
    ZMQChannelsHandler (which handles the startup).
    Default: 60
--MappingKernelManager.kernel_manager_class=
    The kernel manager class.  This is configurable to allow subclassing of the
    KernelManager for customized behavior.
    Default: 'jupyter_client.ioloop.IOLoopKernelManager'
--MappingKernelManager.root_dir=
    Default: ''
--MappingKernelManager.shared_context=
    Share a single zmq.Context to talk to all my kernels
    Default: True
KernelSpecManager(LoggingConfigurable) options
----------------------------------------------
--KernelSpecManager.ensure_native_kernel=
    If there is no Python kernelspec registered and the IPython kernel is
    available, ensure it is added to the spec list.
    Default: True
--KernelSpecManager.kernel_spec_class=
    The kernel spec class.  This is configurable to allow subclassing of the
    KernelSpecManager for customized behavior.
    Default: 'jupyter_client.kernelspec.KernelSpec'
--KernelSpecManager.whitelist=...
    Whitelist of allowed kernel names.
    By default, all installed kernels are allowed.
    Default: set()
ContentsManager(LoggingConfigurable) options
--------------------------------------------
--ContentsManager.allow_hidden=
    Allow access to hidden files
    Default: False
--ContentsManager.checkpoints=
    Default: None
--ContentsManager.checkpoints_class=
    Default: 'notebook.services.contents.checkpoints.Checkpoints'
--ContentsManager.checkpoints_kwargs=...
    Default: {}
--ContentsManager.files_handler_class=
    Handler class to use when serving raw file requests.
    Default is a fallback that talks to the ContentsManager API, which may be
    inefficient, especially for large files.
    Local files-based ContentsManagers can use a StaticFileHandler subclass,
    which will be much more efficient.
    Access to these files should be authenticated.
    Default: 'notebook.files.handlers.FilesHandler'
--ContentsManager.files_handler_params=...
    Extra parameters to pass to files_handler_class.
    For example, StaticFileHandlers generally expect a `path` argument
    specifying the root directory from which to serve files.
    Default: {}
--ContentsManager.hide_globs=...
    Glob patterns to hide in file and directory listings.
    Default: ['__pycache__', '*.pyc', '*.pyo', '.DS_Store', '*.so', '*.dyl...
--ContentsManager.pre_save_hook=
    Python callable or importstring thereof
    to be called on a contents model prior to save.
    This can be used to process the structure, such as removing notebook outputs
    or other side effects that should not be saved.
    It will be called as (all arguments passed by keyword)::
        hook(path=path, model=model, contents_manager=self)
    - model: the model to be saved. Includes file contents.
      Modifying this dict will affect the file that is stored.
    - path: the API path of the save destination
    - contents_manager: this ContentsManager instance
    Default: None
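To make the keyword contract above concrete, here is a minimal `pre_save_hook` sketch that strips code-cell outputs before a notebook reaches disk. The function name and the registration line are our own illustration; only the `hook(path=..., model=..., contents_manager=...)` calling convention comes from the help text.

```python
# A minimal pre_save_hook sketch: scrub code-cell outputs so they are
# never written to disk. Register it in jupyter_notebook_config.py with:
#     c.ContentsManager.pre_save_hook = scrub_output_pre_save

def scrub_output_pre_save(model, **kwargs):
    """Clear outputs and execution counts from a notebook model before save."""
    if model.get('type') != 'notebook':
        return  # the hook also fires for plain files; leave those alone
    for cell in model['content']['cells']:
        if cell['cell_type'] == 'code':
            cell['outputs'] = []
            cell['execution_count'] = None
```

Because the hook mutates the model dict in place, the scrubbed version is exactly what gets stored, matching the "modifying this dict will affect the file that is stored" note above.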
--ContentsManager.root_dir=
    Default: '/'
--ContentsManager.untitled_directory=
    The base name used when creating untitled directories.
    Default: 'Untitled Folder'
--ContentsManager.untitled_file=
    The base name used when creating untitled files.
    Default: 'untitled'
--ContentsManager.untitled_notebook=
    The base name used when creating untitled notebooks.
    Default: 'Untitled'
FileManagerMixin(Configurable) options
--------------------------------------
--FileManagerMixin.use_atomic_writing=
    By default, notebooks are first saved to a temporary file and, on success,
    that file replaces the old one. This procedure ('atomic_writing') can
    trigger bugs on file systems without operation-order enforcement (such as
    some networked filesystems). If set to False, the new notebook is written
    directly over the old one, which can fail (e.g. full filesystem or quota).
    Default: True
FileContentsManager(FileManagerMixin, ContentsManager) options
--------------------------------------------------------------
--FileContentsManager.allow_hidden=
    Allow access to hidden files
    Default: False
--FileContentsManager.checkpoints=
    Default: None
--FileContentsManager.checkpoints_class=
    Default: 'notebook.services.contents.checkpoints.Checkpoints'
--FileContentsManager.checkpoints_kwargs=...
    Default: {}
--FileContentsManager.delete_to_trash=
    If True (default), deleting files will send them to the platform's
    trash/recycle bin, where they can be recovered. If False, deleting files
    really deletes them.
    Default: True
--FileContentsManager.files_handler_class=
    Handler class to use when serving raw file requests.
    Default is a fallback that talks to the ContentsManager API, which may be
    inefficient, especially for large files.
    Local files-based ContentsManagers can use a StaticFileHandler subclass,
    which will be much more efficient.
    Access to these files should be authenticated.
    Default: 'notebook.files.handlers.FilesHandler'
--FileContentsManager.files_handler_params=...
    Extra parameters to pass to files_handler_class.
    For example, StaticFileHandlers generally expect a `path` argument
    specifying the root directory from which to serve files.
    Default: {}
--FileContentsManager.hide_globs=...
    Glob patterns to hide in file and directory listings.
    Default: ['__pycache__', '*.pyc', '*.pyo', '.DS_Store', '*.so', '*.dyl...
--FileContentsManager.post_save_hook=
    Python callable or importstring thereof
    to be called on the path of a file just saved.
    This can be used to process the file on disk, such as converting the
    notebook to a script or HTML via nbconvert.
    It will be called as (all arguments passed by keyword)::
        hook(os_path=os_path, model=model, contents_manager=instance)
    - os_path: the filesystem path to the file just written
    - model: the model representing the file
    - contents_manager: this ContentsManager instance
    Default: None
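The "converting the notebook to a script via nbconvert" use case mentioned above can be sketched as a `post_save_hook`. This assumes nbconvert (and the `jupyter` CLI) is installed; the function name is ours, and the registration line belongs in `jupyter_notebook_config.py`.

```python
# A post_save_hook sketch: export a .py script next to every saved notebook.
# Register it with:  c.FileContentsManager.post_save_hook = post_save
import os
from subprocess import check_call

def post_save(model, os_path, contents_manager):
    """Shell out to nbconvert after each notebook save."""
    if model['type'] != 'notebook':
        return  # only act on notebook saves, not plain file saves
    directory, fname = os.path.split(os_path)
    check_call(['jupyter', 'nbconvert', '--to', 'script', fname], cwd=directory)
```

Unlike `pre_save_hook`, which sees the in-memory model, this hook runs after the bytes are on disk, so it is the right place for external tools that read the saved file.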
--FileContentsManager.pre_save_hook=
    Python callable or importstring thereof
    to be called on a contents model prior to save.
    This can be used to process the structure, such as removing notebook outputs
    or other side effects that should not be saved.
    It will be called as (all arguments passed by keyword)::
        hook(path=path, model=model, contents_manager=self)
    - model: the model to be saved. Includes file contents.
      Modifying this dict will affect the file that is stored.
    - path: the API path of the save destination
    - contents_manager: this ContentsManager instance
    Default: None
--FileContentsManager.root_dir=
    Default: ''
--FileContentsManager.save_script=
    DEPRECATED, use post_save_hook. Will be removed in Notebook 5.0
    Default: False
--FileContentsManager.untitled_directory=
    The base name used when creating untitled directories.
    Default: 'Untitled Folder'
--FileContentsManager.untitled_file=
    The base name used when creating untitled files.
    Default: 'untitled'
--FileContentsManager.untitled_notebook=
    The base name used when creating untitled notebooks.
    Default: 'Untitled'
--FileContentsManager.use_atomic_writing=
    By default, notebooks are first saved to a temporary file and, on success,
    that file replaces the old one. This procedure ('atomic_writing') can
    trigger bugs on file systems without operation-order enforcement (such as
    some networked filesystems). If set to False, the new notebook is written
    directly over the old one, which can fail (e.g. full filesystem or quota).
    Default: True
NotebookNotary(LoggingConfigurable) options
-------------------------------------------
--NotebookNotary.algorithm=
    The hashing algorithm used to sign notebooks.
    Choices: any of ['blake2b', 'md5', 'sha3_256', 'sha256', 'sha3_224', 'sha3_512', 'sha384', 'sha512', 'sha1', 'sha224', 'sha3_384', 'blake2s']
    Default: 'sha256'
--NotebookNotary.data_dir=
    The storage directory for notary secret and database.
    Default: ''
--NotebookNotary.db_file=
    The SQLite file in which to store notebook signatures. By default, this will
    be in your Jupyter data directory. You can set it to ':memory:' to disable
    SQLite writing to the filesystem.
    Default: ''
--NotebookNotary.secret=
    The secret key with which notebooks are signed.
    Default: b''
--NotebookNotary.secret_file=
    The file where the secret key is stored.
    Default: ''
--NotebookNotary.store_factory=
    A callable returning the storage backend for notebook signatures. The
    default uses an SQLite database.
    Default: traitlets.Undefined
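These signature options back notebook trust: rich outputs stored in an unsigned notebook are not trusted when it is opened. A notebook can be re-signed from the command line with the `jupyter trust` subcommand (the filename below is a placeholder):

```shell
# Re-sign a notebook so its stored outputs are trusted when opened
jupyter trust my_analysis.ipynb
```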
AsyncMultiKernelManager(MultiKernelManager) options
---------------------------------------------------
--AsyncMultiKernelManager.default_kernel_name=
    The name of the default kernel to start
    Default: 'python3'
--AsyncMultiKernelManager.kernel_manager_class=
    The kernel manager class.  This is configurable to allow subclassing of the
    AsyncKernelManager for customized behavior.
    Default: 'jupyter_client.ioloop.AsyncIOLoopKernelManager'
--AsyncMultiKernelManager.shared_context=
    Share a single zmq.Context to talk to all my kernels
    Default: True
AsyncMappingKernelManager(MappingKernelManager, AsyncMultiKernelManager) options
--------------------------------------------------------------------------------
--AsyncMappingKernelManager.allowed_message_types=...
    White list of allowed kernel message types. When the list is empty, all
    message types are allowed.
    Default: []
--AsyncMappingKernelManager.buffer_offline_messages=
    Whether messages from kernels whose frontends have disconnected should be
    buffered in-memory. When True (default), messages are buffered and replayed
    on reconnect, avoiding lost messages due to interrupted connectivity.
    Disable if long-running kernels will produce too much output while no
    frontends are connected.
    Default: True
--AsyncMappingKernelManager.cull_busy=
    Whether to consider culling kernels which are busy. Only effective if
    cull_idle_timeout > 0.
    Default: False
--AsyncMappingKernelManager.cull_connected=
    Whether to consider culling kernels which have one or more connections. Only
    effective if cull_idle_timeout > 0.
    Default: False
--AsyncMappingKernelManager.cull_idle_timeout=
    Timeout (in seconds) after which a kernel is considered idle and ready to be
    culled. Values of 0 or lower disable culling. Very short timeouts may result
    in kernels being culled for users with poor network connections.
    Default: 0
--AsyncMappingKernelManager.cull_interval=
    The interval (in seconds) on which to check for idle kernels exceeding the
    cull timeout value.
    Default: 300
--AsyncMappingKernelManager.default_kernel_name=
    The name of the default kernel to start
    Default: 'python3'
--AsyncMappingKernelManager.kernel_info_timeout=
    Timeout for giving up on a kernel (in seconds). On starting and restarting
    kernels, we check whether the kernel is running and responsive by sending
    kernel_info_requests. This sets the timeout in seconds for how long the
    kernel can take before being presumed dead. This affects the
    MappingKernelManager (which handles kernel restarts) and the
    ZMQChannelsHandler (which handles the startup).
    Default: 60
--AsyncMappingKernelManager.kernel_manager_class=
    The kernel manager class.  This is configurable to allow subclassing of the
    AsyncKernelManager for customized behavior.
    Default: 'jupyter_client.ioloop.AsyncIOLoopKernelManager'
--AsyncMappingKernelManager.root_dir=
    Default: ''
--AsyncMappingKernelManager.shared_context=
    Share a single zmq.Context to talk to all my kernels
    Default: True
GatewayKernelManager(AsyncMappingKernelManager) options
-------------------------------------------------------
--GatewayKernelManager.allowed_message_types=...
    White list of allowed kernel message types. When the list is empty, all
    message types are allowed.
    Default: []
--GatewayKernelManager.buffer_offline_messages=
    Whether messages from kernels whose frontends have disconnected should be
    buffered in-memory. When True (default), messages are buffered and replayed
    on reconnect, avoiding lost messages due to interrupted connectivity.
    Disable if long-running kernels will produce too much output while no
    frontends are connected.
    Default: True
--GatewayKernelManager.cull_busy=
    Whether to consider culling kernels which are busy. Only effective if
    cull_idle_timeout > 0.
    Default: False
--GatewayKernelManager.cull_connected=
    Whether to consider culling kernels which have one or more connections. Only
    effective if cull_idle_timeout > 0.
    Default: False
--GatewayKernelManager.cull_idle_timeout=
    Timeout (in seconds) after which a kernel is considered idle and ready to be
    culled. Values of 0 or lower disable culling. Very short timeouts may result
    in kernels being culled for users with poor network connections.
    Default: 0
--GatewayKernelManager.cull_interval=
    The interval (in seconds) on which to check for idle kernels exceeding the
    cull timeout value.
    Default: 300
--GatewayKernelManager.default_kernel_name=
    The name of the default kernel to start
    Default: 'python3'
--GatewayKernelManager.kernel_info_timeout=
    Timeout for giving up on a kernel (in seconds). On starting and restarting
    kernels, we check whether the kernel is running and responsive by sending
    kernel_info_requests. This sets the timeout in seconds for how long the
    kernel can take before being presumed dead. This affects the
    MappingKernelManager (which handles kernel restarts) and the
    ZMQChannelsHandler (which handles the startup).
    Default: 60
--GatewayKernelManager.kernel_manager_class=
    The kernel manager class.  This is configurable to allow subclassing of the
    AsyncKernelManager for customized behavior.
    Default: 'jupyter_client.ioloop.AsyncIOLoopKernelManager'
--GatewayKernelManager.root_dir=
    Default: ''
--GatewayKernelManager.shared_context=
    Share a single zmq.Context to talk to all my kernels
    Default: True
GatewayKernelSpecManager(KernelSpecManager) options
---------------------------------------------------
--GatewayKernelSpecManager.ensure_native_kernel=
    If there is no Python kernelspec registered and the IPython kernel is
    available, ensure it is added to the spec list.
    Default: True
--GatewayKernelSpecManager.kernel_spec_class=
    The kernel spec class.  This is configurable to allow subclassing of the
    KernelSpecManager for customized behavior.
    Default: 'jupyter_client.kernelspec.KernelSpec'
--GatewayKernelSpecManager.whitelist=...
    Whitelist of allowed kernel names.
    By default, all installed kernels are allowed.
    Default: set()
GatewayClient(SingletonConfigurable) options
--------------------------------------------
--GatewayClient.auth_token=
    The authorization token used in the HTTP headers.
    (JUPYTER_GATEWAY_AUTH_TOKEN env var)
    Default: None
--GatewayClient.ca_certs=
    The filename of CA certificates or None to use defaults.
    (JUPYTER_GATEWAY_CA_CERTS env var)
    Default: None
--GatewayClient.client_cert=
    The filename for client SSL certificate, if any.
    (JUPYTER_GATEWAY_CLIENT_CERT env var)
    Default: None
--GatewayClient.client_key=
    The filename for client SSL key, if any.
    (JUPYTER_GATEWAY_CLIENT_KEY env var)
    Default: None
--GatewayClient.connect_timeout=
    The time allowed for HTTP connection establishment with the Gateway server.
    (JUPYTER_GATEWAY_CONNECT_TIMEOUT env var)
    Default: 40.0
--GatewayClient.env_whitelist=
    A comma-separated list of environment variable names that will be included,
    along with their values, in the kernel startup request. The corresponding
    `env_whitelist` configuration value must also be set on the Gateway server,
    since that value determines which environment variables are made available
    to the kernel. (JUPYTER_GATEWAY_ENV_WHITELIST env var)
    Default: ''
--GatewayClient.gateway_retry_interval=
    The time allowed for the first HTTP reconnection attempt with the Gateway
    server. Each subsequent attempt doubles the previous interval, capped at
    gateway_retry_interval_max. (JUPYTER_GATEWAY_RETRY_INTERVAL env var)
    Default: 1.0
--GatewayClient.gateway_retry_interval_max=
    The maximum time allowed for an HTTP reconnection retry with the Gateway
    server. (JUPYTER_GATEWAY_RETRY_INTERVAL_MAX env var)
    Default: 30.0
--GatewayClient.gateway_retry_max=
    The maximum number of HTTP reconnection retries with the Gateway server.
    (JUPYTER_GATEWAY_RETRY_MAX env var)
    Default: 5
--GatewayClient.headers=
    Additional HTTP headers to pass on the request. This value will be
    converted to a dict. (JUPYTER_GATEWAY_HEADERS env var)
    Default: '{}'
--GatewayClient.http_pwd=
    The password for HTTP authentication. (JUPYTER_GATEWAY_HTTP_PWD env var)
    Default: None
--GatewayClient.http_user=
    The username for HTTP authentication. (JUPYTER_GATEWAY_HTTP_USER env var)
    Default: None
--GatewayClient.kernels_endpoint=
    The Gateway API endpoint for accessing kernel resources
    (JUPYTER_GATEWAY_KERNELS_ENDPOINT env var)
    Default: '/api/kernels'
--GatewayClient.kernelspecs_endpoint=
    The Gateway API endpoint for accessing kernelspecs
    (JUPYTER_GATEWAY_KERNELSPECS_ENDPOINT env var)
    Default: '/api/kernelspecs'
--GatewayClient.kernelspecs_resource_endpoint=
    The Gateway endpoint for accessing kernelspec resources
    (JUPYTER_GATEWAY_KERNELSPECS_RESOURCE_ENDPOINT env var)
    Default: '/kernelspecs'
--GatewayClient.request_timeout=
    The time allowed for HTTP request completion.
    (JUPYTER_GATEWAY_REQUEST_TIMEOUT env var)
    Default: 40.0
--GatewayClient.url=
    The URL of the Kernel or Enterprise Gateway server where kernel
    specifications are defined and kernel management takes place. If defined,
    this Notebook server acts as a proxy for all kernel management and kernel
    specification retrieval. (JUPYTER_GATEWAY_URL env var)
    Default: None
--GatewayClient.validate_cert=
    For HTTPS requests, determines whether the server's certificate should be
    validated. (JUPYTER_GATEWAY_VALIDATE_CERT env var)
    Default: True
--GatewayClient.ws_url=
    The WebSocket URL of the Kernel or Enterprise Gateway server. If not
    provided, this value is derived from the Gateway URL with 'ws' in place of
    'http'. (JUPYTER_GATEWAY_WS_URL env var)
    Default: None
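As a sketch of how these options fit together, the following launches a notebook server that proxies all kernel management through a gateway; the URL and token are placeholders, and `--gateway-url` is the command-line alias for `GatewayClient.url`:

```shell
# Proxy all kernels through a (hypothetical) Kernel/Enterprise Gateway
jupyter notebook --gateway-url=http://gateway.example.com:8888 \
                 --GatewayClient.auth_token=my-secret-token
```

Equivalently, set the `JUPYTER_GATEWAY_URL` and `JUPYTER_GATEWAY_AUTH_TOKEN` environment variables before launching, as noted beside each option above.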
TerminalManager(LoggingConfigurable, NamedTermManager) options
--------------------------------------------------------------
--TerminalManager.cull_inactive_timeout=
    Timeout (in seconds) after which a terminal is considered inactive and
    ready to be culled. Values of 0 or lower disable culling.
    Default: 0
--TerminalManager.cull_interval=
    The interval (in seconds) on which to check for terminals exceeding the
    inactive timeout value.
    Default: 300