Settings¶
Using --setting¶
Datasette supports a number of settings. These can be set using the --setting name value option to datasette serve.
You can set multiple settings at once like this:
datasette mydatabase.db \
--setting default_page_size 50 \
--setting sql_time_limit_ms 3500 \
--setting max_returned_rows 2000
Settings can also be specified in the datasette.yaml configuration file.
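For example, the settings shown above could be stored in that file instead - a sketch, assuming they live under a top-level settings key:
settings:
  default_page_size: 50
  sql_time_limit_ms: 3500
  max_returned_rows: 2000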
Configuration directory mode¶
Normally you configure Datasette using command-line options. For a Datasette instance with custom templates, custom plugins, a static directory and several databases this can get quite verbose:
datasette one.db two.db \
--metadata=metadata.json \
--template-dir=templates/ \
--plugins-dir=plugins \
--static css:css
As an alternative to this, you can run Datasette in configuration directory mode. Create a directory with the following structure:
# In a directory called my-app:
my-app/one.db
my-app/two.db
my-app/datasette.yaml
my-app/metadata.json
my-app/templates/index.html
my-app/plugins/my_plugin.py
my-app/static/my.css
Now start Datasette by providing the path to that directory:
datasette my-app/
Datasette will detect the files in that directory and automatically configure itself using them. It will serve all *.db files that it finds, will load metadata.json if it exists, and will load the templates, plugins and static folders if they are present.
The files that can be included in this directory are as follows. All are optional.
*.db (or *.sqlite3 or *.sqlite) - SQLite database files that will be served by Datasette
datasette.yaml - Configuration for the Datasette instance
metadata.json - Metadata for those databases - metadata.yaml or metadata.yml can be used as well
inspect-data.json - the result of running datasette inspect *.db --inspect-file=inspect-data.json from the configuration directory - any database files listed here will be treated as immutable, so they should not be changed while Datasette is running
templates/ - a directory containing Custom templates
plugins/ - a directory containing plugins, see Writing one-off plugins
static/ - a directory containing static files - these will be served from /static/filename.txt, see Serving static files
Settings¶
The following options can be set using --setting name value, or by storing them in the datasette.yaml configuration file for use with Configuration directory mode.
default_allow_sql¶
Should users be able to execute arbitrary SQL queries by default?
Setting this to off
causes permission checks for execute-sql to fail by default.
datasette mydatabase.db --setting default_allow_sql off
Another way to achieve this is to add "allow_sql": false
to your datasette.yaml
file, as described in Controlling the ability to execute arbitrary SQL. This setting offers a more convenient way to do this.
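A minimal datasette.yaml sketch of that approach, based on the description above:
allow_sql: false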
default_page_size¶
The default number of rows returned by the table page. You can override this on a per-page basis using the ?_size=80
query string parameter, provided you do not specify a value higher than the max_returned_rows
setting. You can set this default using --setting
like so:
datasette mydatabase.db --setting default_page_size 50
sql_time_limit_ms¶
By default, queries have a time limit of one second. If a query takes longer than this to run Datasette will terminate the query and return an error.
If this time limit is too short for you, you can customize it using the sql_time_limit_ms
limit - for example, to increase it to 3.5 seconds:
datasette mydatabase.db --setting sql_time_limit_ms 3500
You can optionally set a lower time limit for an individual query using the ?_timelimit=100
query string argument:
/my-database/my-table?qSpecies=44&_timelimit=100
This would set the time limit to 100ms for that specific query. This feature is useful if you are working with databases of unknown size and complexity - a query that might make perfect sense for a smaller table could take too long to execute on a table with millions of rows. By setting custom time limits you can execute queries "optimistically" - e.g. give me an exact count of rows matching this query but only if it takes less than 100ms to calculate.
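For example, requesting the JSON version of the page above with curl against a local instance (default port assumed) will either return results within 100ms or an error:
curl 'http://localhost:8001/my-database/my-table.json?qSpecies=44&_timelimit=100'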
max_returned_rows¶
Datasette returns a maximum of 1,000 rows of data at a time. If you execute a query that returns more than 1,000 rows, Datasette will return the first 1,000 and include a warning that the result set has been truncated. You can use OFFSET/LIMIT or other methods in your SQL to implement pagination if you need to return more than 1,000 rows.
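For example, one way to page through a hypothetical my_table in 1,000-row chunks using LIMIT and OFFSET:
select * from my_table order by id limit 1000 offset 2000;
For large tables, keyset pagination (where id > :last_id order by id limit 1000) is usually faster than a growing OFFSET.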
You can increase or decrease this limit like so:
datasette mydatabase.db --setting max_returned_rows 2000
max_insert_rows¶
Maximum rows that can be inserted at a time using the bulk insert API, see Inserting rows. Defaults to 100.
You can increase or decrease this limit like so:
datasette mydatabase.db --setting max_insert_rows 1000
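As an illustrative sketch of the kind of bulk insert request this limit applies to - the database name, table name and $API_TOKEN here are assumptions:
curl -X POST http://localhost:8001/mydatabase/my_table/-/insert \
  -H "Authorization: Bearer $API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"rows": [{"id": 1, "name": "First"}, {"id": 2, "name": "Second"}]}'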
num_sql_threads¶
Maximum number of threads in the thread pool Datasette uses to execute SQLite queries. Defaults to 3.
datasette mydatabase.db --setting num_sql_threads 10
Setting this to 0 turns off threaded SQL queries entirely - useful for environments that do not support threading such as Pyodide.
allow_facet¶
Allow users to specify columns they would like to facet on using the ?_facet=COLNAME
URL parameter to the table view.
This is enabled by default. If disabled, facets will still be displayed if they have been specifically enabled in metadata.json
configuration for the table.
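For example, a metadata.json sketch that enables a facet for a hypothetical table - such a facet will continue to display even with allow_facet off:
{
  "databases": {
    "mydatabase": {
      "tables": {
        "my_table": {
          "facets": ["category"]
        }
      }
    }
  }
}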
Here's how to disable this feature:
datasette mydatabase.db --setting allow_facet off
default_facet_size¶
The default number of unique rows returned by Facets is 30. You can customize it like this:
datasette mydatabase.db --setting default_facet_size 50
facet_time_limit_ms¶
This is the time limit Datasette allows for calculating a facet, which defaults to 200ms:
datasette mydatabase.db --setting facet_time_limit_ms 1000
facet_suggest_time_limit_ms¶
When Datasette calculates suggested facets it needs to run a SQL query for every column in your table. The default for this time limit is 50ms to account for the fact that it needs to run once for every column. If the time limit is exceeded the column will not be suggested as a facet.
You can increase this time limit like so:
datasette mydatabase.db --setting facet_suggest_time_limit_ms 500
suggest_facets¶
Should Datasette calculate suggested facets? On by default, turn this off like so:
datasette mydatabase.db --setting suggest_facets off
allow_download¶
Should users be able to download the original SQLite database using a link on the database index page? This is turned on by default. However, databases can only be downloaded if they are served in immutable mode and not in-memory. If downloading is unavailable for either of these reasons, the download link is hidden even if allow_download
is on. To disable database downloads, use the following:
datasette mydatabase.db --setting allow_download off
allow_signed_tokens¶
Should users be able to create signed API tokens to access Datasette?
This is turned on by default. Use the following to turn it off:
datasette mydatabase.db --setting allow_signed_tokens off
Turning this setting off will disable the /-/create-token
page, described here. It will also cause any incoming Authorization: Bearer dstok_...
API tokens to be ignored.
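While the setting is on, a signed token can be used by sending it in that header - the token value here is illustrative:
curl -H 'Authorization: Bearer dstok_...' http://localhost:8001/-/actor.json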
max_signed_tokens_ttl¶
Maximum allowed expiry time for signed API tokens created by users.
Defaults to 0
which means no limit - tokens can be created that will never expire.
Set this to a value in seconds to limit the maximum expiry time. For example, to set that limit to 24 hours you would use:
datasette mydatabase.db --setting max_signed_tokens_ttl 86400
This setting is enforced when incoming tokens are processed.
default_cache_ttl¶
Default HTTP caching max-age header in seconds, used for Cache-Control: max-age=X. Can be overridden on a per-request basis using the ?_ttl= query string parameter. Set this to 0 to disable HTTP caching entirely. Defaults to 5 seconds.
datasette mydatabase.db --setting default_cache_ttl 60
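One way to sanity-check the resulting header, assuming a local instance on the default port and a hypothetical my_table:
curl -s -D - -o /dev/null 'http://localhost:8001/mydatabase/my_table.json?_ttl=3600'
The response headers should then include Cache-Control: max-age=3600.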
cache_size_kb¶
Sets the amount of memory SQLite uses for its per-connection cache, in KB.
datasette mydatabase.db --setting cache_size_kb 5000
allow_csv_stream¶
Enables the CSV export feature where an entire table (potentially hundreds of thousands of rows) can be exported as a single CSV file. This is turned on by default - you can turn it off like this:
datasette mydatabase.db --setting allow_csv_stream off
max_csv_mb¶
The maximum size of CSV that can be exported, in megabytes. Defaults to 100MB. You can disable the limit entirely by setting this to 0:
datasette mydatabase.db --setting max_csv_mb 0
truncate_cells_html¶
In the HTML table view, truncate any strings that are longer than this value. The full value will still be available in CSV, JSON and on the individual row HTML page. Set this to 0 to disable truncation.
datasette mydatabase.db --setting truncate_cells_html 0
force_https_urls¶
Forces self-referential URLs in the JSON output to always use the https://
protocol. This is useful for cases where the application itself is hosted using
HTTP but is served to the outside world via a proxy that enables HTTPS.
datasette mydatabase.db --setting force_https_urls 1
template_debug¶
This setting enables template context debug mode, which is useful to help understand what variables are available to custom templates when you are writing them.
Enable it like this:
datasette mydatabase.db --setting template_debug 1
Now you can add ?_context=1
or &_context=1
to any Datasette page to see the context that was passed to that template.
trace_debug¶
This setting enables appending ?_trace=1
to any page in order to see the SQL queries and other trace information that was used to generate that page.
Enable it like this:
datasette mydatabase.db --setting trace_debug 1
See datasette.tracer for details on how to hook into this mechanism as a plugin author.
base_url¶
If you are running Datasette behind a proxy, it may be useful to change the root path used for the Datasette instance.
For example, if you are sending traffic from https://www.example.com/tools/datasette/
through to a proxied Datasette instance you may wish Datasette to use /tools/datasette/
as its root URL.
You can do that like so:
datasette mydatabase.db --setting base_url /tools/datasette/
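As a sketch, the matching nginx proxy configuration might look like this - host, port and paths are assumptions:
location /tools/datasette/ {
    proxy_pass http://127.0.0.1:8001/tools/datasette/;
    proxy_set_header Host $host;
}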
Configuring the secret¶
Datasette uses a secret string to sign secure values such as cookies.
If you do not provide a secret, Datasette will create one when it starts up. This secret will reset every time the Datasette server restarts though, so things like authentication cookies and API tokens will not stay valid between restarts.
You can pass a secret to Datasette in two ways: with the --secret
command-line option or by setting a DATASETTE_SECRET
environment variable.
datasette mydb.db --secret=SECRET_VALUE_HERE
Or:
export DATASETTE_SECRET=SECRET_VALUE_HERE
datasette mydb.db
One way to generate a secure random secret is to use Python like this:
python3 -c 'import secrets; print(secrets.token_hex(32))'
cdb19e94283a20f9d42cca50c5a4871c0aa07392db308755d60a1a5b9bb0fa52
Plugin authors make use of this signing mechanism in their plugins using .sign(value, namespace="default") and .unsign(value, namespace="default").
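A minimal one-off plugin sketch that round-trips a value through those methods - the route, namespace and payload here are invented for illustration:
from datasette import hookimpl
from datasette.utils.asgi import Response


@hookimpl
def register_routes():
    async def sign_demo(datasette):
        # Sign a value, then verify it using the same namespace
        signed = datasette.sign({"user": "demo"}, namespace="demo")
        round_tripped = datasette.unsign(signed, namespace="demo")
        return Response.json({"signed": signed, "round_tripped": round_tripped})

    return [(r"^/-/sign-demo$", sign_demo)]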
Using secrets with datasette publish¶
The datasette publish and datasette package commands both generate a secret for you automatically when Datasette is deployed.
This means that every time you deploy a new version of a Datasette project, a new secret will be generated. This will cause signed cookies to become invalid on every fresh deploy.
You can fix this by creating a secret that will be used for multiple deploys and passing it using the --secret
option:
datasette publish cloudrun mydb.db --service=my-service --secret=cdb19e94283a20f9d42cca5