first commit

hailin 2025-05-13 14:00:38 +08:00
commit e9b25a11e3
547 changed files with 77775 additions and 0 deletions

.github/CODEOWNERS (new file)
@@ -0,0 +1 @@
* @supabase/dev-workflows

.github/ISSUE_TEMPLATE/bug_report.md (new file)
@@ -0,0 +1,38 @@
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: ''
assignees: ''
---
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**System information**
Rerun the failing command with the `--create-ticket` flag.
- Ticket ID: [e.g. ab1ac733e31e4f928a4d7c8402543712]
- Version of OS: [e.g. Ubuntu 22.04]
- Version of CLI: [e.g. v1.60.0]
- Version of Docker: [e.g. v25.0.3]
- Versions of services: [output from `supabase services` command]
**Additional context**
If applicable, add any other context about the problem here.
- Browser [e.g. chrome, safari]
- Version of supabase-js [e.g. v2.22.0]
- Version of Node.js [e.g. v16.20.0]

(new file, path not shown)
@@ -0,0 +1,20 @@
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: ''
assignees: ''
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.

(new file, path not shown)
@@ -0,0 +1,20 @@
---
name: Improve documentation
about: Suggest an improvement to our documentation
title: ''
labels: ''
assignees: ''
---
**Link**
Add a link to the page which needs improvement (if relevant)
**Describe the problem**
Is the documentation missing? Or is it confusing? Why is it confusing?
**Describe the improvement**
A clear and concise description of the improvement.
**Additional context**
Add any other context or screenshots that help clarify your question.

.github/dependabot.yml (new file)
@@ -0,0 +1,23 @@
version: 2
updates:
- package-ecosystem: "github-actions"
directory: "/"
schedule:
interval: "daily"
- package-ecosystem: "gomod"
directory: "/"
schedule:
interval: "daily"
- package-ecosystem: "npm"
directory: "/"
schedule:
interval: "daily"
- package-ecosystem: "docker"
directory: "pkg/config/templates"
schedule:
interval: "daily"
ignore:
- dependency-name: "library/kong"
- dependency-name: "inbucket/inbucket"
- dependency-name: "darthsim/imgproxy"
- dependency-name: "timberio/vector"

.github/workflows/automerge.yml (new file)
@@ -0,0 +1,38 @@
# Adapted from https://blog.somewhatabstract.com/2021/10/11/setting-up-dependabot-with-github-actions-to-approve-and-merge/
name: Dependabot auto-merge
on: pull_request
permissions:
pull-requests: write
contents: write
jobs:
dependabot:
runs-on: ubuntu-latest
# Checking the actor will prevent your Action run failing on non-Dependabot
# PRs but also ensures that it only does work for Dependabot PRs.
if: ${{ github.actor == 'dependabot[bot]' }}
steps:
# This first step will fail if there's no metadata and so the approval
# will not occur.
- name: Dependabot metadata
id: meta
uses: dependabot/fetch-metadata@v2
with:
github-token: "${{ secrets.GITHUB_TOKEN }}"
# Here the PR gets approved.
- name: Approve a PR
if: ${{ steps.meta.outputs.update-type == 'version-update:semver-patch' || (!startsWith(steps.meta.outputs.previous-version, '0.') && steps.meta.outputs.update-type == 'version-update:semver-minor') }}
run: gh pr review --approve "${{ github.event.pull_request.html_url }}"
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Finally, this sets the PR to allow auto-merging for patch and minor
# updates if all checks pass
- name: Enable auto-merge for Dependabot PRs
if: ${{ steps.meta.outputs.update-type == 'version-update:semver-patch' || (!startsWith(steps.meta.outputs.previous-version, '0.') && steps.meta.outputs.update-type == 'version-update:semver-minor') }}
run: gh pr merge --auto --squash "${{ github.event.pull_request.html_url }}"
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
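The approval condition above allows patch bumps unconditionally but gates minor bumps on the dependency already being past 0.x (pre-1.0 minor releases may be breaking under semver). A minimal shell sketch of that policy — the function name and sample inputs are illustrative, not part of the workflow:

```shell
#!/bin/sh
set -e
# Mirrors the workflow's if-condition: auto-merge patch updates always,
# minor updates only when the previous version is not 0.x.
should_automerge() {
  prev="$1" update_type="$2"
  case "$update_type" in
    version-update:semver-patch) return 0 ;;
    version-update:semver-minor)
      case "$prev" in 0.*) return 1 ;; *) return 0 ;; esac ;;
    *) return 1 ;;  # major updates always need a human
  esac
}

should_automerge "1.4.2" "version-update:semver-minor" && echo "auto-merge" || echo "hold"  # prints: auto-merge
should_automerge "0.9.1" "version-update:semver-minor" && echo "auto-merge" || echo "hold"  # prints: hold
```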

.github/workflows/ci.yml (new file)
@@ -0,0 +1,80 @@
name: CI
on:
pull_request:
push:
branches:
- develop
jobs:
test:
name: Test
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-go@v5
with:
go-version-file: go.mod
cache: true
# Required by: internal/utils/credentials/keyring_test.go
- uses: t1m0thyj/unlock-keyring@v1
- run: |
go run gotest.tools/gotestsum -- -race -v -count=1 -coverprofile=coverage.out \
`go list ./... | grep -Ev 'cmd|docs|examples|pkg/api|tools'`
- uses: coverallsapp/github-action@v2
with:
file: coverage.out
format: golang
lint:
name: Lint
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-go@v5
with:
go-version-file: go.mod
# Linter requires no cache
cache: false
- uses: golangci/golangci-lint-action@v6
with:
args: --timeout 3m --verbose
start:
name: Start
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-go@v5
with:
go-version-file: go.mod
cache: true
- run: go build main.go
- run: ./main init
- run: ./main start
env:
SUPABASE_INTERNAL_IMAGE_REGISTRY: ghcr.io
codegen:
name: Codegen
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-go@v5
with:
go-version-file: go.mod
cache: true
- run: go generate
- run: |
if ! git diff --ignore-space-at-eol --exit-code --quiet pkg; then
echo "Detected uncommitted changes after codegen. See status below:"
git diff
exit 1
fi
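The codegen job's drift check relies on `git diff --exit-code` returning non-zero when the working tree differs from HEAD. A toy repro of that gate in a throwaway repo — the file name and messages are illustrative:

```shell
#!/bin/sh
set -e
# Throwaway repo with one committed "generated" file
cd "$(mktemp -d)"
git init -q .
git config user.email ci@example.com
git config user.name ci
echo "generated" > api.gen.go
git add api.gen.go
git commit -qm "chore: codegen"

# Clean tree: --exit-code yields status 0
git diff --exit-code --quiet && echo "clean"

# Simulate codegen producing different output: status becomes non-zero
echo "drift" >> api.gen.go
git diff --exit-code --quiet || echo "detected uncommitted changes after codegen"
```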

.github/workflows/codeql-analysis.yml (new file)
@@ -0,0 +1,93 @@
# For most projects, this workflow file will not need changing; you simply need
# to commit it to your repository.
#
# You may wish to alter this file to override the set of languages analyzed,
# or to provide custom queries or build logic.
#
# ******** NOTE ********
# We have attempted to detect the languages in your repository. Please check
# the `language` matrix defined below to confirm you have the correct set of
# supported CodeQL languages.
#
name: "CodeQL"
on:
pull_request:
push:
branches:
- develop
jobs:
analyze:
name: Analyze (${{ matrix.language }})
# Runner size impacts CodeQL analysis time. To learn more, please see:
# - https://gh.io/recommended-hardware-resources-for-running-codeql
# - https://gh.io/supported-runners-and-hardware-resources
# - https://gh.io/using-larger-runners (GitHub.com only)
# Consider using larger runners or machines with greater resources for possible analysis time improvements.
runs-on: ${{ (matrix.language == 'swift' && 'macos-latest') || 'ubuntu-latest' }}
timeout-minutes: ${{ (matrix.language == 'swift' && 120) || 360 }}
permissions:
# required for all workflows
security-events: write
# required to fetch internal or private CodeQL packs
packages: read
# only required for workflows in private repositories
actions: read
contents: read
strategy:
fail-fast: false
matrix:
include:
- language: go
build-mode: autobuild
- language: javascript-typescript
build-mode: none
# CodeQL supports the following values for 'language': 'c-cpp', 'csharp', 'go', 'java-kotlin', 'javascript-typescript', 'python', 'ruby', 'swift'
# Use 'c-cpp' to analyze code written in C, C++ or both
# Use 'java-kotlin' to analyze code written in Java, Kotlin or both
# Use 'javascript-typescript' to analyze code written in JavaScript, TypeScript or both
# To learn more about changing the languages that are analyzed or customizing the build mode for your analysis,
# see https://docs.github.com/en/code-security/code-scanning/creating-an-advanced-setup-for-code-scanning/customizing-your-advanced-setup-for-code-scanning.
# If you are analyzing a compiled language, you can modify the 'build-mode' for that language to customize how
# your codebase is analyzed, see https://docs.github.com/en/code-security/code-scanning/creating-an-advanced-setup-for-code-scanning/codeql-code-scanning-for-compiled-languages
steps:
- name: Checkout repository
uses: actions/checkout@v4
# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
uses: github/codeql-action/init@v3
with:
languages: ${{ matrix.language }}
build-mode: ${{ matrix.build-mode }}
# If you wish to specify custom queries, you can do so here or in a config file.
# By default, queries listed here will override any specified in a config file.
# Prefix the list here with "+" to use these queries and those in the config file.
# For more details on CodeQL's query packs, refer to: https://docs.github.com/en/code-security/code-scanning/automatically-scanning-your-code-for-vulnerabilities-and-errors/configuring-code-scanning#using-queries-in-ql-packs
# queries: security-extended,security-and-quality
# If the analyze step fails for one of the languages you are analyzing with
# "We were unable to automatically build your code", modify the matrix above
# to set the build mode to "manual" for that language. Then modify this step
# to build your code.
# Command-line programs to run using the OS shell.
# 📚 See https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idstepsrun
- if: matrix.build-mode == 'manual'
shell: bash
run: |
echo 'If you are using a "manual" build mode for one or more of the' \
'languages you are analyzing, replace this with the commands to build' \
'your code, for example:'
echo ' make bootstrap'
echo ' make release'
exit 1
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v3
with:
category: "/language:${{matrix.language}}"

.github/workflows/deploy-check.yml (new file)
@@ -0,0 +1,20 @@
name: Check Deploy
on:
pull_request_target:
types:
- opened
- reopened
- synchronize
- edited
branches:
- main
jobs:
check:
if: github.head_ref != 'develop'
runs-on: ubuntu-latest
steps:
- run: |
echo "Pull requests to main branch are only allowed from develop branch."
exit 1

.github/workflows/deploy.yml (new file)
@@ -0,0 +1,22 @@
name: Prod Deploy
on:
# Run this action every Tuesday at 02:00 UTC (Singapore 10AM)
schedule:
- cron: "0 2 * * 2"
workflow_dispatch:
permissions:
pull-requests: write
contents: write
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0
- run: gh pr create -B main -H develop --title 'Prod deploy' --fill
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

.github/workflows/fast-forward.yml (new file)
@@ -0,0 +1,32 @@
name: Fast-forward
on:
pull_request_review:
types:
- submitted
permissions:
contents: write
jobs:
approved:
if: |
github.event.pull_request.head.ref == 'develop' &&
github.event.pull_request.base.ref == 'main' &&
github.event.review.state == 'approved'
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0
- run: |
git checkout main
git merge --ff-only "${{ github.event.pull_request.head.sha }}"
git push origin main
publish:
needs:
- approved
# Call workflow explicitly because events from actions cannot trigger more actions
uses: ./.github/workflows/release.yml
secrets: inherit

.github/workflows/install.yml (new file)
@@ -0,0 +1,124 @@
name: Install
on:
pull_request:
paths:
- '.github/workflows/install.yml'
- 'package.json'
- 'scripts/**'
push:
branches:
- develop
paths:
- '.github/workflows/install.yml'
- 'package.json'
- 'scripts/**'
jobs:
pack:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- run: |
jq -c '.version = "1.28.0"' package.json > tmp.$$.json
mv tmp.$$.json package.json
npm pack
- uses: actions/upload-artifact@v4
with:
name: installer
path: supabase-1.28.0.tgz
npm:
needs: pack
strategy:
fail-fast: false
matrix:
os: [ubuntu-latest, macos-latest, windows-latest]
runs-on: ${{ matrix.os }}
steps:
- uses: actions/download-artifact@v4
with:
name: installer
- run: npm init -y
- run: npm i --save-dev ./supabase-1.28.0.tgz
- run: npx --no-install supabase --version
yarn:
needs: pack
strategy:
fail-fast: false
matrix:
os: [ubuntu-latest, macos-latest, windows-latest]
runs-on: ${{ matrix.os }}
steps:
- uses: actions/download-artifact@v4
with:
name: installer
- run: yarn init -y
- run: yarn add -D ./supabase-1.28.0.tgz
- run: yarn supabase --version
yarn_berry:
needs: pack
strategy:
fail-fast: false
matrix:
os: [ubuntu-latest, macos-latest, windows-latest]
runs-on: ${{ matrix.os }}
steps:
- uses: actions/download-artifact@v4
with:
name: installer
- run: yarn set version berry
# - run: yarn config set nodeLinker node-modules
- run: yarn init -y
- run: yarn add -D ./supabase-1.28.0.tgz
- if: ${{ matrix.os != 'windows-latest' }}
run: yarn supabase --version
# Workaround for running extensionless executable on windows
- if: ${{ matrix.os == 'windows-latest' }}
run: |
& "$(yarn bin supabase).exe" --version
pnpm:
needs: pack
strategy:
fail-fast: false
matrix:
os: [ubuntu-latest, macos-latest, windows-latest]
runs-on: ${{ matrix.os }}
steps:
- uses: actions/download-artifact@v4
with:
name: installer
- run: npm install -g pnpm
- run: pnpm init
- run: pnpm i --save-dev ./supabase-1.28.0.tgz
- run: pnpm supabase --version
bun:
needs: pack
strategy:
fail-fast: false
matrix:
# Bun build is experimental on windows
os: [ubuntu-latest, macos-latest]
runs-on: ${{ matrix.os }}
steps:
- uses: actions/download-artifact@v4
with:
name: installer
- uses: oven-sh/setup-bun@v2
with:
bun-version: latest
- run: |
echo '{"trustedDependencies": ["supabase"]}' > package.json
- run: bun add -D ./supabase-1.28.0.tgz
- run: bunx supabase --version

.github/workflows/mirror-image.yml (new file)
@@ -0,0 +1,46 @@
name: Mirror Image
on:
workflow_call:
inputs:
image:
required: true
type: string
workflow_dispatch:
inputs:
image:
description: "org/image:tag"
required: true
type: string
jobs:
mirror:
runs-on: ubuntu-latest
permissions:
contents: read
packages: write
id-token: write
steps:
- id: strip
run: |
TAG=${{ inputs.image }}
echo "image=${TAG##*/}" >> $GITHUB_OUTPUT
- name: configure aws credentials
uses: aws-actions/configure-aws-credentials@v4.1.0
with:
role-to-assume: ${{ secrets.PROD_AWS_ROLE }}
aws-region: us-east-1
- uses: docker/login-action@v3
with:
registry: public.ecr.aws
- uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- uses: akhilerm/tag-push-action@v2.2.0
with:
src: docker.io/${{ inputs.image }}
dst: |
public.ecr.aws/supabase/${{ steps.strip.outputs.image }}
ghcr.io/supabase/${{ steps.strip.outputs.image }}
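The `strip` step uses POSIX parameter expansion to drop the source org from the image reference; the related single-prefix form appears again in tag-npm.yml (`${RELEASE_TAG#v}`). A quick sketch with illustrative image and tag values:

```shell
#!/bin/sh
# ${TAG##*/} removes the longest prefix matching '*/',
# i.e. everything up to and including the last '/'
TAG="supabase/postgres:15.1.0"
echo "${TAG##*/}"        # prints: postgres:15.1.0

# ${VAR#v} removes a single leading 'v' (shortest match)
RELEASE_TAG="v1.2.3"
echo "${RELEASE_TAG#v}"  # prints: 1.2.3
```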

.github/workflows/mirror.yml (new file)
@@ -0,0 +1,57 @@
name: Mirror Dependencies
# We mirror upstream container images like Migra, imgproxy, etc. because these
# are usually only available on certain image registries and not others (e.g. only
# on Docker Hub and not on ghcr.io or AWS ECR).
#
# For container images that we control, we usually publish to Docker Hub,
# ghcr.io, and AWS ECR.
on:
# We can't trigger the mirror job on PR merge because certain tests would fail
# until we mirror some images. E.g. a PR to update the imgproxy image version
# would fail, because there is a test that creates a container from the
# updated image version, which would fail because the image hasn't been
# mirrored yet. It's a catch-22!
#
# TODO: Make the cli start test run *after* we mirror images (if needed).
pull_request_review:
types:
- submitted
workflow_dispatch:
jobs:
setup:
runs-on: ubuntu-latest
if: ${{ github.event_name == 'workflow_dispatch' || github.event.review.state == 'approved' }}
outputs:
tags: ${{ steps.list.outputs.tags }}
curr: ${{ steps.curr.outputs.tags }}
steps:
- uses: actions/checkout@v4
- uses: actions/setup-go@v5
with:
go-version-file: go.mod
cache: true
- id: list
run: |
echo "tags=$(go run tools/listdep/main.go)" >> $GITHUB_OUTPUT
- id: curr
name: List main branch dependencies
if: github.ref != 'refs/heads/main'
run: |
git fetch origin main
git checkout main
echo "tags=$(go run tools/listdep/main.go)" >> $GITHUB_OUTPUT
publish:
needs:
- setup
if: ${{ needs.setup.outputs.tags != needs.setup.outputs.curr }}
strategy:
matrix:
src: ${{ fromJson(needs.setup.outputs.tags) }}
# Call workflow explicitly because events from actions cannot trigger more actions
uses: ./.github/workflows/mirror-image.yml
with:
image: ${{ matrix.src }}
secrets: inherit

.github/workflows/pg-prove.yml (new file)
@@ -0,0 +1,85 @@
name: Publish pg_prove
on:
workflow_dispatch:
jobs:
settings:
runs-on: ubuntu-latest
outputs:
image_tag: supabase/pg_prove:${{ steps.version.outputs.pg_prove }}
steps:
- uses: docker/setup-buildx-action@v3
- uses: docker/build-push-action@v6
with:
load: true
context: https://github.com/horrendo/pg_prove.git
target: builder
tags: supabase/pg_prove:builder
- id: version
# Replace the space with '=' to get a key=value string, i.e. pg_prove=3.36
run: |
docker run --rm -a STDOUT supabase/pg_prove:builder pg_prove --version \
| tr ' ' '=' \
>> $GITHUB_OUTPUT
shell: bash
build_image:
needs:
- settings
strategy:
matrix:
include:
- runner: [self-hosted, X64]
arch: amd64
- runner: arm-runner
arch: arm64
runs-on: ${{ matrix.runner }}
timeout-minutes: 180
outputs:
image_digest: ${{ steps.build.outputs.digest }}
steps:
- run: docker context create builders
- uses: docker/setup-buildx-action@v3
with:
endpoint: builders
- uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
- id: build
uses: docker/build-push-action@v6
with:
push: true
context: https://github.com/horrendo/pg_prove.git
tags: ${{ needs.settings.outputs.image_tag }}_${{ matrix.arch }}
platforms: linux/${{ matrix.arch }}
cache-from: type=gha,scope=${{ github.ref_name }}-pg_prove-${{ matrix.arch }}
cache-to: type=gha,mode=max,scope=${{ github.ref_name }}-pg_prove-${{ matrix.arch }}
merge_manifest:
needs:
- settings
- build_image
runs-on: ubuntu-latest
steps:
- uses: docker/setup-buildx-action@v3
- uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
- name: Merge multi-arch manifests
run: |
docker buildx imagetools create -t ${{ needs.settings.outputs.image_tag }} \
${{ needs.settings.outputs.image_tag }}_amd64 \
${{ needs.settings.outputs.image_tag }}_arm64
publish:
needs:
- settings
- merge_manifest
# Call workflow explicitly because events from actions cannot trigger more actions
uses: ./.github/workflows/mirror-image.yml
with:
image: ${{ needs.settings.outputs.image_tag }}
secrets: inherit

.github/workflows/publish-migra.yml (new file)
@@ -0,0 +1,85 @@
name: Publish migra
on:
workflow_dispatch:
jobs:
settings:
runs-on: ubuntu-latest
outputs:
image_tag: supabase/migra:${{ steps.version.outputs.migra }}
steps:
- uses: docker/setup-buildx-action@v3
- uses: docker/build-push-action@v6
with:
load: true
context: https://github.com/djrobstep/migra.git
tags: supabase/migra:builder
- id: version
# Extract the version from pip output as a key=value string, i.e. migra=3.0.1663481299
run: |
docker run --rm -a STDOUT supabase/migra:builder pip show migra \
| grep 'Version' \
| sed -E 's/Version: (.*)/migra=\1/g' \
>> $GITHUB_OUTPUT
shell: bash
build_image:
needs:
- settings
strategy:
matrix:
include:
- runner: [self-hosted, X64]
arch: amd64
- runner: arm-runner
arch: arm64
runs-on: ${{ matrix.runner }}
timeout-minutes: 180
outputs:
image_digest: ${{ steps.build.outputs.digest }}
steps:
- run: docker context create builders
- uses: docker/setup-buildx-action@v3
with:
endpoint: builders
- uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
- id: build
uses: docker/build-push-action@v6
with:
push: true
context: https://github.com/djrobstep/migra.git
tags: ${{ needs.settings.outputs.image_tag }}_${{ matrix.arch }}
platforms: linux/${{ matrix.arch }}
cache-from: type=gha,scope=${{ github.ref_name }}-migra-${{ matrix.arch }}
cache-to: type=gha,mode=max,scope=${{ github.ref_name }}-migra-${{ matrix.arch }}
merge_manifest:
needs:
- settings
- build_image
runs-on: ubuntu-latest
steps:
- uses: docker/setup-buildx-action@v3
- uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
- name: Merge multi-arch manifests
run: |
docker buildx imagetools create -t ${{ needs.settings.outputs.image_tag }} \
${{ needs.settings.outputs.image_tag }}_amd64 \
${{ needs.settings.outputs.image_tag }}_arm64
publish:
needs:
- settings
- merge_manifest
# Call workflow explicitly because events from actions cannot trigger more actions
uses: ./.github/workflows/mirror-image.yml
with:
image: ${{ needs.settings.outputs.image_tag }}
secrets: inherit
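The version step above scrapes `pip show` output into a `key=value` line for `$GITHUB_OUTPUT`. The grep/sed pipeline can be exercised offline against a faked `pip show` payload — the version value here is illustrative:

```shell
#!/bin/sh
# Simulate `pip show migra` output and extract "migra=<version>"
printf 'Name: migra\nVersion: 3.0.1663481299\n' \
  | grep 'Version' \
  | sed -E 's/Version: (.*)/migra=\1/g'
# prints: migra=3.0.1663481299
```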

.github/workflows/release-beta.yml (new file)
@@ -0,0 +1,90 @@
name: Release (Beta)
on:
push:
branches:
- develop
workflow_dispatch:
jobs:
release:
name: semantic-release
runs-on: ubuntu-latest
permissions:
contents: write
outputs:
new-release-published: ${{ steps.semantic-release.outputs.new_release_published }}
new-release-version: ${{ steps.semantic-release.outputs.new_release_version }}
new-release-channel: ${{ steps.semantic-release.outputs.new_release_channel }}
steps:
- uses: actions/checkout@v4
- id: semantic-release
uses: cycjimmy/semantic-release-action@v4
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
goreleaser:
name: GoReleaser
needs:
- release
if: needs.release.outputs.new-release-published == 'true'
permissions:
contents: write
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0
- uses: actions/setup-go@v5
with:
go-version-file: go.mod
cache: true
- uses: goreleaser/goreleaser-action@v6
with:
distribution: goreleaser
version: ~> v2
args: release --clean
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
SENTRY_DSN: ${{ secrets.SENTRY_DSN }}
- run: gh release edit v${{ needs.release.outputs.new-release-version }} --draft=false --prerelease
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
commit:
name: Publish Brew and Scoop
needs:
- release
- goreleaser
if: needs.release.outputs.new-release-published == 'true'
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-go@v5
with:
go-version-file: go.mod
cache: true
- run: go run tools/publish/main.go --beta "${{ needs.release.outputs.new-release-version }}"
env:
GITHUB_TOKEN: ${{ secrets.GH_PAT }}
publish:
name: Publish NPM
needs:
- release
- goreleaser
if: needs.release.outputs.new-release-published == 'true'
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: "16.x"
registry-url: "https://registry.npmjs.org"
- run: npm --git-tag-version=false version ${{ needs.release.outputs.new-release-version }}
- run: npm publish --tag ${{ needs.release.outputs.new-release-channel }}
env:
NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}

.github/workflows/release.yml (new file)
@@ -0,0 +1,91 @@
name: Release
on:
push:
branches:
- main
workflow_call:
jobs:
settings:
runs-on: ubuntu-latest
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
outputs:
release_tag: ${{ steps.prerelease.outputs.tagName }}
steps:
- uses: actions/checkout@v4
- id: prerelease
run: |
gh release list --limit 1 --json tagName --jq \
'.[]|to_entries|map("\(.key)=\(.value|tostring)")|.[]' >> $GITHUB_OUTPUT
- run: gh release edit ${{ steps.prerelease.outputs.tagName }} --latest --prerelease=false
commit:
name: Publish Brew and Scoop
needs:
- settings
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-go@v5
with:
go-version-file: go.mod
cache: true
- run: go run tools/publish/main.go ${{ needs.settings.outputs.release_tag }}
env:
GITHUB_TOKEN: ${{ secrets.GH_PAT }}
publish:
name: Publish NPM
needs:
- settings
uses: ./.github/workflows/tag-npm.yml
with:
release: ${{ needs.settings.outputs.release_tag }}
secrets: inherit
compose:
name: Bump self-hosted versions
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-go@v5
with:
go-version-file: go.mod
cache: true
- run: go run tools/selfhost/main.go
env:
GITHUB_TOKEN: ${{ secrets.GH_PAT }}
changelog:
name: Publish changelog
needs:
- commit
- publish
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-go@v5
with:
go-version-file: go.mod
cache: true
- run: go run tools/changelog/main.go ${{ secrets.SLACK_CHANNEL }}
env:
GITHUB_TOKEN: ${{ secrets.GH_PAT }}
SLACK_TOKEN: ${{ secrets.SLACK_TOKEN }}
docs:
name: Publish reference docs
needs:
- settings
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-go@v5
with:
go-version-file: go.mod
cache: true
- run: go run docs/main.go ${{ needs.settings.outputs.release_tag }} | go run tools/bumpdoc/main.go apps/docs/spec/cli_v1_commands.yaml
env:
GITHUB_TOKEN: ${{ secrets.GH_PAT }}

.github/workflows/tag-npm.yml (new file)
@@ -0,0 +1,29 @@
name: Tag NPM
on:
workflow_call:
inputs:
release:
required: true
type: string
workflow_dispatch:
inputs:
release:
description: "v1.0.0"
required: true
type: string
jobs:
tag:
name: Move latest tag
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: "16.x"
registry-url: "https://registry.npmjs.org"
- run: npm dist-tag add "supabase@${RELEASE_TAG#v}" latest
env:
RELEASE_TAG: ${{ inputs.release }}
NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}

.gitignore (new file)
@@ -0,0 +1,31 @@
# General
.DS_Store
.env
# Binaries for programs and plugins
*.exe
*.exe~
*.dll
*.so
*.dylib
/cli
# Test binary, built with `go test -c`
*.test
# Output of the go coverage tool, specifically when used with LiteIDE
*.out
# Dependency directories (remove the comment below to include it)
# vendor/
# IDE
/.vscode
/.idea
# NPM
node_modules
package-lock.json
# Initialized by cli for local testing
/supabase

.golangci.yml (new file)
@@ -0,0 +1,23 @@
linters:
enable:
- dogsled
- dupl
- gofmt
- goimports
- gosec
- misspell
- nakedret
- stylecheck
- unconvert
- unparam
- whitespace
- errcheck
- gosimple
- staticcheck
- ineffassign
- unused
linters-settings:
stylecheck:
checks: ["all", "-ST1003"]
dupl:
threshold: 250

.goreleaser.yml (new file)
@@ -0,0 +1,46 @@
version: 2
project_name: supabase
builds:
- id: supabase
binary: supabase
flags:
- -trimpath
ldflags:
- -s -w -X github.com/supabase/cli/internal/utils.Version={{.Version}} -X github.com/supabase/cli/internal/utils.SentryDsn={{ .Env.SENTRY_DSN }}
env:
- CGO_ENABLED=0
targets:
- darwin_amd64
- darwin_arm64
- linux_amd64
- linux_arm64
- windows_amd64
- windows_arm64
archives:
- name_template: '{{ .ProjectName }}_{{ .Os }}_{{ .Arch }}{{ with .Arm }}v{{ . }}{{ end }}{{ with .Mips }}_{{ . }}{{ end }}{{ if not (eq .Amd64 "v1") }}{{ .Amd64 }}{{ end }}'
release:
draft: true
replace_existing_draft: true
prerelease: auto
changelog:
use: github
groups:
- title: Features
regexp: '^.*?feat(\([[:word:]]+\))??!?:.+$'
order: 0
- title: "Bug fixes"
regexp: '^.*?fix(\([[:word:]]+\))??!?:.+$'
order: 1
- title: Others
order: 999
nfpms:
- vendor: Supabase
description: Supabase CLI
maintainer: Supabase CLI
homepage: https://supabase.com
license: MIT
formats:
- apk
- deb
- rpm
- archlinux

CONTRIBUTING.md (new file)
@@ -0,0 +1,43 @@
# Welcome to Supabase CLI contributing guide
## Release process
We release to the stable channel every two weeks.
We release to the beta channel on merge to the `main` branch.
Hotfixes are released manually. Follow these steps:
1. Create a new branch named `N.N.x` from the latest stable version. For example:
1. If stable is on `v1.2.3` and beta is on `v1.3.6`, create `1.2.x` branch.
2. If stable is on `v1.3.1` and beta is on `v1.3.6`, create `1.3.x` branch (or simply release all patch versions).
2. Cherry-pick your hotfix on top of `N.N.x` branch.
3. Run the [Release (Beta)](https://github.com/supabase/cli/actions/workflows/release-beta.yml) workflow targeting the `N.N.x` branch.
4. Verify your hotfix locally with `npx supabase@N.N.x help`
5. Edit [GitHub releases](https://github.com/supabase/cli/releases) to set your hotfix pre-release as latest stable.
After promoting the next beta version to stable, previous `N.N.x` branches may be deleted.
To revert a stable release, set a previous release to latest. This will update brew and scoop to an old version. There's no need to revert npm as it supports version pinning.
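Steps 1–2 of the hotfix flow can be rehearsed in a throwaway repo; the version numbers, file names, and commit messages below are illustrative:

```shell
#!/bin/sh
set -e
# Throwaway repo standing in for the CLI: one stable tag, then beta work, then a fix
cd "$(mktemp -d)"
git init -q .
git config user.email dev@example.com
git config user.name dev
echo "v1" > app && git add app && git commit -qm "feat: initial" && git tag v1.2.3
echo "v2" >> app && git commit -qam "feat: beta work"        # develop moves on past stable
echo "fix" > patch && git add patch && git commit -qm "fix: urgent bug"
FIX_SHA=$(git rev-parse HEAD)

# Step 1: branch N.N.x from the latest stable tag
git checkout -qb 1.2.x v1.2.3
# Step 2: cherry-pick the hotfix on top of it
git cherry-pick -x "$FIX_SHA" >/dev/null
git log --oneline
```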
## Unit testing
All new code should aim to improve [test coverage](https://coveralls.io/github/supabase/cli).
We use mock objects for unit testing code that interacts with external systems, such as
- local filesystem (via [afero](https://github.com/spf13/afero))
- Postgres database (via [pgmock](https://github.com/jackc/pgmock))
- Supabase API (via [gock](https://github.com/h2non/gock))
Wrappers and test helper methods can be found under [internal/testing](internal/testing).
Integration tests are created under [test](test). To run all tests:
```bash
go test ./... -race -v -count=1 -failfast
```
## API client
The Supabase API client is generated from OpenAPI spec. See [our guide](api/README.md) for updating the client and types.

LICENSE (new file)
@@ -0,0 +1,21 @@
MIT License
Copyright (c) 2021 Supabase, Inc. and contributors
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

README.md (new file)
@@ -0,0 +1,183 @@
# Supabase CLI (v1)
[![Coverage Status](https://coveralls.io/repos/github/supabase/cli/badge.svg?branch=main)](https://coveralls.io/github/supabase/cli?branch=main) [![Bitbucket Pipelines](https://img.shields.io/bitbucket/pipelines/supabase-cli/setup-cli/master?style=flat-square&label=Bitbucket%20Canary)](https://bitbucket.org/supabase-cli/setup-cli/pipelines) [![Gitlab Pipeline Status](https://img.shields.io/gitlab/pipeline-status/sweatybridge%2Fsetup-cli?label=Gitlab%20Canary)](https://gitlab.com/sweatybridge/setup-cli/-/pipelines)
[Supabase](https://supabase.io) is an open source Firebase alternative. We're building the features of Firebase using enterprise-grade open source tools.
This repository contains all the functionality for Supabase CLI.
- [x] Running Supabase locally
- [x] Managing database migrations
- [x] Creating and deploying Supabase Functions
- [x] Generating types directly from your database schema
- [x] Making authenticated HTTP requests to [Management API](https://supabase.com/docs/reference/api/introduction)
## Getting started
### Install the CLI
Available via [NPM](https://www.npmjs.com) as a dev dependency. To install:
```bash
npm i supabase --save-dev
```
To install the beta release channel:
```bash
npm i supabase@beta --save-dev
```
When installing with Yarn 4, you need to disable experimental fetch with the following Node.js option.
```bash
NODE_OPTIONS=--no-experimental-fetch yarn add supabase
```
> **Note**
> For Bun versions below v1.0.17, you must add `supabase` as a [trusted dependency](https://bun.sh/guides/install/trusted) before running `bun add -D supabase`.
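For example, a `package.json` excerpt with `supabase` marked as trusted might look like this (the package name and version shown are illustrative):

```json
{
  "name": "my-app",
  "trustedDependencies": ["supabase"],
  "devDependencies": {
    "supabase": "^1.0.0"
  }
}
```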
<details>
<summary><b>macOS</b></summary>
Available via [Homebrew](https://brew.sh). To install:
```sh
brew install supabase/tap/supabase
```
To install the beta release channel:
```sh
brew install supabase/tap/supabase-beta
brew link --overwrite supabase-beta
```
To upgrade:
```sh
brew upgrade supabase
```
</details>
<details>
<summary><b>Windows</b></summary>
Available via [Scoop](https://scoop.sh). To install:
```powershell
scoop bucket add supabase https://github.com/supabase/scoop-bucket.git
scoop install supabase
```
To upgrade:
```powershell
scoop update supabase
```
</details>
<details>
<summary><b>Linux</b></summary>
Available via [Homebrew](https://brew.sh) and Linux packages.
#### via Homebrew
To install:
```sh
brew install supabase/tap/supabase
```
To upgrade:
```sh
brew upgrade supabase
```
#### via Linux packages
Linux packages are provided in [Releases](https://github.com/supabase/cli/releases). To install, download the `.apk`/`.deb`/`.rpm`/`.pkg.tar.zst` file depending on your package manager and run the respective commands.
```sh
sudo apk add --allow-untrusted <...>.apk
```
```sh
sudo dpkg -i <...>.deb
```
```sh
sudo rpm -i <...>.rpm
```
```sh
sudo pacman -U <...>.pkg.tar.zst
```
</details>
<details>
<summary><b>Other Platforms</b></summary>
You can also install the CLI via [go modules](https://go.dev/ref/mod#go-install) without the help of package managers.
```sh
go install github.com/supabase/cli@latest
```
Add a symlink to the binary in `$PATH` for easier access:
```sh
ln -s "$(go env GOPATH)/bin/cli" /usr/bin/supabase
```
This method also works on other non-standard Linux distros.
</details>
<details>
<summary><b>Community Maintained Packages</b></summary>
Available via [pkgx](https://pkgx.sh/). Package script [here](https://github.com/pkgxdev/pantry/blob/main/projects/supabase.com/cli/package.yml).
To install in your working directory:
```bash
pkgx install supabase
```
Available via [Nixpkgs](https://nixos.org/). Package script [here](https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/tools/supabase-cli/default.nix).
</details>
### Run the CLI
```bash
supabase bootstrap
```
Or using npx:
```bash
npx supabase bootstrap
```
The bootstrap command will guide you through the process of setting up a Supabase project using one of the [starter](https://github.com/supabase-community/supabase-samples/blob/main/samples.json) templates.
## Docs
Command & config reference can be found [here](https://supabase.com/docs/reference/cli/about).
## Breaking changes
We follow semantic versioning for changes that directly impact CLI commands, flags, and configurations.
However, due to dependencies on other service images, we cannot guarantee that schema migrations, `seed.sql`, and generated types will always work for the same CLI major version. If you need such guarantees, we encourage you to pin a specific version of the CLI in `package.json`.
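For instance, pinning an exact version in `package.json` might look like this (the version number is illustrative):

```json
{
  "devDependencies": {
    "supabase": "1.178.2"
  }
}
```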
## Developing
To run from source:
```sh
# Go >= 1.22
go run . help
```

25
api/README.md Normal file

@@ -0,0 +1,25 @@
# Supabase OpenAPI Specification
This directory contains the OpenAPI specification for Supabase Management APIs.
It is used to automatically generate the Go [client](pkg/api/client.gen.go) and [types](pkg/api/types.gen.go).
## Updating the specification
The specification YAML is generated from our NestJS middleware. The latest release is viewable as [Swagger UI](https://api.supabase.com/api/v1).
To make a new release:
1. Update `beta.yaml` with the latest version from local development
```bash
curl -o api/beta.yaml http://127.0.0.1:8080/api/v1-yaml
```
2. Regenerate the Go client and API types
```bash
go generate
```
3. [Optional] Manually add [properties](https://swagger.io/docs/specification/basic-structure/) not generated by NestJS

6195
api/beta.yaml Normal file

File diff suppressed because it is too large

47
cmd/bans.go Normal file

@@ -0,0 +1,47 @@
package cmd
import (
"github.com/spf13/afero"
"github.com/spf13/cobra"
"github.com/supabase/cli/internal/bans/get"
"github.com/supabase/cli/internal/bans/update"
"github.com/supabase/cli/internal/utils/flags"
)
var (
bansCmd = &cobra.Command{
GroupID: groupManagementAPI,
Use: "network-bans",
Short: "Manage network bans",
Long: `Network bans are IPs that get temporarily blocked if their traffic pattern looks abusive (e.g. multiple failed auth attempts).
The subcommands help you view the current bans, and unblock IPs if desired.`,
}
dbIpsToUnban []string
bansRemoveCmd = &cobra.Command{
Use: "remove",
Short: "Remove a network ban",
RunE: func(cmd *cobra.Command, args []string) error {
return update.Run(cmd.Context(), flags.ProjectRef, dbIpsToUnban, afero.NewOsFs())
},
}
bansGetCmd = &cobra.Command{
Use: "get",
Short: "Get the current network bans",
RunE: func(cmd *cobra.Command, args []string) error {
return get.Run(cmd.Context(), flags.ProjectRef, afero.NewOsFs())
},
}
)
func init() {
bansCmd.PersistentFlags().StringVar(&flags.ProjectRef, "project-ref", "", "Project ref of the Supabase project.")
bansCmd.AddCommand(bansGetCmd)
bansRemoveCmd.Flags().StringSliceVar(&dbIpsToUnban, "db-unban-ip", []string{}, "IP to allow DB connections from.")
bansCmd.AddCommand(bansRemoveCmd)
rootCmd.AddCommand(bansCmd)
}

96
cmd/bootstrap.go Normal file

@@ -0,0 +1,96 @@
package cmd
import (
"context"
"fmt"
"os"
"os/signal"
"strings"
"github.com/go-errors/errors"
"github.com/spf13/afero"
"github.com/spf13/cobra"
"github.com/spf13/viper"
"github.com/supabase/cli/internal/bootstrap"
"github.com/supabase/cli/internal/utils"
)
var (
starter = bootstrap.StarterTemplate{
Name: "scratch",
Description: "An empty project from scratch.",
Start: "supabase start",
}
bootstrapCmd = &cobra.Command{
GroupID: groupQuickStart,
Use: "bootstrap [template]",
Short: "Bootstrap a Supabase project from a starter template",
Args: cobra.MaximumNArgs(1),
RunE: func(cmd *cobra.Command, args []string) error {
ctx, _ := signal.NotifyContext(cmd.Context(), os.Interrupt)
if !viper.IsSet("WORKDIR") {
title := fmt.Sprintf("Enter a directory to bootstrap your project (or leave blank to use %s): ", utils.Bold(utils.CurrentDirAbs))
if workdir, err := utils.NewConsole().PromptText(ctx, title); err != nil {
return err
} else {
viper.Set("WORKDIR", workdir)
}
}
client := utils.GetGitHubClient(ctx)
templates, err := bootstrap.ListSamples(ctx, client)
if err != nil {
return err
}
if len(args) > 0 {
name := args[0]
for _, t := range templates {
if strings.EqualFold(t.Name, name) {
starter = t
break
}
}
if !strings.EqualFold(starter.Name, name) {
return errors.New("Invalid template: " + name)
}
} else {
if err := promptStarterTemplate(ctx, templates); err != nil {
return err
}
}
return bootstrap.Run(ctx, starter, afero.NewOsFs())
},
}
)
func init() {
bootstrapFlags := bootstrapCmd.Flags()
bootstrapFlags.StringVarP(&dbPassword, "password", "p", "", "Password to your remote Postgres database.")
cobra.CheckErr(viper.BindPFlag("DB_PASSWORD", bootstrapFlags.Lookup("password")))
rootCmd.AddCommand(bootstrapCmd)
}
func promptStarterTemplate(ctx context.Context, templates []bootstrap.StarterTemplate) error {
items := make([]utils.PromptItem, len(templates))
for i, t := range templates {
items[i] = utils.PromptItem{
Index: i,
Summary: t.Name,
Details: t.Description,
}
}
items = append(items, utils.PromptItem{
Index: len(items),
Summary: starter.Name,
Details: starter.Description,
})
title := "Which starter template do you want to use?"
choice, err := utils.PromptChoice(ctx, title, items)
if err != nil {
return err
}
if choice.Index < len(templates) {
starter = templates[choice.Index]
}
return nil
}

230
cmd/branches.go Normal file

@@ -0,0 +1,230 @@
package cmd
import (
"context"
"fmt"
"os"
"github.com/go-errors/errors"
"github.com/spf13/afero"
"github.com/spf13/cobra"
"github.com/supabase/cli/internal/branches/create"
"github.com/supabase/cli/internal/branches/delete"
"github.com/supabase/cli/internal/branches/disable"
"github.com/supabase/cli/internal/branches/get"
"github.com/supabase/cli/internal/branches/list"
"github.com/supabase/cli/internal/branches/update"
"github.com/supabase/cli/internal/gen/keys"
"github.com/supabase/cli/internal/utils"
"github.com/supabase/cli/internal/utils/flags"
"github.com/supabase/cli/pkg/api"
)
var (
branchesCmd = &cobra.Command{
GroupID: groupManagementAPI,
Use: "branches",
Short: "Manage Supabase preview branches",
}
branchRegion = utils.EnumFlag{
Allowed: awsRegions(),
}
persistent bool
branchCreateCmd = &cobra.Command{
Use: "create [name]",
Short: "Create a preview branch",
Long: "Create a preview branch for the linked project.",
Args: cobra.MaximumNArgs(1),
RunE: func(cmd *cobra.Command, args []string) error {
var body api.CreateBranchBody
if len(args) > 0 {
body.BranchName = args[0]
}
cmdFlags := cmd.Flags()
if cmdFlags.Changed("region") {
body.Region = &branchRegion.Value
}
if cmdFlags.Changed("size") {
body.DesiredInstanceSize = (*api.DesiredInstanceSize)(&size.Value)
}
if cmdFlags.Changed("persistent") {
body.Persistent = &persistent
}
return create.Run(cmd.Context(), body, afero.NewOsFs())
},
}
branchListCmd = &cobra.Command{
Use: "list",
Short: "List all preview branches",
Long: "List all preview branches of the linked project.",
Args: cobra.NoArgs,
RunE: func(cmd *cobra.Command, args []string) error {
return list.Run(cmd.Context(), afero.NewOsFs())
},
}
branchId string
branchGetCmd = &cobra.Command{
Use: "get [branch-id]",
Short: "Retrieve details of a preview branch",
Long: "Retrieve details of the specified preview branch.",
Args: cobra.MaximumNArgs(1),
RunE: func(cmd *cobra.Command, args []string) error {
ctx := cmd.Context()
if len(args) == 0 {
if err := promptBranchId(ctx, flags.ProjectRef); err != nil {
return err
}
} else {
branchId = args[0]
}
return get.Run(ctx, branchId, afero.NewOsFs())
},
}
branchStatus = utils.EnumFlag{
Allowed: []string{
string(api.BranchResponseStatusRUNNINGMIGRATIONS),
string(api.BranchResponseStatusMIGRATIONSPASSED),
string(api.BranchResponseStatusMIGRATIONSFAILED),
string(api.BranchResponseStatusFUNCTIONSDEPLOYED),
string(api.BranchResponseStatusFUNCTIONSFAILED),
},
}
branchName string
gitBranch string
branchUpdateCmd = &cobra.Command{
Use: "update [branch-id]",
Short: "Update a preview branch",
Long: "Update a preview branch by its ID.",
Args: cobra.MaximumNArgs(1),
RunE: func(cmd *cobra.Command, args []string) error {
cmdFlags := cmd.Flags()
var body api.UpdateBranchBody
if cmdFlags.Changed("name") {
body.BranchName = &branchName
}
if cmdFlags.Changed("git-branch") {
body.GitBranch = &gitBranch
}
if cmdFlags.Changed("persistent") {
body.Persistent = &persistent
}
if cmdFlags.Changed("status") {
body.Status = (*api.UpdateBranchBodyStatus)(&branchStatus.Value)
}
ctx := cmd.Context()
if len(args) == 0 {
if err := promptBranchId(ctx, flags.ProjectRef); err != nil {
return err
}
} else {
branchId = args[0]
}
return update.Run(cmd.Context(), branchId, body, afero.NewOsFs())
},
}
branchDeleteCmd = &cobra.Command{
Use: "delete [branch-id]",
Short: "Delete a preview branch",
Long: "Delete a preview branch by its ID.",
Args: cobra.MaximumNArgs(1),
RunE: func(cmd *cobra.Command, args []string) error {
ctx := cmd.Context()
if len(args) == 0 {
if err := promptBranchId(ctx, flags.ProjectRef); err != nil {
return err
}
} else {
branchId = args[0]
}
return delete.Run(ctx, branchId)
},
}
branchDisableCmd = &cobra.Command{
Use: "disable",
Short: "Disable preview branching",
Long: "Disable preview branching for the linked project.",
RunE: func(cmd *cobra.Command, args []string) error {
return disable.Run(cmd.Context(), afero.NewOsFs())
},
}
)
func init() {
branchFlags := branchesCmd.PersistentFlags()
branchFlags.StringVar(&flags.ProjectRef, "project-ref", "", "Project ref of the Supabase project.")
createFlags := branchCreateCmd.Flags()
createFlags.Var(&branchRegion, "region", "Select a region to deploy the branch database.")
createFlags.Var(&size, "size", "Select a desired instance size for the branch database.")
createFlags.BoolVar(&persistent, "persistent", false, "Whether to create a persistent branch.")
getFlags := branchGetCmd.Flags()
getFlags.VarP(&utils.OutputFormat, "output", "o", "Output format of branch details.")
branchesCmd.AddCommand(branchCreateCmd)
branchesCmd.AddCommand(branchListCmd)
branchesCmd.AddCommand(branchGetCmd)
updateFlags := branchUpdateCmd.Flags()
updateFlags.StringVar(&branchName, "name", "", "Rename the preview branch.")
updateFlags.StringVar(&gitBranch, "git-branch", "", "Change the associated git branch.")
updateFlags.BoolVar(&persistent, "persistent", false, "Switch between ephemeral and persistent branch.")
updateFlags.Var(&branchStatus, "status", "Override the current branch status.")
branchesCmd.AddCommand(branchUpdateCmd)
branchesCmd.AddCommand(branchDeleteCmd)
branchesCmd.AddCommand(branchDisableCmd)
rootCmd.AddCommand(branchesCmd)
}
func promptBranchId(ctx context.Context, ref string) error {
resp, err := utils.GetSupabase().V1ListAllBranchesWithResponse(ctx, ref)
if err != nil {
return errors.Errorf("failed to list preview branches: %w", err)
}
if resp.JSON200 == nil {
return errors.New("Unexpected error listing preview branches: " + string(resp.Body))
}
console := utils.NewConsole()
if !console.IsTTY {
// Fallback to current git branch on GHA
gitBranch := keys.GetGitBranch(afero.NewOsFs())
title := "Enter the name of your branch: "
if len(gitBranch) > 0 {
title = fmt.Sprintf("%-2s (or leave blank to use %s): ", title, utils.Aqua(gitBranch))
}
if name, err := console.PromptText(ctx, title); err != nil {
return err
} else if len(name) > 0 {
gitBranch = name
}
if len(gitBranch) == 0 {
return errors.New("git branch cannot be empty")
}
for _, branch := range *resp.JSON200 {
if branch.Name == gitBranch {
branchId = branch.Id
return nil
}
}
return errors.Errorf("Branch not found: %s", gitBranch)
}
items := make([]utils.PromptItem, len(*resp.JSON200))
for i, branch := range *resp.JSON200 {
items[i] = utils.PromptItem{
Summary: branch.Name,
Details: branch.Id,
}
}
title := "Select a branch:"
choice, err := utils.PromptChoice(ctx, title, items)
if err == nil {
branchId = choice.Details
fmt.Fprintln(os.Stderr, "Selected branch ID:", branchId)
}
return err
}

30
cmd/config.go Normal file

@@ -0,0 +1,30 @@
package cmd
import (
"github.com/spf13/afero"
"github.com/spf13/cobra"
"github.com/supabase/cli/internal/config/push"
"github.com/supabase/cli/internal/utils/flags"
)
var (
configCmd = &cobra.Command{
GroupID: groupManagementAPI,
Use: "config",
Short: "Manage Supabase project configurations",
}
configPushCmd = &cobra.Command{
Use: "push",
Short: "Pushes local config.toml to the linked project",
RunE: func(cmd *cobra.Command, args []string) error {
return push.Run(cmd.Context(), flags.ProjectRef, afero.NewOsFs())
},
}
)
func init() {
configCmd.PersistentFlags().StringVar(&flags.ProjectRef, "project-ref", "", "Project ref of the Supabase project.")
configCmd.AddCommand(configPushCmd)
rootCmd.AddCommand(configCmd)
}

345
cmd/db.go Normal file

@@ -0,0 +1,345 @@
package cmd
import (
"fmt"
"os"
"os/signal"
"path/filepath"
"github.com/spf13/afero"
"github.com/spf13/cobra"
"github.com/spf13/viper"
"github.com/supabase/cli/internal/db/branch/create"
"github.com/supabase/cli/internal/db/branch/delete"
"github.com/supabase/cli/internal/db/branch/list"
"github.com/supabase/cli/internal/db/branch/switch_"
"github.com/supabase/cli/internal/db/diff"
"github.com/supabase/cli/internal/db/dump"
"github.com/supabase/cli/internal/db/lint"
"github.com/supabase/cli/internal/db/pull"
"github.com/supabase/cli/internal/db/push"
"github.com/supabase/cli/internal/db/remote/changes"
"github.com/supabase/cli/internal/db/remote/commit"
"github.com/supabase/cli/internal/db/reset"
"github.com/supabase/cli/internal/db/start"
"github.com/supabase/cli/internal/db/test"
"github.com/supabase/cli/internal/utils"
"github.com/supabase/cli/internal/utils/flags"
)
var (
dbCmd = &cobra.Command{
GroupID: groupLocalDev,
Use: "db",
Short: "Manage Postgres databases",
PersistentPreRunE: func(cmd *cobra.Command, args []string) error {
ctx, _ := signal.NotifyContext(cmd.Context(), os.Interrupt)
cmd.SetContext(ctx)
return cmd.Root().PersistentPreRunE(cmd, args)
},
}
dbBranchCmd = &cobra.Command{
Hidden: true,
Use: "branch",
Short: "Manage local database branches",
Long: "Manage local database branches. Each branch is associated with a separate local database. Forking remote databases is NOT supported.",
}
dbBranchCreateCmd = &cobra.Command{
Deprecated: "use \"branches create <name>\" instead.\n",
Use: "create <branch name>",
Short: "Create a branch",
Args: cobra.ExactArgs(1),
RunE: func(cmd *cobra.Command, args []string) error {
return create.Run(args[0], afero.NewOsFs())
},
}
dbBranchDeleteCmd = &cobra.Command{
Deprecated: "use \"branches delete <branch-id>\" instead.\n",
Use: "delete <branch name>",
Short: "Delete a branch",
Args: cobra.ExactArgs(1),
RunE: func(cmd *cobra.Command, args []string) error {
return delete.Run(args[0], afero.NewOsFs())
},
}
dbBranchListCmd = &cobra.Command{
Deprecated: "use \"branches list\" instead.\n",
Use: "list",
Short: "List branches",
RunE: func(cmd *cobra.Command, args []string) error {
return list.Run(afero.NewOsFs(), os.Stdout)
},
}
dbSwitchCmd = &cobra.Command{
Deprecated: "use \"branches create <name>\" instead.\n",
Use: "switch <branch name>",
Short: "Switch the active branch",
Args: cobra.ExactArgs(1),
RunE: func(cmd *cobra.Command, args []string) error {
return switch_.Run(cmd.Context(), args[0], afero.NewOsFs())
},
}
useMigra bool
usePgAdmin bool
usePgSchema bool
schema []string
file string
dbDiffCmd = &cobra.Command{
Use: "diff",
Short: "Diffs the local database for schema changes",
RunE: func(cmd *cobra.Command, args []string) error {
if usePgAdmin {
return diff.RunPgAdmin(cmd.Context(), schema, file, flags.DbConfig, afero.NewOsFs())
}
differ := diff.DiffSchemaMigra
if usePgSchema {
differ = diff.DiffPgSchema
fmt.Fprintln(os.Stderr, utils.Yellow("WARNING:"), "--use-pg-schema flag is experimental and may not include all entities, such as RLS policies, enums, and grants.")
}
return diff.Run(cmd.Context(), schema, file, flags.DbConfig, differ, afero.NewOsFs())
},
}
dataOnly bool
useCopy bool
roleOnly bool
keepComments bool
excludeTable []string
dbDumpCmd = &cobra.Command{
Use: "dump",
Short: "Dumps data or schemas from the remote database",
PreRun: func(cmd *cobra.Command, args []string) {
if useCopy || len(excludeTable) > 0 {
cobra.CheckErr(cmd.MarkFlagRequired("data-only"))
}
},
RunE: func(cmd *cobra.Command, args []string) error {
return dump.Run(cmd.Context(), file, flags.DbConfig, schema, excludeTable, dataOnly, roleOnly, keepComments, useCopy, dryRun, afero.NewOsFs())
},
PostRun: func(cmd *cobra.Command, args []string) {
if len(file) > 0 {
if absPath, err := filepath.Abs(file); err != nil {
fmt.Fprintln(os.Stderr, "Dumped schema to "+utils.Bold(file)+".")
} else {
fmt.Fprintln(os.Stderr, "Dumped schema to "+utils.Bold(absPath)+".")
}
}
},
}
dryRun bool
includeAll bool
includeRoles bool
includeSeed bool
dbPushCmd = &cobra.Command{
Use: "push",
Short: "Push new migrations to the remote database",
RunE: func(cmd *cobra.Command, args []string) error {
return push.Run(cmd.Context(), dryRun, includeAll, includeRoles, includeSeed, flags.DbConfig, afero.NewOsFs())
},
}
dbPullCmd = &cobra.Command{
Use: "pull [migration name]",
Short: "Pull schema from the remote database",
RunE: func(cmd *cobra.Command, args []string) error {
name := "remote_schema"
if len(args) > 0 {
name = args[0]
}
return pull.Run(cmd.Context(), schema, flags.DbConfig, name, afero.NewOsFs())
},
PostRun: func(cmd *cobra.Command, args []string) {
fmt.Println("Finished " + utils.Aqua("supabase db pull") + ".")
},
}
dbRemoteCmd = &cobra.Command{
Hidden: true,
Use: "remote",
Short: "Manage remote databases",
}
dbRemoteChangesCmd = &cobra.Command{
Deprecated: "use \"db diff --use-migra --linked\" instead.\n",
Use: "changes",
Short: "Show changes on the remote database",
Long: "Show changes on the remote database since last migration.",
RunE: func(cmd *cobra.Command, args []string) error {
return changes.Run(cmd.Context(), schema, flags.DbConfig, afero.NewOsFs())
},
}
dbRemoteCommitCmd = &cobra.Command{
Deprecated: "use \"db pull\" instead.\n",
Use: "commit",
Short: "Commit remote changes as a new migration",
RunE: func(cmd *cobra.Command, args []string) error {
return commit.Run(cmd.Context(), schema, flags.DbConfig, afero.NewOsFs())
},
}
noSeed bool
dbResetCmd = &cobra.Command{
Use: "reset",
Short: "Resets the local database to current migrations",
RunE: func(cmd *cobra.Command, args []string) error {
if noSeed {
utils.Config.Db.Seed.Enabled = false
}
return reset.Run(cmd.Context(), migrationVersion, flags.DbConfig, afero.NewOsFs())
},
}
level = utils.EnumFlag{
Allowed: lint.AllowedLevels,
Value: lint.AllowedLevels[0],
}
lintFailOn = utils.EnumFlag{
Allowed: append([]string{"none"}, lint.AllowedLevels...),
Value: "none",
}
dbLintCmd = &cobra.Command{
Use: "lint",
Short: "Checks local database for typing error",
RunE: func(cmd *cobra.Command, args []string) error {
return lint.Run(cmd.Context(), schema, level.Value, lintFailOn.Value, flags.DbConfig, afero.NewOsFs())
},
}
fromBackup string
dbStartCmd = &cobra.Command{
Use: "start",
Short: "Starts local Postgres database",
RunE: func(cmd *cobra.Command, args []string) error {
return start.Run(cmd.Context(), fromBackup, afero.NewOsFs())
},
}
dbTestCmd = &cobra.Command{
Hidden: true,
Use: "test [path] ...",
Short: "Tests local database with pgTAP",
RunE: func(cmd *cobra.Command, args []string) error {
return test.Run(cmd.Context(), args, flags.DbConfig, afero.NewOsFs())
},
}
)
func init() {
// Build branch command
dbBranchCmd.AddCommand(dbBranchCreateCmd)
dbBranchCmd.AddCommand(dbBranchDeleteCmd)
dbBranchCmd.AddCommand(dbBranchListCmd)
dbBranchCmd.AddCommand(dbSwitchCmd)
dbCmd.AddCommand(dbBranchCmd)
// Build diff command
diffFlags := dbDiffCmd.Flags()
diffFlags.BoolVar(&useMigra, "use-migra", true, "Use migra to generate schema diff.")
diffFlags.BoolVar(&usePgAdmin, "use-pgadmin", false, "Use pgAdmin to generate schema diff.")
diffFlags.BoolVar(&usePgSchema, "use-pg-schema", false, "Use pg-schema-diff to generate schema diff.")
dbDiffCmd.MarkFlagsMutuallyExclusive("use-migra", "use-pgadmin")
diffFlags.String("db-url", "", "Diffs against the database specified by the connection string (must be percent-encoded).")
diffFlags.Bool("linked", false, "Diffs local migration files against the linked project.")
diffFlags.Bool("local", true, "Diffs local migration files against the local database.")
dbDiffCmd.MarkFlagsMutuallyExclusive("db-url", "linked", "local")
diffFlags.StringVarP(&file, "file", "f", "", "Saves schema diff to a new migration file.")
diffFlags.StringSliceVarP(&schema, "schema", "s", []string{}, "Comma separated list of schema to include.")
dbCmd.AddCommand(dbDiffCmd)
// Build dump command
dumpFlags := dbDumpCmd.Flags()
dumpFlags.BoolVar(&dryRun, "dry-run", false, "Prints the pg_dump script that would be executed.")
dumpFlags.BoolVar(&dataOnly, "data-only", false, "Dumps only data records.")
dumpFlags.BoolVar(&useCopy, "use-copy", false, "Uses copy statements in place of inserts.")
dumpFlags.StringSliceVarP(&excludeTable, "exclude", "x", []string{}, "List of schema.tables to exclude from data-only dump.")
dumpFlags.BoolVar(&roleOnly, "role-only", false, "Dumps only cluster roles.")
dbDumpCmd.MarkFlagsMutuallyExclusive("role-only", "data-only")
dumpFlags.BoolVar(&keepComments, "keep-comments", false, "Keeps commented lines from pg_dump output.")
dbDumpCmd.MarkFlagsMutuallyExclusive("keep-comments", "data-only")
dumpFlags.StringVarP(&file, "file", "f", "", "File path to save the dumped contents.")
dumpFlags.String("db-url", "", "Dumps from the database specified by the connection string (must be percent-encoded).")
dumpFlags.Bool("linked", true, "Dumps from the linked project.")
dumpFlags.Bool("local", false, "Dumps from the local database.")
dbDumpCmd.MarkFlagsMutuallyExclusive("db-url", "linked", "local")
dumpFlags.StringVarP(&dbPassword, "password", "p", "", "Password to your remote Postgres database.")
cobra.CheckErr(viper.BindPFlag("DB_PASSWORD", dumpFlags.Lookup("password")))
dumpFlags.StringSliceVarP(&schema, "schema", "s", []string{}, "Comma separated list of schema to include.")
dbDumpCmd.MarkFlagsMutuallyExclusive("schema", "role-only")
dbCmd.AddCommand(dbDumpCmd)
// Build push command
pushFlags := dbPushCmd.Flags()
pushFlags.BoolVar(&includeAll, "include-all", false, "Include all migrations not found on remote history table.")
pushFlags.BoolVar(&includeRoles, "include-roles", false, "Include custom roles from "+utils.CustomRolesPath+".")
pushFlags.BoolVar(&includeSeed, "include-seed", false, "Include seed data from your config.")
pushFlags.BoolVar(&dryRun, "dry-run", false, "Print the migrations that would be applied, but don't actually apply them.")
pushFlags.String("db-url", "", "Pushes to the database specified by the connection string (must be percent-encoded).")
pushFlags.Bool("linked", true, "Pushes to the linked project.")
pushFlags.Bool("local", false, "Pushes to the local database.")
dbPushCmd.MarkFlagsMutuallyExclusive("db-url", "linked", "local")
pushFlags.StringVarP(&dbPassword, "password", "p", "", "Password to your remote Postgres database.")
cobra.CheckErr(viper.BindPFlag("DB_PASSWORD", pushFlags.Lookup("password")))
dbCmd.AddCommand(dbPushCmd)
// Build pull command
pullFlags := dbPullCmd.Flags()
pullFlags.StringSliceVarP(&schema, "schema", "s", []string{}, "Comma separated list of schema to include.")
pullFlags.String("db-url", "", "Pulls from the database specified by the connection string (must be percent-encoded).")
pullFlags.Bool("linked", true, "Pulls from the linked project.")
pullFlags.Bool("local", false, "Pulls from the local database.")
dbPullCmd.MarkFlagsMutuallyExclusive("db-url", "linked", "local")
pullFlags.StringVarP(&dbPassword, "password", "p", "", "Password to your remote Postgres database.")
cobra.CheckErr(viper.BindPFlag("DB_PASSWORD", pullFlags.Lookup("password")))
dbCmd.AddCommand(dbPullCmd)
// Build remote command
remoteFlags := dbRemoteCmd.PersistentFlags()
remoteFlags.String("db-url", "", "Connect using the specified Postgres URL (must be percent-encoded).")
remoteFlags.StringVarP(&dbPassword, "password", "p", "", "Password to your remote Postgres database.")
cobra.CheckErr(viper.BindPFlag("DB_PASSWORD", remoteFlags.Lookup("password")))
remoteFlags.StringSliceVarP(&schema, "schema", "s", []string{}, "Comma separated list of schema to include.")
dbRemoteCmd.AddCommand(dbRemoteChangesCmd)
dbRemoteCmd.AddCommand(dbRemoteCommitCmd)
dbCmd.AddCommand(dbRemoteCmd)
// Build reset command
resetFlags := dbResetCmd.Flags()
resetFlags.String("db-url", "", "Resets the database specified by the connection string (must be percent-encoded).")
resetFlags.Bool("linked", false, "Resets the linked project with local migrations.")
resetFlags.Bool("local", true, "Resets the local database with local migrations.")
resetFlags.BoolVar(&noSeed, "no-seed", false, "Skip running the seed script after reset.")
dbResetCmd.MarkFlagsMutuallyExclusive("db-url", "linked", "local")
resetFlags.StringVar(&migrationVersion, "version", "", "Reset up to the specified version.")
dbCmd.AddCommand(dbResetCmd)
// Build lint command
lintFlags := dbLintCmd.Flags()
lintFlags.String("db-url", "", "Lints the database specified by the connection string (must be percent-encoded).")
lintFlags.Bool("linked", false, "Lints the linked project for schema errors.")
lintFlags.Bool("local", true, "Lints the local database for schema errors.")
dbLintCmd.MarkFlagsMutuallyExclusive("db-url", "linked", "local")
lintFlags.StringSliceVarP(&schema, "schema", "s", []string{}, "Comma separated list of schema to include.")
lintFlags.Var(&level, "level", "Error level to emit.")
lintFlags.Var(&lintFailOn, "fail-on", "Error level to exit with non-zero status.")
dbCmd.AddCommand(dbLintCmd)
// Build start command
startFlags := dbStartCmd.Flags()
startFlags.StringVar(&fromBackup, "from-backup", "", "Path to a logical backup file.")
dbCmd.AddCommand(dbStartCmd)
// Build test command
dbCmd.AddCommand(dbTestCmd)
testFlags := dbTestCmd.Flags()
testFlags.String("db-url", "", "Tests the database specified by the connection string (must be percent-encoded).")
testFlags.Bool("linked", false, "Runs pgTAP tests on the linked project.")
testFlags.Bool("local", true, "Runs pgTAP tests on the local database.")
dbTestCmd.MarkFlagsMutuallyExclusive("db-url", "linked", "local")
rootCmd.AddCommand(dbCmd)
}

89
cmd/domains.go Normal file

@@ -0,0 +1,89 @@
package cmd
import (
"github.com/spf13/afero"
"github.com/spf13/cobra"
"github.com/supabase/cli/internal/hostnames/activate"
"github.com/supabase/cli/internal/hostnames/create"
"github.com/supabase/cli/internal/hostnames/delete"
"github.com/supabase/cli/internal/hostnames/get"
"github.com/supabase/cli/internal/hostnames/reverify"
"github.com/supabase/cli/internal/utils/flags"
)
var (
customHostnamesCmd = &cobra.Command{
GroupID: groupManagementAPI,
Use: "domains",
Short: "Manage custom domain names for Supabase projects",
Long: `Manage custom domain names for Supabase projects.
Use of custom domains and vanity subdomains is mutually exclusive.
`,
}
rawOutput bool
customHostname string
customHostnamesCreateCmd = &cobra.Command{
Use: "create",
Short: "Create a custom hostname",
Long: `Create a custom hostname for your Supabase project.
Expects your custom hostname to have a CNAME record to your Supabase project's subdomain.`,
RunE: func(cmd *cobra.Command, args []string) error {
return create.Run(cmd.Context(), flags.ProjectRef, customHostname, rawOutput, afero.NewOsFs())
},
}
customHostnamesGetCmd = &cobra.Command{
Use: "get",
Short: "Get the current custom hostname config",
Long: "Retrieve the custom hostname config for your project, as stored in the Supabase platform.",
RunE: func(cmd *cobra.Command, args []string) error {
return get.Run(cmd.Context(), flags.ProjectRef, rawOutput, afero.NewOsFs())
},
}
customHostnamesReverifyCmd = &cobra.Command{
Use: "reverify",
Short: "Re-verify the custom hostname config for your project",
RunE: func(cmd *cobra.Command, args []string) error {
return reverify.Run(cmd.Context(), flags.ProjectRef, rawOutput, afero.NewOsFs())
},
}
customHostnamesActivateCmd = &cobra.Command{
Use: "activate",
Short: "Activate the custom hostname for a project",
Long: `Activates the custom hostname configuration for a project.
This reconfigures your Supabase project to respond to requests on your custom hostname.
After the custom hostname is activated, your project's auth services will no longer function on the Supabase-provisioned subdomain.`,
RunE: func(cmd *cobra.Command, args []string) error {
return activate.Run(cmd.Context(), flags.ProjectRef, rawOutput, afero.NewOsFs())
},
}
customHostnamesDeleteCmd = &cobra.Command{
Use: "delete",
Short: "Deletes the custom hostname config for your project",
RunE: func(cmd *cobra.Command, args []string) error {
return delete.Run(cmd.Context(), flags.ProjectRef, afero.NewOsFs())
},
}
)
func init() {
persistentFlags := customHostnamesCmd.PersistentFlags()
persistentFlags.StringVar(&flags.ProjectRef, "project-ref", "", "Project ref of the Supabase project.")
persistentFlags.BoolVar(&rawOutput, "include-raw-output", false, "Include raw output (useful for debugging).")
customHostnamesCreateCmd.Flags().StringVar(&customHostname, "custom-hostname", "", "The custom hostname to use for your Supabase project.")
customHostnamesCmd.AddCommand(customHostnamesGetCmd)
customHostnamesCmd.AddCommand(customHostnamesCreateCmd)
customHostnamesCmd.AddCommand(customHostnamesReverifyCmd)
customHostnamesCmd.AddCommand(customHostnamesActivateCmd)
customHostnamesCmd.AddCommand(customHostnamesDeleteCmd)
rootCmd.AddCommand(customHostnamesCmd)
}
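
A typical custom-hostname rollout chains these subcommands: create, re-verify after updating DNS, then activate. The parent command's `Use` string is defined outside this excerpt; the sketch below assumes it is registered as `supabase domains`, as in released CLI versions, and uses a placeholder project ref and hostname:

```shell
# Placeholder values; substitute your own project ref and hostname.
PROJECT_REF="abcdefghijklmnop"

# 1. Register the hostname and print the DNS records to configure.
supabase domains create --project-ref "$PROJECT_REF" \
  --custom-hostname api.example.com

# 2. After updating DNS records, re-check verification status.
supabase domains reverify --project-ref "$PROJECT_REF"

# 3. Activate once verified. As the Long description warns, auth services
#    stop responding on the Supabase-provisioned subdomain afterwards.
supabase domains activate --project-ref "$PROJECT_REF"
```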

cmd/encryption.go Normal file

@ -0,0 +1,41 @@
package cmd
import (
"os"
"github.com/spf13/cobra"
"github.com/supabase/cli/internal/encryption/get"
"github.com/supabase/cli/internal/encryption/update"
"github.com/supabase/cli/internal/utils/flags"
)
var (
encryptionCmd = &cobra.Command{
GroupID: groupManagementAPI,
Use: "encryption",
Short: "Manage encryption keys of Supabase projects",
}
rootKeyGetCmd = &cobra.Command{
Use: "get-root-key",
Short: "Get the root encryption key of a Supabase project",
RunE: func(cmd *cobra.Command, args []string) error {
return get.Run(cmd.Context(), flags.ProjectRef)
},
}
rootKeyUpdateCmd = &cobra.Command{
Use: "update-root-key",
Short: "Update root encryption key of a Supabase project",
RunE: func(cmd *cobra.Command, args []string) error {
return update.Run(cmd.Context(), flags.ProjectRef, os.Stdin)
},
}
)
func init() {
encryptionCmd.PersistentFlags().StringVar(&flags.ProjectRef, "project-ref", "", "Project ref of the Supabase project.")
encryptionCmd.AddCommand(rootKeyUpdateCmd)
encryptionCmd.AddCommand(rootKeyGetCmd)
rootCmd.AddCommand(encryptionCmd)
}

cmd/functions.go Normal file

@ -0,0 +1,162 @@
package cmd
import (
"fmt"
"github.com/go-errors/errors"
"github.com/spf13/afero"
"github.com/spf13/cobra"
"github.com/supabase/cli/internal/functions/delete"
"github.com/supabase/cli/internal/functions/deploy"
"github.com/supabase/cli/internal/functions/download"
"github.com/supabase/cli/internal/functions/list"
new_ "github.com/supabase/cli/internal/functions/new"
"github.com/supabase/cli/internal/functions/serve"
"github.com/supabase/cli/internal/utils"
"github.com/supabase/cli/internal/utils/flags"
"github.com/supabase/cli/pkg/cast"
)
var (
functionsCmd = &cobra.Command{
GroupID: groupManagementAPI,
Use: "functions",
Short: "Manage Supabase Edge functions",
}
functionsListCmd = &cobra.Command{
Use: "list",
Short: "List all Functions in Supabase",
Long: "List all Functions in the linked Supabase project.",
RunE: func(cmd *cobra.Command, args []string) error {
return list.Run(cmd.Context(), flags.ProjectRef, afero.NewOsFs())
},
}
functionsDeleteCmd = &cobra.Command{
Use: "delete <Function name>",
Short: "Delete a Function from Supabase",
Long: "Delete a Function from the linked Supabase project. This does NOT remove the Function locally.",
Args: cobra.ExactArgs(1),
RunE: func(cmd *cobra.Command, args []string) error {
return delete.Run(cmd.Context(), args[0], flags.ProjectRef, afero.NewOsFs())
},
}
functionsDownloadCmd = &cobra.Command{
Use: "download <Function name>",
Short: "Download a Function from Supabase",
Long: "Download the source code for a Function from the linked Supabase project.",
Args: cobra.ExactArgs(1),
RunE: func(cmd *cobra.Command, args []string) error {
return download.Run(cmd.Context(), args[0], flags.ProjectRef, useLegacyBundle, afero.NewOsFs())
},
}
useApi bool
useDocker bool
useLegacyBundle bool
noVerifyJWT = new(bool)
importMapPath string
functionsDeployCmd = &cobra.Command{
Use: "deploy [Function name]",
Short: "Deploy a Function to Supabase",
Long: "Deploy a Function to the linked Supabase project.",
RunE: func(cmd *cobra.Command, args []string) error {
// Fall back to config if the user did not set the flag.
if !cmd.Flags().Changed("no-verify-jwt") {
noVerifyJWT = nil
}
if useApi {
useDocker = false
} else if maxJobs > 1 {
return errors.New("--jobs must be used together with --use-api")
}
return deploy.Run(cmd.Context(), args, useDocker, noVerifyJWT, importMapPath, maxJobs, afero.NewOsFs())
},
}
functionsNewCmd = &cobra.Command{
Use: "new <Function name>",
Short: "Create a new Function locally",
Args: cobra.ExactArgs(1),
PersistentPreRunE: func(cmd *cobra.Command, args []string) error {
cmd.GroupID = groupLocalDev
return cmd.Root().PersistentPreRunE(cmd, args)
},
RunE: func(cmd *cobra.Command, args []string) error {
return new_.Run(cmd.Context(), args[0], afero.NewOsFs())
},
}
envFilePath string
inspectBrk bool
inspectMode = utils.EnumFlag{
Allowed: []string{
string(serve.InspectModeRun),
string(serve.InspectModeBrk),
string(serve.InspectModeWait),
},
}
runtimeOption serve.RuntimeOption
functionsServeCmd = &cobra.Command{
Use: "serve",
Short: "Serve all Functions locally",
PersistentPreRunE: func(cmd *cobra.Command, args []string) error {
cmd.GroupID = groupLocalDev
return cmd.Root().PersistentPreRunE(cmd, args)
},
RunE: func(cmd *cobra.Command, args []string) error {
// Fall back to config if the user did not set the flag.
if !cmd.Flags().Changed("no-verify-jwt") {
noVerifyJWT = nil
}
if len(inspectMode.Value) > 0 {
runtimeOption.InspectMode = cast.Ptr(serve.InspectMode(inspectMode.Value))
} else if inspectBrk {
runtimeOption.InspectMode = cast.Ptr(serve.InspectModeBrk)
}
if runtimeOption.InspectMode == nil && runtimeOption.InspectMain {
return fmt.Errorf("--inspect-main must be used together with one of these flags: [--inspect --inspect-mode]")
}
return serve.Run(cmd.Context(), envFilePath, noVerifyJWT, importMapPath, runtimeOption, afero.NewOsFs())
},
}
)
func init() {
functionsListCmd.Flags().StringVar(&flags.ProjectRef, "project-ref", "", "Project ref of the Supabase project.")
functionsDeleteCmd.Flags().StringVar(&flags.ProjectRef, "project-ref", "", "Project ref of the Supabase project.")
deployFlags := functionsDeployCmd.Flags()
deployFlags.BoolVar(&useApi, "use-api", false, "Use Management API to bundle functions.")
deployFlags.BoolVar(&useDocker, "use-docker", true, "Use Docker to bundle functions.")
deployFlags.BoolVar(&useLegacyBundle, "legacy-bundle", false, "Use legacy bundling mechanism.")
functionsDeployCmd.MarkFlagsMutuallyExclusive("use-api", "use-docker", "legacy-bundle")
cobra.CheckErr(deployFlags.MarkHidden("legacy-bundle"))
deployFlags.UintVarP(&maxJobs, "jobs", "j", 1, "Maximum number of parallel jobs.")
deployFlags.BoolVar(noVerifyJWT, "no-verify-jwt", false, "Disable JWT verification for the Function.")
deployFlags.StringVar(&flags.ProjectRef, "project-ref", "", "Project ref of the Supabase project.")
deployFlags.StringVar(&importMapPath, "import-map", "", "Path to import map file.")
functionsServeCmd.Flags().BoolVar(noVerifyJWT, "no-verify-jwt", false, "Disable JWT verification for the Function.")
functionsServeCmd.Flags().StringVar(&envFilePath, "env-file", "", "Path to an env file to be populated to the Function environment.")
functionsServeCmd.Flags().StringVar(&importMapPath, "import-map", "", "Path to import map file.")
functionsServeCmd.Flags().BoolVar(&inspectBrk, "inspect", false, "Alias of --inspect-mode brk.")
functionsServeCmd.Flags().Var(&inspectMode, "inspect-mode", "Activate inspector capability for debugging.")
functionsServeCmd.Flags().BoolVar(&runtimeOption.InspectMain, "inspect-main", false, "Allow inspecting the main worker.")
functionsServeCmd.MarkFlagsMutuallyExclusive("inspect", "inspect-mode")
functionsServeCmd.Flags().Bool("all", true, "Serve all Functions.")
cobra.CheckErr(functionsServeCmd.Flags().MarkHidden("all"))
functionsDownloadCmd.Flags().StringVar(&flags.ProjectRef, "project-ref", "", "Project ref of the Supabase project.")
functionsDownloadCmd.Flags().BoolVar(&useLegacyBundle, "legacy-bundle", false, "Use legacy bundling mechanism.")
functionsCmd.AddCommand(functionsListCmd)
functionsCmd.AddCommand(functionsDeleteCmd)
functionsCmd.AddCommand(functionsDeployCmd)
functionsCmd.AddCommand(functionsNewCmd)
functionsCmd.AddCommand(functionsServeCmd)
functionsCmd.AddCommand(functionsDownloadCmd)
rootCmd.AddCommand(functionsCmd)
}
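
The deploy handler above only permits `--jobs` greater than one when `--use-api` is set, since Docker bundling (the default) runs serially. A usage sketch with a placeholder project ref:

```shell
# Bundle via the Management API and deploy up to 4 functions in parallel.
supabase functions deploy --use-api --jobs 4 --project-ref abcdefghijklmnop

# Default Docker bundling rejects parallel jobs, per the check in RunE:
#   supabase functions deploy --jobs 4
#   => error: --jobs must be used together with --use-api
```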

cmd/gen.go Normal file

@ -0,0 +1,121 @@
package cmd
import (
"os"
"os/signal"
env "github.com/Netflix/go-env"
"github.com/go-errors/errors"
"github.com/spf13/afero"
"github.com/spf13/cobra"
"github.com/supabase/cli/internal/gen/keys"
"github.com/supabase/cli/internal/gen/types"
"github.com/supabase/cli/internal/utils"
"github.com/supabase/cli/internal/utils/flags"
)
var (
genCmd = &cobra.Command{
GroupID: groupLocalDev,
Use: "gen",
Short: "Run code generation tools",
}
keyNames keys.CustomName
keyOutput = utils.EnumFlag{
Allowed: []string{
utils.OutputEnv,
utils.OutputJson,
utils.OutputToml,
utils.OutputYaml,
},
Value: utils.OutputEnv,
}
genKeysCmd = &cobra.Command{
Use: "keys",
Short: "Generate keys for preview branch",
PersistentPreRunE: func(cmd *cobra.Command, args []string) error {
es, err := env.EnvironToEnvSet(override)
if err != nil {
return err
}
if err := env.Unmarshal(es, &keyNames); err != nil {
return err
}
cmd.GroupID = groupManagementAPI
return cmd.Root().PersistentPreRunE(cmd, args)
},
RunE: func(cmd *cobra.Command, args []string) error {
return keys.Run(cmd.Context(), flags.ProjectRef, keyOutput.Value, keyNames, afero.NewOsFs())
},
}
lang = utils.EnumFlag{
Allowed: []string{
types.LangTypescript,
types.LangGo,
types.LangSwift,
},
Value: types.LangTypescript,
}
postgrestV9Compat bool
swiftAccessControl = utils.EnumFlag{
Allowed: []string{
types.SwiftInternalAccessControl,
types.SwiftPublicAccessControl,
},
Value: types.SwiftInternalAccessControl,
}
genTypesCmd = &cobra.Command{
Use: "types",
Short: "Generate types from Postgres schema",
PreRunE: func(cmd *cobra.Command, args []string) error {
if postgrestV9Compat && !cmd.Flags().Changed("db-url") {
return errors.New("--postgrest-v9-compat must be used together with --db-url")
}
// Legacy commands specify language using an arg, e.g. gen types typescript
if len(args) > 0 && args[0] != types.LangTypescript && !cmd.Flags().Changed("lang") {
return errors.New("use --lang flag to specify the typegen language")
}
return nil
},
RunE: func(cmd *cobra.Command, args []string) error {
ctx, _ := signal.NotifyContext(cmd.Context(), os.Interrupt)
if flags.DbConfig.Host == "" {
// If no flag is specified, prompt for project id.
if err := flags.ParseProjectRef(ctx, afero.NewMemMapFs()); errors.Is(err, utils.ErrNotLinked) {
return errors.New("Must specify one of --local, --linked, --project-id, or --db-url")
} else if err != nil {
return err
}
}
return types.Run(ctx, flags.ProjectRef, flags.DbConfig, lang.Value, schema, postgrestV9Compat, swiftAccessControl.Value, afero.NewOsFs())
},
Example: ` supabase gen types --local
supabase gen types --linked --lang=go
supabase gen types --project-id abc-def-123 --schema public --schema private
supabase gen types --db-url 'postgresql://...' --schema public --schema auth`,
}
)
func init() {
typeFlags := genTypesCmd.Flags()
typeFlags.Bool("local", false, "Generate types from the local dev database.")
typeFlags.Bool("linked", false, "Generate types from the linked project.")
typeFlags.String("db-url", "", "Generate types from a database url.")
typeFlags.StringVar(&flags.ProjectRef, "project-id", "", "Generate types from a project ID.")
genTypesCmd.MarkFlagsMutuallyExclusive("local", "linked", "project-id", "db-url")
typeFlags.Var(&lang, "lang", "Output language of the generated types.")
typeFlags.StringSliceVarP(&schema, "schema", "s", []string{}, "Comma separated list of schema to include.")
typeFlags.Var(&swiftAccessControl, "swift-access-control", "Access control for Swift generated types.")
typeFlags.BoolVar(&postgrestV9Compat, "postgrest-v9-compat", false, "Generate types compatible with PostgREST v9 and below. Only use together with --db-url.")
genCmd.AddCommand(genTypesCmd)
keyFlags := genKeysCmd.Flags()
keyFlags.StringVar(&flags.ProjectRef, "project-ref", "", "Project ref of the Supabase project.")
keyFlags.VarP(&keyOutput, "output", "o", "Output format of key variables.")
keyFlags.StringSliceVar(&override, "override-name", []string{}, "Override specific variable names.")
genCmd.AddCommand(genKeysCmd)
rootCmd.AddCommand(genCmd)
}
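
`gen keys` writes preview-branch credentials in the format selected by `--output` (env by default). A minimal sketch with a placeholder project ref:

```shell
# Default output is env format; request TOML instead.
supabase gen keys --project-ref abcdefghijklmnop --output toml
```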

cmd/generateFigSpec.go Normal file

@ -0,0 +1,9 @@
package cmd
import (
generateFigSpec "github.com/withfig/autocomplete-tools/packages/cobra"
)
func init() {
rootCmd.AddCommand(generateFigSpec.NewCmdGenFigSpec())
}

cmd/init.go Normal file

@ -0,0 +1,63 @@
package cmd
import (
"fmt"
"os"
"os/signal"
"github.com/spf13/afero"
"github.com/spf13/cobra"
"github.com/spf13/viper"
_init "github.com/supabase/cli/internal/init"
"github.com/supabase/cli/internal/utils"
)
var (
createVscodeSettings = new(bool)
createIntellijSettings = new(bool)
initParams = utils.InitParams{}
initCmd = &cobra.Command{
GroupID: groupLocalDev,
Use: "init",
Short: "Initialize a local project",
PersistentPreRunE: func(cmd *cobra.Command, args []string) error {
if !viper.IsSet("WORKDIR") {
// Prevents recursing to parent directory
viper.Set("WORKDIR", ".")
}
return cmd.Root().PersistentPreRunE(cmd, args)
},
PreRun: func(cmd *cobra.Command, args []string) {
if initParams.UseOrioleDB {
cobra.CheckErr(cmd.MarkFlagRequired("experimental"))
}
},
RunE: func(cmd *cobra.Command, args []string) error {
fsys := afero.NewOsFs()
if !cmd.Flags().Changed("with-vscode-settings") && !cmd.Flags().Changed("with-vscode-workspace") {
createVscodeSettings = nil
}
if !cmd.Flags().Changed("with-intellij-settings") {
createIntellijSettings = nil
}
ctx, _ := signal.NotifyContext(cmd.Context(), os.Interrupt)
return _init.Run(ctx, fsys, createVscodeSettings, createIntellijSettings, initParams)
},
PostRun: func(cmd *cobra.Command, args []string) {
fmt.Println("Finished " + utils.Aqua("supabase init") + ".")
},
}
)
func init() {
flags := initCmd.Flags()
flags.BoolVar(createVscodeSettings, "with-vscode-workspace", false, "Generate VS Code workspace.")
cobra.CheckErr(flags.MarkHidden("with-vscode-workspace"))
flags.BoolVar(createVscodeSettings, "with-vscode-settings", false, "Generate VS Code settings for Deno.")
flags.BoolVar(createIntellijSettings, "with-intellij-settings", false, "Generate IntelliJ IDEA settings for Deno.")
flags.BoolVar(&initParams.UseOrioleDB, "use-orioledb", false, "Use OrioleDB storage engine for Postgres.")
flags.BoolVar(&initParams.Overwrite, "force", false, "Overwrite existing "+utils.ConfigPath+".")
rootCmd.AddCommand(initCmd)
}

cmd/inspect.go Normal file

@ -0,0 +1,265 @@
package cmd
import (
"fmt"
"os"
"os/signal"
"path/filepath"
"github.com/spf13/afero"
"github.com/spf13/cobra"
"github.com/supabase/cli/internal/inspect/bloat"
"github.com/supabase/cli/internal/inspect/blocking"
"github.com/supabase/cli/internal/inspect/cache"
"github.com/supabase/cli/internal/utils"
"github.com/supabase/cli/internal/utils/flags"
"github.com/supabase/cli/internal/inspect"
"github.com/supabase/cli/internal/inspect/calls"
"github.com/supabase/cli/internal/inspect/index_sizes"
"github.com/supabase/cli/internal/inspect/index_usage"
"github.com/supabase/cli/internal/inspect/locks"
"github.com/supabase/cli/internal/inspect/long_running_queries"
"github.com/supabase/cli/internal/inspect/outliers"
"github.com/supabase/cli/internal/inspect/replication_slots"
"github.com/supabase/cli/internal/inspect/role_configs"
"github.com/supabase/cli/internal/inspect/role_connections"
"github.com/supabase/cli/internal/inspect/seq_scans"
"github.com/supabase/cli/internal/inspect/table_index_sizes"
"github.com/supabase/cli/internal/inspect/table_record_counts"
"github.com/supabase/cli/internal/inspect/table_sizes"
"github.com/supabase/cli/internal/inspect/total_index_size"
"github.com/supabase/cli/internal/inspect/total_table_sizes"
"github.com/supabase/cli/internal/inspect/unused_indexes"
"github.com/supabase/cli/internal/inspect/vacuum_stats"
)
var (
inspectCmd = &cobra.Command{
GroupID: groupLocalDev,
Use: "inspect",
Short: "Tools to inspect your Supabase project",
}
inspectDBCmd = &cobra.Command{
Use: "db",
Short: "Tools to inspect your Supabase database",
PersistentPreRunE: func(cmd *cobra.Command, args []string) error {
ctx, _ := signal.NotifyContext(cmd.Context(), os.Interrupt)
cmd.SetContext(ctx)
return cmd.Root().PersistentPreRunE(cmd, args)
},
}
inspectCacheHitCmd = &cobra.Command{
Use: "cache-hit",
Short: "Show cache hit rates for tables and indices",
RunE: func(cmd *cobra.Command, args []string) error {
return cache.Run(cmd.Context(), flags.DbConfig, afero.NewOsFs())
},
}
inspectReplicationSlotsCmd = &cobra.Command{
Use: "replication-slots",
Short: "Show information about replication slots on the database",
RunE: func(cmd *cobra.Command, args []string) error {
return replication_slots.Run(cmd.Context(), flags.DbConfig, afero.NewOsFs())
},
}
inspectIndexUsageCmd = &cobra.Command{
Use: "index-usage",
Short: "Show information about the efficiency of indexes",
RunE: func(cmd *cobra.Command, args []string) error {
return index_usage.Run(cmd.Context(), flags.DbConfig, afero.NewOsFs())
},
}
inspectLocksCmd = &cobra.Command{
Use: "locks",
Short: "Show queries which have taken out an exclusive lock on a relation",
RunE: func(cmd *cobra.Command, args []string) error {
return locks.Run(cmd.Context(), flags.DbConfig, afero.NewOsFs())
},
}
inspectBlockingCmd = &cobra.Command{
Use: "blocking",
Short: "Show queries that are holding locks and the queries that are waiting for them to be released",
RunE: func(cmd *cobra.Command, args []string) error {
return blocking.Run(cmd.Context(), flags.DbConfig, afero.NewOsFs())
},
}
inspectOutliersCmd = &cobra.Command{
Use: "outliers",
Short: "Show queries from pg_stat_statements ordered by total execution time",
RunE: func(cmd *cobra.Command, args []string) error {
return outliers.Run(cmd.Context(), flags.DbConfig, afero.NewOsFs())
},
}
inspectCallsCmd = &cobra.Command{
Use: "calls",
Short: "Show queries from pg_stat_statements ordered by total times called",
RunE: func(cmd *cobra.Command, args []string) error {
return calls.Run(cmd.Context(), flags.DbConfig, afero.NewOsFs())
},
}
inspectTotalIndexSizeCmd = &cobra.Command{
Use: "total-index-size",
Short: "Show total size of all indexes",
RunE: func(cmd *cobra.Command, args []string) error {
return total_index_size.Run(cmd.Context(), flags.DbConfig, afero.NewOsFs())
},
}
inspectIndexSizesCmd = &cobra.Command{
Use: "index-sizes",
Short: "Show index sizes of individual indexes",
RunE: func(cmd *cobra.Command, args []string) error {
return index_sizes.Run(cmd.Context(), flags.DbConfig, afero.NewOsFs())
},
}
inspectTableSizesCmd = &cobra.Command{
Use: "table-sizes",
Short: "Show table sizes of individual tables without their index sizes",
RunE: func(cmd *cobra.Command, args []string) error {
return table_sizes.Run(cmd.Context(), flags.DbConfig, afero.NewOsFs())
},
}
inspectTableIndexSizesCmd = &cobra.Command{
Use: "table-index-sizes",
Short: "Show index sizes of individual tables",
RunE: func(cmd *cobra.Command, args []string) error {
return table_index_sizes.Run(cmd.Context(), flags.DbConfig, afero.NewOsFs())
},
}
inspectTotalTableSizesCmd = &cobra.Command{
Use: "total-table-sizes",
Short: "Show total table sizes, including table index sizes",
RunE: func(cmd *cobra.Command, args []string) error {
return total_table_sizes.Run(cmd.Context(), flags.DbConfig, afero.NewOsFs())
},
}
inspectUnusedIndexesCmd = &cobra.Command{
Use: "unused-indexes",
Short: "Show indexes with low usage",
RunE: func(cmd *cobra.Command, args []string) error {
return unused_indexes.Run(cmd.Context(), flags.DbConfig, afero.NewOsFs())
},
}
inspectSeqScansCmd = &cobra.Command{
Use: "seq-scans",
Short: "Show number of sequential scans recorded against all tables",
RunE: func(cmd *cobra.Command, args []string) error {
return seq_scans.Run(cmd.Context(), flags.DbConfig, afero.NewOsFs())
},
}
inspectLongRunningQueriesCmd = &cobra.Command{
Use: "long-running-queries",
Short: "Show queries currently running for longer than 5 minutes",
RunE: func(cmd *cobra.Command, args []string) error {
return long_running_queries.Run(cmd.Context(), flags.DbConfig, afero.NewOsFs())
},
}
inspectTableRecordCountsCmd = &cobra.Command{
Use: "table-record-counts",
Short: "Show estimated number of rows per table",
RunE: func(cmd *cobra.Command, args []string) error {
return table_record_counts.Run(cmd.Context(), flags.DbConfig, afero.NewOsFs())
},
}
inspectBloatCmd = &cobra.Command{
Use: "bloat",
Short: "Estimate space allocated to a relation that is full of dead tuples",
RunE: func(cmd *cobra.Command, args []string) error {
return bloat.Run(cmd.Context(), flags.DbConfig, afero.NewOsFs())
},
}
inspectVacuumStatsCmd = &cobra.Command{
Use: "vacuum-stats",
Short: "Show statistics related to vacuum operations per table",
RunE: func(cmd *cobra.Command, args []string) error {
return vacuum_stats.Run(cmd.Context(), flags.DbConfig, afero.NewOsFs())
},
}
inspectRoleConfigsCmd = &cobra.Command{
Use: "role-configs",
Short: "Show configuration settings for database roles that have been modified",
RunE: func(cmd *cobra.Command, args []string) error {
return role_configs.Run(cmd.Context(), flags.DbConfig, afero.NewOsFs())
},
}
inspectRoleConnectionsCmd = &cobra.Command{
Use: "role-connections",
Short: "Show number of active connections for all database roles",
RunE: func(cmd *cobra.Command, args []string) error {
return role_connections.Run(cmd.Context(), flags.DbConfig, afero.NewOsFs())
},
}
outputDir string
reportCmd = &cobra.Command{
Use: "report",
Short: "Generate a CSV output for all inspect commands",
RunE: func(cmd *cobra.Command, args []string) error {
ctx := cmd.Context()
if len(outputDir) == 0 {
defaultPath := filepath.Join(utils.CurrentDirAbs, "report")
title := fmt.Sprintf("Enter a directory to save output files (or leave blank to use %s): ", utils.Bold(defaultPath))
if dir, err := utils.NewConsole().PromptText(ctx, title); err != nil {
return err
} else if len(dir) == 0 {
outputDir = defaultPath
}
}
return inspect.Report(ctx, outputDir, flags.DbConfig, afero.NewOsFs())
},
}
)
func init() {
inspectFlags := inspectCmd.PersistentFlags()
inspectFlags.String("db-url", "", "Inspect the database specified by the connection string (must be percent-encoded).")
inspectFlags.Bool("linked", true, "Inspect the linked project.")
inspectFlags.Bool("local", false, "Inspect the local database.")
inspectCmd.MarkFlagsMutuallyExclusive("db-url", "linked", "local")
inspectDBCmd.AddCommand(inspectCacheHitCmd)
inspectDBCmd.AddCommand(inspectReplicationSlotsCmd)
inspectDBCmd.AddCommand(inspectIndexUsageCmd)
inspectDBCmd.AddCommand(inspectLocksCmd)
inspectDBCmd.AddCommand(inspectBlockingCmd)
inspectDBCmd.AddCommand(inspectOutliersCmd)
inspectDBCmd.AddCommand(inspectCallsCmd)
inspectDBCmd.AddCommand(inspectTotalIndexSizeCmd)
inspectDBCmd.AddCommand(inspectIndexSizesCmd)
inspectDBCmd.AddCommand(inspectTableSizesCmd)
inspectDBCmd.AddCommand(inspectTableIndexSizesCmd)
inspectDBCmd.AddCommand(inspectTotalTableSizesCmd)
inspectDBCmd.AddCommand(inspectUnusedIndexesCmd)
inspectDBCmd.AddCommand(inspectSeqScansCmd)
inspectDBCmd.AddCommand(inspectLongRunningQueriesCmd)
inspectDBCmd.AddCommand(inspectTableRecordCountsCmd)
inspectDBCmd.AddCommand(inspectBloatCmd)
inspectDBCmd.AddCommand(inspectVacuumStatsCmd)
inspectDBCmd.AddCommand(inspectRoleConfigsCmd)
inspectDBCmd.AddCommand(inspectRoleConnectionsCmd)
inspectCmd.AddCommand(inspectDBCmd)
reportCmd.Flags().StringVar(&outputDir, "output-dir", "", "Path to save CSV files in")
inspectCmd.AddCommand(reportCmd)
rootCmd.AddCommand(inspectCmd)
}
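
All `inspect db` probes share the persistent `--db-url`/`--linked`/`--local` target flags, and `report` runs the full set and writes CSV files. A sketch (the output directory is a placeholder):

```shell
# Run a single probe against the linked project (the default target).
supabase inspect db outliers

# Run the same kind of probe against the local dev database instead.
supabase inspect db cache-hit --local

# Dump every inspect query to CSV files; passing the directory up front
# skips the interactive prompt in the report command.
supabase inspect report --output-dir ./report
```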

cmd/link.go Normal file

@ -0,0 +1,48 @@
package cmd
import (
"os"
"os/signal"
"github.com/spf13/afero"
"github.com/spf13/cobra"
"github.com/spf13/viper"
"github.com/supabase/cli/internal/link"
"github.com/supabase/cli/internal/utils/flags"
"golang.org/x/term"
)
var (
linkCmd = &cobra.Command{
GroupID: groupLocalDev,
Use: "link",
Short: "Link to a Supabase project",
PreRunE: func(cmd *cobra.Command, args []string) error {
if !term.IsTerminal(int(os.Stdin.Fd())) && !viper.IsSet("PROJECT_ID") {
return cmd.MarkFlagRequired("project-ref")
}
return nil
},
RunE: func(cmd *cobra.Command, args []string) error {
ctx, _ := signal.NotifyContext(cmd.Context(), os.Interrupt)
// Use an empty fs to skip loading from file
if err := flags.ParseProjectRef(ctx, afero.NewMemMapFs()); err != nil {
return err
}
fsys := afero.NewOsFs()
if err := flags.LoadConfig(fsys); err != nil {
return err
}
return link.Run(ctx, flags.ProjectRef, fsys)
},
}
)
func init() {
linkFlags := linkCmd.Flags()
linkFlags.StringVar(&flags.ProjectRef, "project-ref", "", "Project ref of the Supabase project.")
linkFlags.StringVarP(&dbPassword, "password", "p", "", "Password to your remote Postgres database.")
// For some reason, BindPFlag only works for StringVarP instead of StringP
cobra.CheckErr(viper.BindPFlag("DB_PASSWORD", linkFlags.Lookup("password")))
rootCmd.AddCommand(linkCmd)
}
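
`link` prompts for a project ref when stdin is a TTY and requires the flag otherwise, as the PreRunE above enforces, which matters in CI. A sketch with placeholder values; the password flag is bound via viper to `DB_PASSWORD`, so it can also come from the environment:

```shell
# Interactive: prompts for the project ref if the flag is omitted.
supabase link

# Non-interactive (CI): the project ref flag is mandatory.
supabase link --project-ref abcdefghijklmnop --password "$DB_PASSWORD"
```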

cmd/login.go Normal file

@ -0,0 +1,51 @@
package cmd
import (
"os"
"github.com/go-errors/errors"
"github.com/spf13/afero"
"github.com/spf13/cobra"
"github.com/supabase/cli/internal/login"
"github.com/supabase/cli/internal/utils"
"golang.org/x/term"
)
var (
ErrMissingToken = errors.Errorf("Cannot use automatic login flow inside non-TTY environments. Please provide %s flag or set the %s environment variable.", utils.Aqua("--token"), utils.Aqua("SUPABASE_ACCESS_TOKEN"))
)
var (
params = login.RunParams{
// Skip the browser if we are inside a non-TTY environment, which is the case for any CI.
OpenBrowser: term.IsTerminal(int(os.Stdin.Fd())),
Fsys: afero.NewOsFs(),
}
loginCmd = &cobra.Command{
GroupID: groupLocalDev,
Use: "login",
Short: "Authenticate using an access token",
RunE: func(cmd *cobra.Command, args []string) error {
if params.Token == "" {
params.Token = login.ParseAccessToken(os.Stdin)
}
if params.Token == "" && !params.OpenBrowser {
return ErrMissingToken
}
if cmd.Flags().Changed("no-browser") {
params.OpenBrowser = false
}
return login.Run(cmd.Context(), os.Stdout, params)
},
}
)
func init() {
loginFlags := loginCmd.Flags()
loginFlags.StringVar(&params.Token, "token", "", "Use provided token instead of automatic login flow")
loginFlags.StringVar(&params.TokenName, "name", "", "Name that will be used to store token in your settings")
loginFlags.Lookup("name").DefValue = "built-in token name generator"
loginFlags.Bool("no-browser", false, "Do not open browser automatically")
rootCmd.AddCommand(loginCmd)
}

cmd/logout.go Normal file

@ -0,0 +1,24 @@
package cmd
import (
"os"
"github.com/spf13/afero"
"github.com/spf13/cobra"
"github.com/supabase/cli/internal/logout"
)
var (
logoutCmd = &cobra.Command{
GroupID: groupLocalDev,
Use: "logout",
Short: "Log out and delete access tokens locally",
RunE: func(cmd *cobra.Command, args []string) error {
return logout.Run(cmd.Context(), os.Stdout, afero.NewOsFs())
},
}
)
func init() {
rootCmd.AddCommand(logoutCmd)
}

cmd/migration.go Normal file

@ -0,0 +1,154 @@
package cmd
import (
"fmt"
"os"
"os/signal"
"github.com/spf13/afero"
"github.com/spf13/cobra"
"github.com/spf13/viper"
"github.com/supabase/cli/internal/migration/fetch"
"github.com/supabase/cli/internal/migration/list"
"github.com/supabase/cli/internal/migration/new"
"github.com/supabase/cli/internal/migration/repair"
"github.com/supabase/cli/internal/migration/squash"
"github.com/supabase/cli/internal/migration/up"
"github.com/supabase/cli/internal/utils"
"github.com/supabase/cli/internal/utils/flags"
)
var (
migrationCmd = &cobra.Command{
GroupID: groupLocalDev,
Use: "migration",
Aliases: []string{"migrations"},
Short: "Manage database migration scripts",
PersistentPreRunE: func(cmd *cobra.Command, args []string) error {
ctx, _ := signal.NotifyContext(cmd.Context(), os.Interrupt)
cmd.SetContext(ctx)
return cmd.Root().PersistentPreRunE(cmd, args)
},
}
migrationListCmd = &cobra.Command{
Use: "list",
Short: "List local and remote migrations",
RunE: func(cmd *cobra.Command, args []string) error {
return list.Run(cmd.Context(), flags.DbConfig, afero.NewOsFs())
},
}
migrationNewCmd = &cobra.Command{
Use: "new <migration name>",
Short: "Create an empty migration script",
Args: cobra.ExactArgs(1),
RunE: func(cmd *cobra.Command, args []string) error {
return new.Run(args[0], os.Stdin, afero.NewOsFs())
},
}
targetStatus = utils.EnumFlag{
Allowed: []string{
repair.Applied,
repair.Reverted,
},
}
migrationRepairCmd = &cobra.Command{
Use: "repair [version] ...",
Short: "Repair the migration history table",
RunE: func(cmd *cobra.Command, args []string) error {
return repair.Run(cmd.Context(), flags.DbConfig, args, targetStatus.Value, afero.NewOsFs())
},
PostRun: func(cmd *cobra.Command, args []string) {
fmt.Println("Finished " + utils.Aqua("supabase migration repair") + ".")
},
}
migrationVersion string
migrationSquashCmd = &cobra.Command{
Use: "squash",
Short: "Squash migrations to a single file",
RunE: func(cmd *cobra.Command, args []string) error {
return squash.Run(cmd.Context(), migrationVersion, flags.DbConfig, afero.NewOsFs())
},
PostRun: func(cmd *cobra.Command, args []string) {
fmt.Println("Finished " + utils.Aqua("supabase migration squash") + ".")
},
}
migrationUpCmd = &cobra.Command{
Use: "up",
Short: "Apply pending migrations to local database",
RunE: func(cmd *cobra.Command, args []string) error {
return up.Run(cmd.Context(), includeAll, flags.DbConfig, afero.NewOsFs())
},
PostRun: func(cmd *cobra.Command, args []string) {
fmt.Println("Local database is up to date.")
},
}
migrationFetchCmd = &cobra.Command{
Use: "fetch",
Short: "Fetch migration files from history table",
RunE: func(cmd *cobra.Command, args []string) error {
return fetch.Run(cmd.Context(), flags.DbConfig, afero.NewOsFs())
},
}
)
func init() {
// Build list command
listFlags := migrationListCmd.Flags()
listFlags.String("db-url", "", "Lists migrations of the database specified by the connection string (must be percent-encoded).")
listFlags.Bool("linked", true, "Lists migrations applied to the linked project.")
listFlags.Bool("local", false, "Lists migrations applied to the local database.")
migrationListCmd.MarkFlagsMutuallyExclusive("db-url", "linked", "local")
listFlags.StringVarP(&dbPassword, "password", "p", "", "Password to your remote Postgres database.")
cobra.CheckErr(viper.BindPFlag("DB_PASSWORD", listFlags.Lookup("password")))
migrationListCmd.MarkFlagsMutuallyExclusive("db-url", "password")
migrationCmd.AddCommand(migrationListCmd)
// Build repair command
repairFlags := migrationRepairCmd.Flags()
repairFlags.Var(&targetStatus, "status", "Version status to update.")
cobra.CheckErr(migrationRepairCmd.MarkFlagRequired("status"))
repairFlags.String("db-url", "", "Repairs migrations of the database specified by the connection string (must be percent-encoded).")
repairFlags.Bool("linked", true, "Repairs the migration history of the linked project.")
repairFlags.Bool("local", false, "Repairs the migration history of the local database.")
migrationRepairCmd.MarkFlagsMutuallyExclusive("db-url", "linked", "local")
repairFlags.StringVarP(&dbPassword, "password", "p", "", "Password to your remote Postgres database.")
cobra.CheckErr(viper.BindPFlag("DB_PASSWORD", repairFlags.Lookup("password")))
migrationRepairCmd.MarkFlagsMutuallyExclusive("db-url", "password")
migrationCmd.AddCommand(migrationRepairCmd)
// Build squash command
squashFlags := migrationSquashCmd.Flags()
squashFlags.StringVar(&migrationVersion, "version", "", "Squash up to the specified version.")
squashFlags.String("db-url", "", "Squashes migrations of the database specified by the connection string (must be percent-encoded).")
squashFlags.Bool("linked", false, "Squashes the migration history of the linked project.")
squashFlags.Bool("local", true, "Squashes the migration history of the local database.")
migrationSquashCmd.MarkFlagsMutuallyExclusive("db-url", "linked", "local")
squashFlags.StringVarP(&dbPassword, "password", "p", "", "Password to your remote Postgres database.")
cobra.CheckErr(viper.BindPFlag("DB_PASSWORD", squashFlags.Lookup("password")))
migrationSquashCmd.MarkFlagsMutuallyExclusive("db-url", "password")
migrationCmd.AddCommand(migrationSquashCmd)
// Build up command
upFlags := migrationUpCmd.Flags()
upFlags.BoolVar(&includeAll, "include-all", false, "Include all migrations not found on remote history table.")
upFlags.String("db-url", "", "Applies migrations to the database specified by the connection string (must be percent-encoded).")
upFlags.Bool("linked", false, "Applies pending migrations to the linked project.")
upFlags.Bool("local", true, "Applies pending migrations to the local database.")
migrationUpCmd.MarkFlagsMutuallyExclusive("db-url", "linked", "local")
migrationCmd.AddCommand(migrationUpCmd)
// Build fetch command
fetchFlags := migrationFetchCmd.Flags()
fetchFlags.String("db-url", "", "Fetches migrations from the database specified by the connection string (must be percent-encoded).")
fetchFlags.Bool("linked", true, "Fetches migration history from the linked project.")
fetchFlags.Bool("local", false, "Fetches migration history from the local database.")
migrationFetchCmd.MarkFlagsMutuallyExclusive("db-url", "linked", "local")
migrationCmd.AddCommand(migrationFetchCmd)
// Build new command
migrationCmd.AddCommand(migrationNewCmd)
rootCmd.AddCommand(migrationCmd)
}

40
cmd/orgs.go Normal file
View File

@ -0,0 +1,40 @@
package cmd
import (
"github.com/spf13/cobra"
"github.com/supabase/cli/internal/orgs/create"
"github.com/supabase/cli/internal/orgs/list"
)
var (
orgsCmd = &cobra.Command{
GroupID: groupManagementAPI,
Use: "orgs",
Short: "Manage Supabase organizations",
}
orgsListCmd = &cobra.Command{
Use: "list",
Short: "List all organizations",
Long: "List all organizations the logged-in user belongs to.",
RunE: func(cmd *cobra.Command, args []string) error {
return list.Run(cmd.Context())
},
}
orgsCreateCmd = &cobra.Command{
Use: "create",
Short: "Create an organization",
Long: "Create an organization for the logged-in user.",
Args: cobra.ExactArgs(1),
RunE: func(cmd *cobra.Command, args []string) error {
return create.Run(cmd.Context(), args[0])
},
}
)
func init() {
orgsCmd.AddCommand(orgsListCmd)
orgsCmd.AddCommand(orgsCreateCmd)
rootCmd.AddCommand(orgsCmd)
}

68
cmd/postgres.go Normal file

@ -0,0 +1,68 @@
package cmd
import (
"github.com/spf13/afero"
"github.com/spf13/cobra"
"github.com/supabase/cli/internal/postgresConfig/delete"
"github.com/supabase/cli/internal/postgresConfig/get"
"github.com/supabase/cli/internal/postgresConfig/update"
"github.com/supabase/cli/internal/utils/flags"
)
var (
postgresCmd = &cobra.Command{
GroupID: groupManagementAPI,
Use: "postgres-config",
Short: "Manage Postgres database config",
}
postgresConfigGetCmd = &cobra.Command{
Use: "get",
Short: "Get the current Postgres database config overrides",
RunE: func(cmd *cobra.Command, args []string) error {
return get.Run(cmd.Context(), flags.ProjectRef, afero.NewOsFs())
},
}
postgresConfigUpdateCmd = &cobra.Command{
Use: "update",
Short: "Update Postgres database config",
Long: `Overriding the default Postgres config could result in unstable database behavior.
Custom configuration also overrides the optimizations generated based on the compute add-ons in use.`,
RunE: func(cmd *cobra.Command, args []string) error {
return update.Run(cmd.Context(), flags.ProjectRef, postgresConfigValues, postgresConfigUpdateReplaceMode, noRestart, afero.NewOsFs())
},
}
postgresConfigDeleteCmd = &cobra.Command{
Use: "delete",
Short: "Delete specific Postgres database config overrides",
Long: "Delete specific config overrides, reverting them to their default values.",
RunE: func(cmd *cobra.Command, args []string) error {
return delete.Run(cmd.Context(), flags.ProjectRef, postgresConfigKeysToDelete, noRestart, afero.NewOsFs())
},
}
postgresConfigValues []string
postgresConfigUpdateReplaceMode bool
postgresConfigKeysToDelete []string
noRestart bool
)
func init() {
postgresCmd.PersistentFlags().StringVar(&flags.ProjectRef, "project-ref", "", "Project ref of the Supabase project.")
postgresCmd.AddCommand(postgresConfigGetCmd)
postgresCmd.AddCommand(postgresConfigUpdateCmd)
postgresCmd.AddCommand(postgresConfigDeleteCmd)
updateFlags := postgresConfigUpdateCmd.Flags()
updateFlags.StringSliceVar(&postgresConfigValues, "config", []string{}, "Config overrides specified as a 'key=value' pair")
updateFlags.BoolVar(&postgresConfigUpdateReplaceMode, "replace-existing-overrides", false, "If true, replaces all existing overrides with the ones provided. If false (default), merges existing overrides with the ones provided.")
updateFlags.BoolVar(&noRestart, "no-restart", false, "Do not restart the database after updating config.")
deleteFlags := postgresConfigDeleteCmd.Flags()
deleteFlags.StringSliceVar(&postgresConfigKeysToDelete, "config", []string{}, "Config keys to delete (comma-separated)")
deleteFlags.BoolVar(&noRestart, "no-restart", false, "Do not restart the database after deleting config.")
rootCmd.AddCommand(postgresCmd)
}

161
cmd/projects.go Normal file

@ -0,0 +1,161 @@
package cmd
import (
"os"
"sort"
"github.com/spf13/afero"
"github.com/spf13/cobra"
"github.com/spf13/viper"
"github.com/supabase/cli/internal/projects/apiKeys"
"github.com/supabase/cli/internal/projects/create"
"github.com/supabase/cli/internal/projects/delete"
"github.com/supabase/cli/internal/projects/list"
"github.com/supabase/cli/internal/utils"
"github.com/supabase/cli/internal/utils/flags"
"github.com/supabase/cli/pkg/api"
"golang.org/x/term"
)
var (
projectsCmd = &cobra.Command{
GroupID: groupManagementAPI,
Use: "projects",
Short: "Manage Supabase projects",
}
interactive bool
projectName string
orgId string
dbPassword string
region = utils.EnumFlag{
Allowed: awsRegions(),
}
plan = utils.EnumFlag{
Allowed: []string{string(api.V1CreateProjectBodyDtoPlanFree), string(api.V1CreateProjectBodyDtoPlanPro)},
Value: string(api.V1CreateProjectBodyDtoPlanFree),
}
size = utils.EnumFlag{
Allowed: []string{
string(api.DesiredInstanceSizeMicro),
string(api.DesiredInstanceSizeSmall),
string(api.DesiredInstanceSizeMedium),
string(api.DesiredInstanceSizeLarge),
string(api.DesiredInstanceSizeXlarge),
string(api.DesiredInstanceSizeN2xlarge),
string(api.DesiredInstanceSizeN4xlarge),
string(api.DesiredInstanceSizeN8xlarge),
string(api.DesiredInstanceSizeN12xlarge),
string(api.DesiredInstanceSizeN16xlarge),
},
}
projectsCreateCmd = &cobra.Command{
Use: "create [project name]",
Short: "Create a project on Supabase",
Args: cobra.MaximumNArgs(1),
Example: `supabase projects create my-project --org-id cool-green-pqdr0qc --db-password ******** --region us-east-1`,
PreRunE: func(cmd *cobra.Command, args []string) error {
if !term.IsTerminal(int(os.Stdin.Fd())) || !interactive {
cobra.CheckErr(cmd.MarkFlagRequired("org-id"))
cobra.CheckErr(cmd.MarkFlagRequired("db-password"))
cobra.CheckErr(cmd.MarkFlagRequired("region"))
return cobra.ExactArgs(1)(cmd, args)
}
return nil
},
RunE: func(cmd *cobra.Command, args []string) error {
if len(args) > 0 {
projectName = args[0]
}
body := api.V1CreateProjectBodyDto{
Name: projectName,
OrganizationId: orgId,
DbPass: dbPassword,
Region: api.V1CreateProjectBodyDtoRegion(region.Value),
}
if cmd.Flags().Changed("size") {
body.DesiredInstanceSize = (*api.V1CreateProjectBodyDtoDesiredInstanceSize)(&size.Value)
}
return create.Run(cmd.Context(), body, afero.NewOsFs())
},
}
projectsListCmd = &cobra.Command{
Use: "list",
Short: "List all Supabase projects",
Long: "List all Supabase projects the logged-in user can access.",
RunE: func(cmd *cobra.Command, args []string) error {
return list.Run(cmd.Context(), afero.NewOsFs())
},
}
projectsApiKeysCmd = &cobra.Command{
Use: "api-keys",
Short: "List all API keys for a Supabase project",
RunE: func(cmd *cobra.Command, args []string) error {
return apiKeys.Run(cmd.Context(), flags.ProjectRef, afero.NewOsFs())
},
}
projectsDeleteCmd = &cobra.Command{
Use: "delete <ref>",
Short: "Delete a Supabase project",
Args: cobra.MaximumNArgs(1),
PreRunE: func(cmd *cobra.Command, args []string) error {
if !term.IsTerminal(int(os.Stdin.Fd())) {
return cobra.ExactArgs(1)(cmd, args)
}
return nil
},
RunE: func(cmd *cobra.Command, args []string) error {
ctx := cmd.Context()
if len(args) == 0 {
title := "Which project do you want to delete?"
cobra.CheckErr(flags.PromptProjectRef(ctx, title))
} else {
flags.ProjectRef = args[0]
}
if err := delete.PreRun(ctx, flags.ProjectRef); err != nil {
return err
}
return delete.Run(ctx, flags.ProjectRef, afero.NewOsFs())
},
}
)
func init() {
// Add flags to cobra command
createFlags := projectsCreateCmd.Flags()
createFlags.BoolVarP(&interactive, "interactive", "i", true, "Enables interactive mode.")
cobra.CheckErr(createFlags.MarkHidden("interactive"))
createFlags.StringVar(&orgId, "org-id", "", "Organization ID to create the project in.")
createFlags.StringVar(&dbPassword, "db-password", "", "Database password of the project.")
createFlags.Var(&region, "region", "Select a region close to you for the best performance.")
createFlags.Var(&plan, "plan", "Select a plan that suits your needs.")
cobra.CheckErr(createFlags.MarkHidden("plan"))
createFlags.Var(&size, "size", "Select a desired instance size for your project.")
cobra.CheckErr(viper.BindPFlag("DB_PASSWORD", createFlags.Lookup("db-password")))
apiKeysFlags := projectsApiKeysCmd.Flags()
apiKeysFlags.StringVar(&flags.ProjectRef, "project-ref", "", "Project ref of the Supabase project.")
// Add commands to root
projectsCmd.AddCommand(projectsCreateCmd)
projectsCmd.AddCommand(projectsDeleteCmd)
projectsCmd.AddCommand(projectsListCmd)
projectsCmd.AddCommand(projectsApiKeysCmd)
rootCmd.AddCommand(projectsCmd)
}
func awsRegions() []string {
result := make([]string, len(utils.RegionMap))
i := 0
for k := range utils.RegionMap {
result[i] = k
i++
}
sort.Strings(result)
return result
}
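The `region`, `plan`, and `size` flags above rely on `utils.EnumFlag` to restrict accepted values at parse time. A minimal stdlib-only sketch of the same idea via the `flag.Value` interface (the `enumValue` type and names are illustrative, not the CLI's implementation):

```go
package main

import (
	"flag"
	"fmt"
	"strings"
)

// enumValue is a minimal stand-in for utils.EnumFlag: Set rejects any
// value outside Allowed, so the error surfaces during flag parsing.
type enumValue struct {
	Allowed []string
	Value   string
}

func (e *enumValue) String() string { return e.Value }

func (e *enumValue) Set(v string) error {
	for _, a := range e.Allowed {
		if a == v {
			e.Value = v
			return nil
		}
	}
	return fmt.Errorf("must be one of: %s", strings.Join(e.Allowed, ", "))
}

func main() {
	plan := &enumValue{Allowed: []string{"free", "pro"}, Value: "free"}
	fs := flag.NewFlagSet("demo", flag.ContinueOnError)
	fs.Var(plan, "plan", "subscription plan")
	_ = fs.Parse([]string{"--plan", "pro"})
	fmt.Println(plan.Value) // pro
}
```

Cobra's `Flags().Var` accepts the same `flag.Value`-shaped interface (`pflag.Value` adds a `Type()` method), which is how the enum validation above plugs into the commands.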

44
cmd/restrictions.go Normal file

@ -0,0 +1,44 @@
package cmd
import (
"github.com/spf13/cobra"
"github.com/supabase/cli/internal/restrictions/get"
"github.com/supabase/cli/internal/restrictions/update"
"github.com/supabase/cli/internal/utils/flags"
)
var (
restrictionsCmd = &cobra.Command{
GroupID: groupManagementAPI,
Use: "network-restrictions",
Short: "Manage network restrictions",
}
dbCidrsToAllow []string
bypassCidrChecks bool
restrictionsUpdateCmd = &cobra.Command{
Use: "update",
Short: "Update network restrictions",
RunE: func(cmd *cobra.Command, args []string) error {
return update.Run(cmd.Context(), flags.ProjectRef, dbCidrsToAllow, bypassCidrChecks)
},
}
restrictionsGetCmd = &cobra.Command{
Use: "get",
Short: "Get the current network restrictions",
RunE: func(cmd *cobra.Command, args []string) error {
return get.Run(cmd.Context(), flags.ProjectRef)
},
}
)
func init() {
restrictionsCmd.PersistentFlags().StringVar(&flags.ProjectRef, "project-ref", "", "Project ref of the Supabase project.")
restrictionsUpdateCmd.Flags().StringSliceVar(&dbCidrsToAllow, "db-allow-cidr", []string{}, "CIDR to allow DB connections from.")
restrictionsUpdateCmd.Flags().BoolVar(&bypassCidrChecks, "bypass-cidr-checks", false, "Bypass some of the CIDR validation checks.")
restrictionsCmd.AddCommand(restrictionsGetCmd)
restrictionsCmd.AddCommand(restrictionsUpdateCmd)
rootCmd.AddCommand(restrictionsCmd)
}
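The `--db-allow-cidr` flag takes a list of CIDR blocks, and `--bypass-cidr-checks` skips some validation. A sketch of the kind of check presumably involved, using the standard `net.ParseCIDR` (this is an assumption about the validation, not the CLI's actual logic):

```go
package main

import (
	"fmt"
	"net"
)

// checkCIDRs validates that each entry parses as a CIDR block and, as
// one plausible sanity check, flags the catch-all 0.0.0.0/0 range.
// Illustrative only; not the CLI's real validation.
func checkCIDRs(cidrs []string) error {
	for _, c := range cidrs {
		_, ipnet, err := net.ParseCIDR(c)
		if err != nil {
			return fmt.Errorf("invalid CIDR %q: %w", c, err)
		}
		if ones, _ := ipnet.Mask.Size(); ones == 0 {
			return fmt.Errorf("%q allows all addresses; did you mean a narrower range?", c)
		}
	}
	return nil
}

func main() {
	fmt.Println(checkCIDRs([]string{"10.0.0.0/8"}))   // nil: valid block
	fmt.Println(checkCIDRs([]string{"10.0.0.300/8"})) // non-nil parse error
}
```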

268
cmd/root.go Normal file

@ -0,0 +1,268 @@
package cmd
import (
"context"
"fmt"
"net"
"net/url"
"os"
"os/signal"
"strings"
"time"
"github.com/getsentry/sentry-go"
"github.com/go-errors/errors"
"github.com/spf13/afero"
"github.com/spf13/cobra"
"github.com/spf13/viper"
"github.com/supabase/cli/internal/utils"
"github.com/supabase/cli/internal/utils/flags"
"golang.org/x/mod/semver"
)
const (
groupQuickStart = "quick-start"
groupLocalDev = "local-dev"
groupManagementAPI = "management-api"
)
func IsManagementAPI(cmd *cobra.Command) bool {
for cmd != cmd.Root() {
if cmd.GroupID == groupManagementAPI {
return true
}
// Find the last assigned group
if len(cmd.GroupID) > 0 {
break
}
cmd = cmd.Parent()
}
return false
}
func promptLogin(fsys afero.Fs) error {
if _, err := utils.LoadAccessTokenFS(fsys); err == utils.ErrMissingToken {
utils.CmdSuggestion = fmt.Sprintf("Run %s first.", utils.Aqua("supabase login"))
return errors.New("You need to be logged-in in order to use Management API commands.")
} else {
return err
}
}
var experimental = []*cobra.Command{
bansCmd,
restrictionsCmd,
vanityCmd,
sslEnforcementCmd,
genKeysCmd,
postgresCmd,
branchesCmd,
storageCmd,
}
func IsExperimental(cmd *cobra.Command) bool {
for _, exp := range experimental {
if cmd == exp || cmd.Parent() == exp {
return true
}
}
return false
}
var (
sentryOpts = sentry.ClientOptions{
Dsn: utils.SentryDsn,
Release: utils.Version,
ServerName: "<redacted>",
// Set TracesSampleRate to 1.0 to capture 100%
// of transactions for performance monitoring.
// We recommend adjusting this value in production,
TracesSampleRate: 1.0,
}
createTicket bool
rootCmd = &cobra.Command{
Use: "supabase",
Short: "Supabase CLI " + utils.Version,
Version: utils.Version,
PersistentPreRunE: func(cmd *cobra.Command, args []string) error {
if IsExperimental(cmd) && !viper.GetBool("EXPERIMENTAL") {
return errors.New("must set the --experimental flag to run this command")
}
cmd.SilenceUsage = true
// Change workdir
fsys := afero.NewOsFs()
if err := utils.ChangeWorkDir(fsys); err != nil {
return err
}
// Add common flags
ctx := cmd.Context()
if IsManagementAPI(cmd) {
if err := promptLogin(fsys); err != nil {
return err
}
ctx, _ = signal.NotifyContext(ctx, os.Interrupt)
if cmd.Flags().Lookup("project-ref") != nil {
if err := flags.ParseProjectRef(ctx, fsys); err != nil {
return err
}
}
}
if err := flags.ParseDatabaseConfig(cmd.Flags(), fsys); err != nil {
return err
}
// Prepare context
if viper.GetBool("DEBUG") {
ctx = utils.WithTraceContext(ctx)
fmt.Fprintln(os.Stderr, cmd.Root().Short)
}
cmd.SetContext(ctx)
// Setup sentry last to ignore errors from parsing cli flags
apiHost, err := url.Parse(utils.GetSupabaseAPIHost())
if err != nil {
return err
}
sentryOpts.Environment = apiHost.Host
return sentry.Init(sentryOpts)
},
SilenceErrors: true,
}
)
func Execute() {
defer recoverAndExit()
if err := rootCmd.Execute(); err != nil {
panic(err)
}
// Check upgrade last because --version flag is initialised after execute
version, err := checkUpgrade(rootCmd.Context(), afero.NewOsFs())
if err != nil {
fmt.Fprintln(utils.GetDebugLogger(), err)
}
if semver.Compare(version, "v"+utils.Version) > 0 {
fmt.Fprintln(os.Stderr, suggestUpgrade(version))
}
if len(utils.CmdSuggestion) > 0 {
fmt.Fprintln(os.Stderr, utils.CmdSuggestion)
}
}
func checkUpgrade(ctx context.Context, fsys afero.Fs) (string, error) {
if shouldFetchRelease(fsys) {
version, err := utils.GetLatestRelease(ctx)
if exists, _ := afero.DirExists(fsys, utils.SupabaseDirPath); exists {
// If user is offline, write an empty file to skip subsequent checks
err = utils.WriteFile(utils.CliVersionPath, []byte(version), fsys)
}
return version, err
}
version, err := afero.ReadFile(fsys, utils.CliVersionPath)
if err != nil {
return "", errors.Errorf("failed to read cli version: %w", err)
}
return string(version), nil
}
func shouldFetchRelease(fsys afero.Fs) bool {
// Always fetch latest release when using --version flag
if vf := rootCmd.Flag("version"); vf != nil && vf.Changed {
return true
}
if fi, err := fsys.Stat(utils.CliVersionPath); err == nil {
expiry := fi.ModTime().Add(time.Hour * 10)
// Skip if last checked is less than 10 hours ago
return time.Now().After(expiry)
}
return true
}
func suggestUpgrade(version string) string {
const guide = "https://supabase.com/docs/guides/cli/getting-started#updating-the-supabase-cli"
return fmt.Sprintf(`A new version of Supabase CLI is available: %s (currently installed v%s)
We recommend updating regularly for new features and bug fixes: %s`, utils.Yellow(version), utils.Version, utils.Bold(guide))
}
func recoverAndExit() {
err := recover()
if err == nil {
return
}
var msg string
switch err := err.(type) {
case string:
msg = err
case error:
if !errors.Is(err, context.Canceled) &&
len(utils.CmdSuggestion) == 0 &&
!viper.GetBool("DEBUG") {
utils.CmdSuggestion = utils.SuggestDebugFlag
}
msg = err.Error()
default:
msg = fmt.Sprintf("%#v", err)
}
// Log error to console
fmt.Fprintln(os.Stderr, utils.Red(msg))
if len(utils.CmdSuggestion) > 0 {
fmt.Fprintln(os.Stderr, utils.CmdSuggestion)
}
// Report error to sentry
if createTicket && len(utils.SentryDsn) > 0 {
sentry.ConfigureScope(addSentryScope)
eventId := sentry.CurrentHub().Recover(err)
if eventId != nil && sentry.Flush(2*time.Second) {
fmt.Fprintln(os.Stderr, "Sent crash report:", *eventId)
fmt.Fprintln(os.Stderr, "Quote the crash ID above when filing a bug report: https://github.com/supabase/cli/issues/new/choose")
}
}
os.Exit(1)
}
func init() {
cobra.OnInitialize(func() {
viper.SetEnvPrefix("SUPABASE")
viper.SetEnvKeyReplacer(strings.NewReplacer("-", "_"))
viper.AutomaticEnv()
})
flags := rootCmd.PersistentFlags()
flags.Bool("debug", false, "output debug logs to stderr")
flags.String("workdir", "", "path to a Supabase project directory")
flags.Bool("experimental", false, "enable experimental features")
flags.String("network-id", "", "use the specified docker network instead of a generated one")
flags.Var(&utils.OutputFormat, "output", "output format of status variables")
flags.Var(&utils.DNSResolver, "dns-resolver", "lookup domain names using the specified resolver")
flags.BoolVar(&createTicket, "create-ticket", false, "create a support ticket for any CLI error")
cobra.CheckErr(viper.BindPFlags(flags))
rootCmd.SetVersionTemplate("{{.Version}}\n")
rootCmd.AddGroup(&cobra.Group{ID: groupQuickStart, Title: "Quick Start:"})
rootCmd.AddGroup(&cobra.Group{ID: groupLocalDev, Title: "Local Development:"})
rootCmd.AddGroup(&cobra.Group{ID: groupManagementAPI, Title: "Management APIs:"})
}
// instantiate new rootCmd is a bit tricky with cobra, but it can be done later with the following
// approach for example: https://github.com/portworx/pxc/tree/master/cmd
func GetRootCmd() *cobra.Command {
return rootCmd
}
func addSentryScope(scope *sentry.Scope) {
serviceImages := utils.Config.GetServiceImages()
imageToVersion := make(map[string]interface{}, len(serviceImages))
for _, image := range serviceImages {
parts := strings.Split(image, ":")
// Bypasses sentry's IP sanitization rule, ie. 15.1.0.147
if net.ParseIP(parts[1]) != nil {
imageToVersion[parts[0]] = "v" + parts[1]
} else {
imageToVersion[parts[0]] = parts[1]
}
}
scope.SetContext("Services", imageToVersion)
scope.SetContext("Config", map[string]interface{}{
"Image Registry": utils.GetRegistry(),
"Project ID": flags.ProjectRef,
})
}
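The image-tag handling in `addSentryScope` can be sketched stand-alone: split each `image:tag`, and when the tag parses as an IP address (e.g. `15.1.0.147`) prefix it with `v` so Sentry's IP sanitization does not redact it. The `versionFromImage` helper below is a hypothetical name for illustration:

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// versionFromImage mirrors the tag handling in addSentryScope: tags that
// look like IPv4 addresses get a "v" prefix so Sentry does not scrub
// them as IPs. Hypothetical helper, sketch only.
func versionFromImage(image string) (name, version string) {
	name, version, ok := strings.Cut(image, ":")
	if !ok {
		return image, "latest" // assumption: untagged images default to latest
	}
	if net.ParseIP(version) != nil {
		version = "v" + version
	}
	return name, version
}

func main() {
	for _, img := range []string{"supabase/postgres:15.1.0.147", "kong:2.8.1"} {
		n, v := versionFromImage(img)
		fmt.Printf("%s => %s\n", n, v)
	}
}
```

Unlike `addSentryScope`, the sketch also guards the untagged case, since `strings.Split(image, ":")[1]` would panic when no colon is present.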

54
cmd/secrets.go Normal file

@ -0,0 +1,54 @@
package cmd
import (
"github.com/spf13/afero"
"github.com/spf13/cobra"
"github.com/supabase/cli/internal/secrets/list"
"github.com/supabase/cli/internal/secrets/set"
"github.com/supabase/cli/internal/secrets/unset"
"github.com/supabase/cli/internal/utils/flags"
)
var (
secretsCmd = &cobra.Command{
GroupID: groupManagementAPI,
Use: "secrets",
Short: "Manage Supabase secrets",
}
secretsListCmd = &cobra.Command{
Use: "list",
Short: "List all secrets on Supabase",
Long: "List all secrets in the linked project.",
RunE: func(cmd *cobra.Command, args []string) error {
return list.Run(cmd.Context(), flags.ProjectRef, afero.NewOsFs())
},
}
secretsSetCmd = &cobra.Command{
Use: "set <NAME=VALUE> ...",
Short: "Set a secret(s) on Supabase",
Long: "Set a secret(s) to the linked Supabase project.",
RunE: func(cmd *cobra.Command, args []string) error {
return set.Run(cmd.Context(), flags.ProjectRef, envFilePath, args, afero.NewOsFs())
},
}
secretsUnsetCmd = &cobra.Command{
Use: "unset [NAME] ...",
Short: "Unset a secret(s) on Supabase",
Long: "Unset a secret(s) from the linked Supabase project.",
RunE: func(cmd *cobra.Command, args []string) error {
return unset.Run(cmd.Context(), flags.ProjectRef, args, afero.NewOsFs())
},
}
)
func init() {
secretsCmd.PersistentFlags().StringVar(&flags.ProjectRef, "project-ref", "", "Project ref of the Supabase project.")
secretsSetCmd.Flags().StringVar(&envFilePath, "env-file", "", "Read secrets from a .env file.")
secretsCmd.AddCommand(secretsListCmd)
secretsCmd.AddCommand(secretsSetCmd)
secretsCmd.AddCommand(secretsUnsetCmd)
rootCmd.AddCommand(secretsCmd)
}

43
cmd/seed.go Normal file

@ -0,0 +1,43 @@
package cmd
import (
"os"
"os/signal"
"github.com/spf13/afero"
"github.com/spf13/cobra"
"github.com/supabase/cli/internal/seed/buckets"
"github.com/supabase/cli/internal/utils"
"github.com/supabase/cli/internal/utils/flags"
)
var (
seedCmd = &cobra.Command{
GroupID: groupLocalDev,
Use: "seed",
Short: "Seed a Supabase project from " + utils.ConfigPath,
PersistentPreRunE: func(cmd *cobra.Command, args []string) error {
ctx, _ := signal.NotifyContext(cmd.Context(), os.Interrupt)
cmd.SetContext(ctx)
return cmd.Root().PersistentPreRunE(cmd, args)
},
}
bucketsCmd = &cobra.Command{
Use: "buckets",
Short: "Seed buckets declared in [storage.buckets]",
Args: cobra.NoArgs,
RunE: func(cmd *cobra.Command, args []string) error {
return buckets.Run(cmd.Context(), flags.ProjectRef, true, afero.NewOsFs())
},
}
)
func init() {
seedFlags := seedCmd.PersistentFlags()
seedFlags.Bool("linked", false, "Seeds the linked project.")
seedFlags.Bool("local", true, "Seeds the local database.")
seedCmd.MarkFlagsMutuallyExclusive("local", "linked")
seedCmd.AddCommand(bucketsCmd)
rootCmd.AddCommand(seedCmd)
}

22
cmd/services.go Normal file

@ -0,0 +1,22 @@
package cmd
import (
"github.com/spf13/afero"
"github.com/spf13/cobra"
"github.com/supabase/cli/internal/services"
)
var (
servicesCmd = &cobra.Command{
GroupID: groupManagementAPI,
Use: "services",
Short: "Show versions of all Supabase services",
RunE: func(cmd *cobra.Command, args []string) error {
return services.Run(cmd.Context(), afero.NewOsFs())
},
}
)
func init() {
rootCmd.AddCommand(servicesCmd)
}

43
cmd/snippets.go Normal file

@ -0,0 +1,43 @@
package cmd
import (
"github.com/spf13/afero"
"github.com/spf13/cobra"
"github.com/supabase/cli/internal/snippets/download"
"github.com/supabase/cli/internal/snippets/list"
"github.com/supabase/cli/internal/utils/flags"
)
var (
snippetsCmd = &cobra.Command{
GroupID: groupManagementAPI,
Use: "snippets",
Short: "Manage Supabase SQL snippets",
}
snippetsListCmd = &cobra.Command{
Use: "list",
Short: "List all SQL snippets",
Long: "List all SQL snippets of the linked project.",
RunE: func(cmd *cobra.Command, args []string) error {
return list.Run(cmd.Context(), afero.NewOsFs())
},
}
snippetsDownloadCmd = &cobra.Command{
Use: "download <snippet-id>",
Short: "Download contents of a SQL snippet",
Long: "Download contents of the specified SQL snippet.",
Args: cobra.ExactArgs(1),
RunE: func(cmd *cobra.Command, args []string) error {
return download.Run(cmd.Context(), args[0], afero.NewOsFs())
},
}
)
func init() {
snippetsCmd.PersistentFlags().StringVar(&flags.ProjectRef, "project-ref", "", "Project ref of the Supabase project.")
snippetsCmd.AddCommand(snippetsListCmd)
snippetsCmd.AddCommand(snippetsDownloadCmd)
rootCmd.AddCommand(snippetsCmd)
}

51
cmd/sslEnforcement.go Normal file

@ -0,0 +1,51 @@
package cmd
import (
"github.com/go-errors/errors"
"github.com/spf13/afero"
"github.com/spf13/cobra"
"github.com/supabase/cli/internal/ssl_enforcement/get"
"github.com/supabase/cli/internal/ssl_enforcement/update"
"github.com/supabase/cli/internal/utils/flags"
)
var (
sslEnforcementCmd = &cobra.Command{
GroupID: groupManagementAPI,
Use: "ssl-enforcement",
Short: "Manage SSL enforcement configuration",
}
dbEnforceSsl bool
dbDisableSsl bool
sslEnforcementUpdateCmd = &cobra.Command{
Use: "update",
Short: "Update SSL enforcement configuration",
RunE: func(cmd *cobra.Command, args []string) error {
if !dbEnforceSsl && !dbDisableSsl {
return errors.New("enable/disable not specified")
}
return update.Run(cmd.Context(), flags.ProjectRef, dbEnforceSsl, afero.NewOsFs())
},
}
sslEnforcementGetCmd = &cobra.Command{
Use: "get",
Short: "Get the current SSL enforcement configuration",
RunE: func(cmd *cobra.Command, args []string) error {
return get.Run(cmd.Context(), flags.ProjectRef, afero.NewOsFs())
},
}
)
func init() {
sslEnforcementCmd.PersistentFlags().StringVar(&flags.ProjectRef, "project-ref", "", "Project ref of the Supabase project.")
sslEnforcementUpdateCmd.Flags().BoolVar(&dbEnforceSsl, "enable-db-ssl-enforcement", false, "Whether the DB should enable SSL enforcement for all external connections.")
sslEnforcementUpdateCmd.Flags().BoolVar(&dbDisableSsl, "disable-db-ssl-enforcement", false, "Whether the DB should disable SSL enforcement for all external connections.")
sslEnforcementUpdateCmd.MarkFlagsMutuallyExclusive("enable-db-ssl-enforcement", "disable-db-ssl-enforcement")
sslEnforcementCmd.AddCommand(sslEnforcementUpdateCmd)
sslEnforcementCmd.AddCommand(sslEnforcementGetCmd)
rootCmd.AddCommand(sslEnforcementCmd)
}

179
cmd/sso.go Normal file

@ -0,0 +1,179 @@
package cmd
import (
"github.com/go-errors/errors"
"github.com/spf13/cobra"
"github.com/supabase/cli/internal/sso/create"
"github.com/supabase/cli/internal/sso/get"
"github.com/supabase/cli/internal/sso/info"
"github.com/supabase/cli/internal/sso/list"
"github.com/supabase/cli/internal/sso/remove"
"github.com/supabase/cli/internal/sso/update"
"github.com/supabase/cli/internal/utils"
"github.com/supabase/cli/internal/utils/flags"
)
var (
ssoCmd = &cobra.Command{
GroupID: groupManagementAPI,
Use: "sso",
Short: "Manage Single Sign-On (SSO) authentication for projects",
}
ssoProviderType = utils.EnumFlag{
Allowed: []string{"saml"},
// intentionally no default value so users have to specify --type saml explicitly
}
ssoMetadataFile string
ssoMetadataURL string
ssoSkipURLValidation bool
ssoMetadata bool
ssoAttributeMappingFile string
ssoDomains []string
ssoAddDomains []string
ssoRemoveDomains []string
ssoAddCmd = &cobra.Command{
Use: "add",
Short: "Add a new SSO identity provider",
Long: "Add and configure a new connection to an SSO identity provider for your Supabase project.",
Example: ` supabase sso add --type saml --project-ref mwjylndxudmiehsxhmmz --metadata-url 'https://...' --domains example.com`,
RunE: func(cmd *cobra.Command, args []string) error {
return create.Run(cmd.Context(), create.RunParams{
ProjectRef: flags.ProjectRef,
Type: ssoProviderType.String(),
Format: utils.OutputFormat.Value,
MetadataFile: ssoMetadataFile,
MetadataURL: ssoMetadataURL,
SkipURLValidation: ssoSkipURLValidation,
AttributeMapping: ssoAttributeMappingFile,
Domains: ssoDomains,
})
},
}
ssoRemoveCmd = &cobra.Command{
Use: "remove <provider-id>",
Short: "Remove an existing SSO identity provider",
Long: "Remove a connection to an already added SSO identity provider. Removing the provider will prevent existing users from logging in. Please treat this command with care.",
Args: cobra.ExactArgs(1),
Example: ` supabase sso remove b5ae62f9-ef1d-4f11-a02b-731c8bbb11e8 --project-ref mwjylndxudmiehsxhmmz`,
RunE: func(cmd *cobra.Command, args []string) error {
if !utils.UUIDPattern.MatchString(args[0]) {
return errors.Errorf("identity provider ID %q is not a UUID", args[0])
}
return remove.Run(cmd.Context(), flags.ProjectRef, args[0], utils.OutputFormat.Value)
},
}
ssoUpdateCmd = &cobra.Command{
Use: "update <provider-id>",
Short: "Update information about an SSO identity provider",
Long: "Update the configuration settings of an already added SSO identity provider.",
Args: cobra.ExactArgs(1),
Example: ` supabase sso update b5ae62f9-ef1d-4f11-a02b-731c8bbb11e8 --project-ref mwjylndxudmiehsxhmmz --add-domains example.com`,
RunE: func(cmd *cobra.Command, args []string) error {
if !utils.UUIDPattern.MatchString(args[0]) {
return errors.Errorf("identity provider ID %q is not a UUID", args[0])
}
return update.Run(cmd.Context(), update.RunParams{
ProjectRef: flags.ProjectRef,
ProviderID: args[0],
Format: utils.OutputFormat.Value,
MetadataFile: ssoMetadataFile,
MetadataURL: ssoMetadataURL,
SkipURLValidation: ssoSkipURLValidation,
AttributeMapping: ssoAttributeMappingFile,
Domains: ssoDomains,
AddDomains: ssoAddDomains,
RemoveDomains: ssoRemoveDomains,
})
},
}
ssoShowCmd = &cobra.Command{
Use: "show <provider-id>",
Short: "Show information about an SSO identity provider",
Long: "Provides the information about an established connection to an identity provider. You can use --metadata to obtain the raw SAML 2.0 Metadata XML document stored in your project's configuration.",
Args: cobra.ExactArgs(1),
Example: ` supabase sso show b5ae62f9-ef1d-4f11-a02b-731c8bbb11e8 --project-ref mwjylndxudmiehsxhmmz`,
RunE: func(cmd *cobra.Command, args []string) error {
if !utils.UUIDPattern.MatchString(args[0]) {
return errors.Errorf("identity provider ID %q is not a UUID", args[0])
}
format := utils.OutputFormat.Value
if ssoMetadata {
format = utils.OutputMetadata
}
return get.Run(cmd.Context(), flags.ProjectRef, args[0], format)
},
}
ssoListCmd = &cobra.Command{
Use: "list",
Short: "List all SSO identity providers for a project",
Long: "List all SSO identity provider connections for your Supabase project.",
Example: ` supabase sso list --project-ref mwjylndxudmiehsxhmmz`,
RunE: func(cmd *cobra.Command, args []string) error {
return list.Run(cmd.Context(), flags.ProjectRef, utils.OutputFormat.Value)
},
}
ssoInfoCmd = &cobra.Command{
Use: "info",
Short: "Returns the SAML SSO settings required for the identity provider",
Long: "Returns all of the important SSO information necessary for your project to be registered with a SAML 2.0 compatible identity provider.",
Example: ` supabase sso info --project-ref mwjylndxudmiehsxhmmz`,
RunE: func(cmd *cobra.Command, args []string) error {
return info.Run(cmd.Context(), flags.ProjectRef, utils.OutputFormat.Value)
},
}
)
func init() {
persistentFlags := ssoCmd.PersistentFlags()
persistentFlags.StringVar(&flags.ProjectRef, "project-ref", "", "Project ref of the Supabase project.")
ssoAddFlags := ssoAddCmd.Flags()
ssoAddFlags.VarP(&ssoProviderType, "type", "t", "Type of identity provider (according to supported protocol).")
ssoAddFlags.StringSliceVar(&ssoDomains, "domains", nil, "Comma separated list of email domains to associate with the added identity provider.")
ssoAddFlags.StringVar(&ssoMetadataFile, "metadata-file", "", "File containing a SAML 2.0 Metadata XML document describing the identity provider.")
ssoAddFlags.StringVar(&ssoMetadataURL, "metadata-url", "", "URL pointing to a SAML 2.0 Metadata XML document describing the identity provider.")
ssoAddFlags.BoolVar(&ssoSkipURLValidation, "skip-url-validation", false, "Whether local validation of the SAML 2.0 Metadata URL should not be performed.")
ssoAddFlags.StringVar(&ssoAttributeMappingFile, "attribute-mapping-file", "", "File containing a JSON mapping between SAML attributes to custom JWT claims.")
ssoAddCmd.MarkFlagsMutuallyExclusive("metadata-file", "metadata-url")
cobra.CheckErr(ssoAddCmd.MarkFlagRequired("type"))
cobra.CheckErr(ssoAddCmd.MarkFlagFilename("metadata-file", "xml"))
cobra.CheckErr(ssoAddCmd.MarkFlagFilename("attribute-mapping-file", "json"))
ssoUpdateFlags := ssoUpdateCmd.Flags()
ssoUpdateFlags.StringSliceVar(&ssoDomains, "domains", []string{}, "Replace domains with this comma separated list of email domains.")
ssoUpdateFlags.StringSliceVar(&ssoAddDomains, "add-domains", []string{}, "Add this comma separated list of email domains to the identity provider.")
ssoUpdateFlags.StringSliceVar(&ssoRemoveDomains, "remove-domains", []string{}, "Remove this comma separated list of email domains from the identity provider.")
ssoUpdateFlags.StringVar(&ssoMetadataFile, "metadata-file", "", "File containing a SAML 2.0 Metadata XML document describing the identity provider.")
ssoUpdateFlags.StringVar(&ssoMetadataURL, "metadata-url", "", "URL pointing to a SAML 2.0 Metadata XML document describing the identity provider.")
ssoUpdateFlags.BoolVar(&ssoSkipURLValidation, "skip-url-validation", false, "Whether local validation of the SAML 2.0 Metadata URL should not be performed.")
ssoUpdateFlags.StringVar(&ssoAttributeMappingFile, "attribute-mapping-file", "", "File containing a JSON mapping between SAML attributes to custom JWT claims.")
ssoUpdateCmd.MarkFlagsMutuallyExclusive("metadata-file", "metadata-url")
ssoUpdateCmd.MarkFlagsMutuallyExclusive("domains", "add-domains")
ssoUpdateCmd.MarkFlagsMutuallyExclusive("domains", "remove-domains")
cobra.CheckErr(ssoUpdateCmd.MarkFlagFilename("metadata-file", "xml"))
cobra.CheckErr(ssoUpdateCmd.MarkFlagFilename("attribute-mapping-file", "json"))
ssoShowFlags := ssoShowCmd.Flags()
ssoShowFlags.BoolVar(&ssoMetadata, "metadata", false, "Show SAML 2.0 XML Metadata only")
ssoCmd.AddCommand(ssoAddCmd)
ssoCmd.AddCommand(ssoRemoveCmd)
ssoCmd.AddCommand(ssoUpdateCmd)
ssoCmd.AddCommand(ssoShowCmd)
ssoCmd.AddCommand(ssoListCmd)
ssoCmd.AddCommand(ssoInfoCmd)
rootCmd.AddCommand(ssoCmd)
}

62
cmd/start.go Normal file

@ -0,0 +1,62 @@
package cmd
import (
"fmt"
"os"
"sort"
"strings"
"github.com/spf13/afero"
"github.com/spf13/cobra"
"github.com/supabase/cli/internal/start"
"github.com/supabase/cli/internal/utils"
)
func validateExcludedContainers(excludedContainers []string) {
// Validate excluded containers
validContainers := start.ExcludableContainers()
var invalidContainers []string
for _, e := range excludedContainers {
if !utils.SliceContains(validContainers, e) {
invalidContainers = append(invalidContainers, e)
}
}
if len(invalidContainers) > 0 {
// Sort the names list so it's easier to visually spot the one you're looking for
sort.Strings(validContainers)
warning := fmt.Sprintf("%s The following container names are not valid to exclude: %s\nValid containers to exclude are: %s\n",
utils.Yellow("WARNING:"),
utils.Aqua(strings.Join(invalidContainers, ", ")),
utils.Aqua(strings.Join(validContainers, ", ")))
fmt.Fprint(os.Stderr, warning)
}
}
var (
allowedContainers = start.ExcludableContainers()
excludedContainers []string
ignoreHealthCheck bool
preview bool
startCmd = &cobra.Command{
GroupID: groupLocalDev,
Use: "start",
Short: "Start containers for Supabase local development",
RunE: func(cmd *cobra.Command, args []string) error {
validateExcludedContainers(excludedContainers)
return start.Run(cmd.Context(), afero.NewOsFs(), excludedContainers, ignoreHealthCheck)
},
}
)
func init() {
flags := startCmd.Flags()
names := strings.Join(allowedContainers, ",")
flags.StringSliceVarP(&excludedContainers, "exclude", "x", []string{}, "Names of containers to not start. ["+names+"]")
flags.BoolVar(&ignoreHealthCheck, "ignore-health-check", false, "Ignore unhealthy services and exit 0")
flags.BoolVar(&preview, "preview", false, "Connect to feature preview branch")
cobra.CheckErr(flags.MarkHidden("preview"))
rootCmd.AddCommand(startCmd)
}
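The exclusion check in `validateExcludedContainers` boils down to a set-membership test over the excludable container names. A minimal standalone sketch, with `sliceContains` standing in for `utils.SliceContains` and hypothetical container names:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// sliceContains reports whether s contains e (stand-in for utils.SliceContains).
func sliceContains(s []string, e string) bool {
	for _, v := range s {
		if v == e {
			return true
		}
	}
	return false
}

// invalidExclusions returns the entries of excluded that are not valid container names.
func invalidExclusions(valid, excluded []string) []string {
	var invalid []string
	for _, e := range excluded {
		if !sliceContains(valid, e) {
			invalid = append(invalid, e)
		}
	}
	return invalid
}

func main() {
	valid := []string{"imgproxy", "inbucket", "pgadmin"} // hypothetical names
	bad := invalidExclusions(valid, []string{"inbucket", "postgrest"})
	// Sort the valid list before printing, as the command does in its warning.
	sort.Strings(valid)
	fmt.Printf("invalid: %s (valid: %s)\n", strings.Join(bad, ", "), strings.Join(valid, ", "))
}
```

As in the command itself, unknown names produce only a warning; the start still proceeds with the valid exclusions.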

47
cmd/status.go Normal file

@ -0,0 +1,47 @@
package cmd
import (
"os"
"os/signal"
env "github.com/Netflix/go-env"
"github.com/spf13/afero"
"github.com/spf13/cobra"
"github.com/supabase/cli/internal/status"
"github.com/supabase/cli/internal/utils"
)
var (
override []string
names status.CustomName
output = utils.EnumFlag{
Allowed: append([]string{utils.OutputEnv}, utils.OutputDefaultAllowed...),
Value: utils.OutputPretty,
}
statusCmd = &cobra.Command{
GroupID: groupLocalDev,
Use: "status",
Short: "Show status of local Supabase containers",
PreRunE: func(cmd *cobra.Command, args []string) error {
es, err := env.EnvironToEnvSet(override)
if err != nil {
return err
}
return env.Unmarshal(es, &names)
},
RunE: func(cmd *cobra.Command, args []string) error {
ctx, _ := signal.NotifyContext(cmd.Context(), os.Interrupt)
return status.Run(ctx, names, output.Value, afero.NewOsFs())
},
Example: ` supabase status -o env --override-name api.url=NEXT_PUBLIC_SUPABASE_URL
supabase status -o json`,
}
)
func init() {
flags := statusCmd.Flags()
flags.VarP(&output, "output", "o", "Output format of status variables.")
flags.StringSliceVar(&override, "override-name", []string{}, "Override specific variable names.")
rootCmd.AddCommand(statusCmd)
}

37
cmd/stop.go Normal file

@ -0,0 +1,37 @@
package cmd
import (
"os"
"os/signal"
"github.com/spf13/afero"
"github.com/spf13/cobra"
"github.com/supabase/cli/internal/stop"
)
var (
noBackup bool
projectId string
all bool
stopCmd = &cobra.Command{
GroupID: groupLocalDev,
Use: "stop",
Short: "Stop all local Supabase containers",
RunE: func(cmd *cobra.Command, args []string) error {
ctx, _ := signal.NotifyContext(cmd.Context(), os.Interrupt)
return stop.Run(ctx, !noBackup, projectId, all, afero.NewOsFs())
},
}
)
func init() {
flags := stopCmd.Flags()
flags.Bool("backup", true, "Backs up the current database before stopping.")
flags.StringVar(&projectId, "project-id", "", "Local project ID to stop.")
cobra.CheckErr(flags.MarkHidden("backup"))
flags.BoolVar(&noBackup, "no-backup", false, "Deletes all data volumes after stopping.")
flags.BoolVar(&all, "all", false, "Stop all local Supabase instances from all projects across the machine.")
stopCmd.MarkFlagsMutuallyExclusive("project-id", "all")
rootCmd.AddCommand(stopCmd)
}

99
cmd/storage.go Normal file

@ -0,0 +1,99 @@
package cmd
import (
"github.com/spf13/afero"
"github.com/spf13/cobra"
"github.com/supabase/cli/internal/storage/client"
"github.com/supabase/cli/internal/storage/cp"
"github.com/supabase/cli/internal/storage/ls"
"github.com/supabase/cli/internal/storage/mv"
"github.com/supabase/cli/internal/storage/rm"
"github.com/supabase/cli/pkg/storage"
)
var (
storageCmd = &cobra.Command{
GroupID: groupManagementAPI,
Use: "storage",
Short: "Manage Supabase Storage objects",
}
recursive bool
lsCmd = &cobra.Command{
Use: "ls [path]",
Example: "ls ss:///bucket/docs",
Short: "List objects by path prefix",
Args: cobra.MaximumNArgs(1),
RunE: func(cmd *cobra.Command, args []string) error {
objectPath := client.STORAGE_SCHEME + ":///"
if len(args) > 0 {
objectPath = args[0]
}
return ls.Run(cmd.Context(), objectPath, recursive, afero.NewOsFs())
},
}
options storage.FileOptions
maxJobs uint
cpCmd = &cobra.Command{
Use: "cp <src> <dst>",
Example: `cp readme.md ss:///bucket/readme.md
cp -r docs ss:///bucket/docs
cp -r ss:///bucket/docs .
`,
Short: "Copy objects from src to dst path",
Args: cobra.ExactArgs(2),
RunE: func(cmd *cobra.Command, args []string) error {
opts := func(fo *storage.FileOptions) {
fo.CacheControl = options.CacheControl
fo.ContentType = options.ContentType
}
return cp.Run(cmd.Context(), args[0], args[1], recursive, maxJobs, afero.NewOsFs(), opts)
},
}
mvCmd = &cobra.Command{
Use: "mv <src> <dst>",
Short: "Move objects from src to dst path",
Example: "mv -r ss:///bucket/docs ss:///bucket/www/docs",
Args: cobra.ExactArgs(2),
RunE: func(cmd *cobra.Command, args []string) error {
return mv.Run(cmd.Context(), args[0], args[1], recursive, afero.NewOsFs())
},
}
rmCmd = &cobra.Command{
Use: "rm <file> ...",
Short: "Remove objects by file path",
Example: `rm -r ss:///bucket/docs
rm ss:///bucket/docs/example.md ss:///bucket/readme.md
`,
Args: cobra.MinimumNArgs(1),
RunE: func(cmd *cobra.Command, args []string) error {
return rm.Run(cmd.Context(), args, recursive, afero.NewOsFs())
},
}
)
func init() {
storageFlags := storageCmd.PersistentFlags()
storageFlags.Bool("linked", true, "Connects to Storage API of the linked project.")
storageFlags.Bool("local", false, "Connects to Storage API of the local database.")
storageCmd.MarkFlagsMutuallyExclusive("linked", "local")
lsCmd.Flags().BoolVarP(&recursive, "recursive", "r", false, "Recursively list a directory.")
storageCmd.AddCommand(lsCmd)
cpFlags := cpCmd.Flags()
cpFlags.BoolVarP(&recursive, "recursive", "r", false, "Recursively copy a directory.")
cpFlags.StringVar(&options.CacheControl, "cache-control", "max-age=3600", "Custom Cache-Control header for HTTP upload.")
cpFlags.StringVar(&options.ContentType, "content-type", "", "Custom Content-Type header for HTTP upload.")
cpFlags.Lookup("content-type").DefValue = "auto-detect"
cpFlags.UintVarP(&maxJobs, "jobs", "j", 1, "Maximum number of parallel jobs.")
storageCmd.AddCommand(cpCmd)
rmCmd.Flags().BoolVarP(&recursive, "recursive", "r", false, "Recursively remove a directory.")
storageCmd.AddCommand(rmCmd)
mvCmd.Flags().BoolVarP(&recursive, "recursive", "r", false, "Recursively move a directory.")
storageCmd.AddCommand(mvCmd)
rootCmd.AddCommand(storageCmd)
}

56
cmd/test.go Normal file

@ -0,0 +1,56 @@
package cmd
import (
"os"
"os/signal"
"github.com/spf13/afero"
"github.com/spf13/cobra"
"github.com/supabase/cli/internal/test/new"
"github.com/supabase/cli/internal/utils"
)
var (
testCmd = &cobra.Command{
GroupID: groupLocalDev,
Use: "test",
Short: "Run tests on local Supabase containers",
}
testDbCmd = &cobra.Command{
Use: "db [path] ...",
Short: dbTestCmd.Short,
RunE: dbTestCmd.RunE,
}
template = utils.EnumFlag{
Allowed: []string{new.TemplatePgTAP},
Value: new.TemplatePgTAP,
}
testNewCmd = &cobra.Command{
Use: "new <name>",
Short: "Create a new test file",
Args: cobra.ExactArgs(1),
RunE: func(cmd *cobra.Command, args []string) error {
ctx, _ := signal.NotifyContext(cmd.Context(), os.Interrupt)
return new.Run(ctx, args[0], template.Value, afero.NewOsFs())
},
}
)
func init() {
// Build db command
dbFlags := testDbCmd.Flags()
dbFlags.String("db-url", "", "Tests the database specified by the connection string (must be percent-encoded).")
dbFlags.Bool("linked", false, "Runs pgTAP tests on the linked project.")
dbFlags.Bool("local", true, "Runs pgTAP tests on the local database.")
testDbCmd.MarkFlagsMutuallyExclusive("db-url", "linked", "local")
testCmd.AddCommand(testDbCmd)
// Build new command
newFlags := testNewCmd.Flags()
newFlags.VarP(&template, "template", "t", "Template framework to generate.")
testCmd.AddCommand(testNewCmd)
// Build test command
rootCmd.AddCommand(testCmd)
}

26
cmd/unlink.go Normal file

@ -0,0 +1,26 @@
package cmd
import (
"os"
"os/signal"
"github.com/spf13/afero"
"github.com/spf13/cobra"
"github.com/supabase/cli/internal/unlink"
)
var (
unlinkCmd = &cobra.Command{
GroupID: groupLocalDev,
Use: "unlink",
Short: "Unlink a Supabase project",
RunE: func(cmd *cobra.Command, args []string) error {
ctx, _ := signal.NotifyContext(cmd.Context(), os.Interrupt)
return unlink.Run(ctx, afero.NewOsFs())
},
}
)
func init() {
rootCmd.AddCommand(unlinkCmd)
}

74
cmd/vanitySubdomains.go Normal file

@ -0,0 +1,74 @@
package cmd
import (
"github.com/spf13/afero"
"github.com/spf13/cobra"
"github.com/supabase/cli/internal/utils/flags"
"github.com/supabase/cli/internal/vanity_subdomains/activate"
"github.com/supabase/cli/internal/vanity_subdomains/check"
"github.com/supabase/cli/internal/vanity_subdomains/delete"
"github.com/supabase/cli/internal/vanity_subdomains/get"
)
var (
vanityCmd = &cobra.Command{
GroupID: groupManagementAPI,
Use: "vanity-subdomains",
Short: "Manage vanity subdomains for Supabase projects",
Long: `Manage vanity subdomains for Supabase projects.
Usage of vanity subdomains and custom domains is mutually exclusive.`,
}
desiredSubdomain string
vanityActivateCmd = &cobra.Command{
Use: "activate",
Short: "Activate a vanity subdomain",
Long: `Activate a vanity subdomain for your Supabase project.
This reconfigures your Supabase project to respond to requests on your vanity subdomain.
After the vanity subdomain is activated, your project's auth services will no longer function on the {project-ref}.{supabase-domain} hostname.
`,
RunE: func(cmd *cobra.Command, args []string) error {
return activate.Run(cmd.Context(), flags.ProjectRef, desiredSubdomain, afero.NewOsFs())
},
}
vanityGetCmd = &cobra.Command{
Use: "get",
Short: "Get the current vanity subdomain",
RunE: func(cmd *cobra.Command, args []string) error {
return get.Run(cmd.Context(), flags.ProjectRef, afero.NewOsFs())
},
}
vanityCheckCmd = &cobra.Command{
Use: "check-availability",
Short: "Checks if a desired subdomain is available for use",
RunE: func(cmd *cobra.Command, args []string) error {
return check.Run(cmd.Context(), flags.ProjectRef, desiredSubdomain, afero.NewOsFs())
},
}
vanityDeleteCmd = &cobra.Command{
Use: "delete",
Short: "Deletes a project's vanity subdomain",
Long: `Deletes the vanity subdomain for a project, and reverts to using the project ref for routing.`,
RunE: func(cmd *cobra.Command, args []string) error {
return delete.Run(cmd.Context(), flags.ProjectRef, afero.NewOsFs())
},
}
)
func init() {
vanityCmd.PersistentFlags().StringVar(&flags.ProjectRef, "project-ref", "", "Project ref of the Supabase project.")
vanityActivateCmd.Flags().StringVar(&desiredSubdomain, "desired-subdomain", "", "The desired vanity subdomain to use for your Supabase project.")
vanityCheckCmd.Flags().StringVar(&desiredSubdomain, "desired-subdomain", "", "The desired vanity subdomain to use for your Supabase project.")
vanityCmd.AddCommand(vanityGetCmd)
vanityCmd.AddCommand(vanityCheckCmd)
vanityCmd.AddCommand(vanityActivateCmd)
vanityCmd.AddCommand(vanityDeleteCmd)
rootCmd.AddCommand(vanityCmd)
}

21
docs/README.md Normal file

@ -0,0 +1,21 @@
## Extended man pages for CLI commands
### Build
Update the [version string](https://github.com/supabase/cli/blob/main/docs/main.go#L33) to match the latest release.
```bash
go run docs/main.go > cli_v1_commands.yaml
```
### Release
1. Clone the [supabase/supabase](https://github.com/supabase/supabase) repo
2. Copy over the CLI reference and reformat it using the supabase repo's config
```bash
mv ../cli/cli_v1_commands.yaml specs/
npx prettier -w specs/cli_v1_commands.yaml
```
3. If there are new commands added, update [common-cli-sections.json](https://github.com/supabase/supabase/blob/master/spec/common-cli-sections.json) manually

320
docs/main.go Normal file

@ -0,0 +1,320 @@
package main
import (
"bytes"
"embed"
"fmt"
"log"
"os"
"path/filepath"
"strings"
"github.com/spf13/cobra"
"github.com/spf13/pflag"
cli "github.com/supabase/cli/cmd"
"github.com/supabase/cli/internal/utils"
"gopkg.in/yaml.v3"
)
const tagOthers = "other-commands"
var (
examples map[string][]ExampleDoc
//go:embed templates/examples.yaml
exampleSpec string
//go:embed supabase/*
docsDir embed.FS
)
func main() {
semver := "latest"
if len(os.Args) > 1 {
semver = os.Args[1]
}
// Trim version tag
if semver[0] == 'v' {
semver = semver[1:]
}
if err := generate(semver); err != nil {
log.Fatalln(err)
}
}
func generate(version string) error {
dec := yaml.NewDecoder(strings.NewReader(exampleSpec))
if err := dec.Decode(&examples); err != nil {
return err
}
root := cli.GetRootCmd()
root.InitDefaultCompletionCmd()
root.InitDefaultHelpFlag()
spec := SpecDoc{
Clispec: "001",
Info: InfoDoc{
Id: "cli",
Version: version,
Title: strings.TrimSpace(root.Short),
Description: forceMultiLine("Supabase CLI provides you with tools to develop your application locally, and deploy your application to the Supabase platform."),
Language: "sh",
Source: "https://github.com/supabase/cli",
Bugs: "https://github.com/supabase/cli/issues",
Spec: "https://github.com/supabase/spec/cli_v1_commands.yaml",
Tags: getTags(root),
},
}
root.Flags().VisitAll(func(flag *pflag.Flag) {
if !flag.Hidden {
spec.Flags = append(spec.Flags, getFlags(flag))
}
})
cobra.CheckErr(root.MarkFlagRequired("experimental"))
// Generate, serialise, and print
yamlDoc := GenYamlDoc(root, &spec)
spec.Info.Options = yamlDoc.Options
// Reverse commands list
for i, j := 0, len(spec.Commands)-1; i < j; i, j = i+1, j-1 {
spec.Commands[i], spec.Commands[j] = spec.Commands[j], spec.Commands[i]
}
// Write to stdout
encoder := yaml.NewEncoder(os.Stdout)
encoder.SetIndent(2)
return encoder.Encode(spec)
}
type TagDoc struct {
Id string `yaml:",omitempty"`
Title string `yaml:",omitempty"`
Description string `yaml:",omitempty"`
}
type InfoDoc struct {
Id string `yaml:",omitempty"`
Version string `yaml:",omitempty"`
Title string `yaml:",omitempty"`
Language string `yaml:",omitempty"`
Source string `yaml:",omitempty"`
Bugs string `yaml:",omitempty"`
Spec string `yaml:",omitempty"`
Description string `yaml:",omitempty"`
Options string `yaml:",omitempty"`
Tags []TagDoc `yaml:",omitempty"`
}
type ValueDoc struct {
Id string `yaml:",omitempty"`
Name string `yaml:",omitempty"`
Type string `yaml:",omitempty"`
Description string `yaml:",omitempty"`
}
type FlagDoc struct {
Id string `yaml:",omitempty"`
Name string `yaml:",omitempty"`
Description string `yaml:",omitempty"`
Required bool `yaml:",omitempty"`
DefaultValue string `yaml:"default_value"`
AcceptedValues []ValueDoc `yaml:"accepted_values,omitempty"`
}
type ExampleDoc struct {
Id string `yaml:",omitempty"`
Name string `yaml:",omitempty"`
Code string `yaml:",omitempty"`
Response string `yaml:",omitempty"`
}
type CmdDoc struct {
Id string `yaml:",omitempty"`
Title string `yaml:",omitempty"`
Summary string `yaml:",omitempty"`
Source string `yaml:",omitempty"`
Description string `yaml:",omitempty"`
Examples []ExampleDoc `yaml:",omitempty"`
Tags []string `yaml:""`
Links []LinkDoc `yaml:""`
Usage string `yaml:",omitempty"`
Subcommands []string `yaml:""`
Options string `yaml:",omitempty"`
Flags []FlagDoc `yaml:""`
}
type LinkDoc struct {
Name string `yaml:",omitempty"`
Link string `yaml:",omitempty"`
}
type ParamDoc struct {
Id string `yaml:",omitempty"`
Title string `yaml:",omitempty"`
Description string `yaml:",omitempty"`
Required bool `yaml:",omitempty"`
Default string `yaml:",omitempty"`
Tags []string `yaml:",omitempty"`
Links []LinkDoc `yaml:""`
}
type SpecDoc struct {
Clispec string `yaml:",omitempty"`
Info InfoDoc `yaml:",omitempty"`
Flags []FlagDoc `yaml:",omitempty"`
Commands []CmdDoc `yaml:",omitempty"`
Parameters []FlagDoc `yaml:",omitempty"`
}
// DFS on command tree to generate documentation specs.
func GenYamlDoc(cmd *cobra.Command, root *SpecDoc) CmdDoc {
var subcommands []string
for _, c := range cmd.Commands() {
if !c.IsAvailableCommand() || c.IsAdditionalHelpTopicCommand() {
continue
}
sub := GenYamlDoc(c, root)
if !cmd.HasParent() && len(sub.Tags) == 0 {
sub.Tags = append(sub.Tags, tagOthers)
}
root.Commands = append(root.Commands, sub)
subcommands = append(subcommands, sub.Id)
}
yamlDoc := CmdDoc{
Id: strings.ReplaceAll(cmd.CommandPath(), " ", "-"),
Title: cmd.CommandPath(),
Summary: forceMultiLine(cmd.Short),
Description: forceMultiLine(strings.ReplaceAll(cmd.Long, "\t", " ")),
Subcommands: subcommands,
}
names := strings.Fields(cmd.CommandPath())
if len(names) > 3 {
base := strings.Join(names[2:], "-")
names = append(names[:2], base)
}
path := filepath.Join(names...) + ".md"
if contents, err := docsDir.ReadFile(path); err == nil {
noHeader := bytes.TrimLeftFunc(contents, func(r rune) bool {
return r != '\n'
})
yamlDoc.Description = forceMultiLine(string(noHeader))
}
if eg, ok := examples[yamlDoc.Id]; ok {
yamlDoc.Examples = eg
}
if len(cmd.GroupID) > 0 {
yamlDoc.Tags = append(yamlDoc.Tags, cmd.GroupID)
}
if cmd.Runnable() {
yamlDoc.Usage = forceMultiLine(cmd.UseLine())
}
// Only print flags for root and leaf commands
if !cmd.HasSubCommands() {
flags := cmd.LocalFlags()
flags.VisitAll(func(flag *pflag.Flag) {
if !flag.Hidden {
yamlDoc.Flags = append(yamlDoc.Flags, getFlags(flag))
}
})
// Print required flag for experimental commands
globalFlags := cmd.Root().Flags()
if cli.IsExperimental(cmd) {
flag := globalFlags.Lookup("experimental")
yamlDoc.Flags = append(yamlDoc.Flags, getFlags(flag))
}
// Leaf commands should inherit parent flags except root
parentFlags := cmd.InheritedFlags()
parentFlags.VisitAll(func(flag *pflag.Flag) {
if !flag.Hidden && globalFlags.Lookup(flag.Name) == nil {
yamlDoc.Flags = append(yamlDoc.Flags, getFlags(flag))
}
})
}
return yamlDoc
}
func getFlags(flag *pflag.Flag) FlagDoc {
doc := FlagDoc{
Id: flag.Name,
Name: getName(flag),
Description: forceMultiLine(getUsage(flag)),
DefaultValue: flag.DefValue,
Required: flag.Annotations[cobra.BashCompOneRequiredFlag] != nil,
}
if f, ok := flag.Value.(*utils.EnumFlag); ok {
for _, v := range f.Allowed {
doc.AcceptedValues = append(doc.AcceptedValues, ValueDoc{
Id: v,
Name: v,
Type: flag.Value.Type(),
})
}
}
return doc
}
// Prints a human readable flag name.
//
// -f, --flag `string`
func getName(flag *pflag.Flag) (line string) {
// Prefix: shorthand
if flag.Shorthand != "" && flag.ShorthandDeprecated == "" {
line += fmt.Sprintf("-%s, ", flag.Shorthand)
}
line += fmt.Sprintf("--%s", flag.Name)
// Suffix: type
if varname, _ := pflag.UnquoteUsage(flag); varname != "" {
line += fmt.Sprintf(" <%s>", varname)
}
// Not used by our cmd but kept here for consistency
if flag.NoOptDefVal != "" {
switch flag.Value.Type() {
case "string":
line += fmt.Sprintf("[=\"%s\"]", flag.NoOptDefVal)
case "bool":
if flag.NoOptDefVal != "true" {
line += fmt.Sprintf("[=%s]", flag.NoOptDefVal)
}
case "count":
if flag.NoOptDefVal != "+1" {
line += fmt.Sprintf("[=%s]", flag.NoOptDefVal)
}
default:
line += fmt.Sprintf("[=%s]", flag.NoOptDefVal)
}
}
return line
}
// Prints flag usage and default value.
//
// Select a plan. (default "free")
func getUsage(flag *pflag.Flag) string {
_, usage := pflag.UnquoteUsage(flag)
return usage
}
// Yaml lib generates incorrect yaml with long strings that do not contain \n.
//
// example: 'a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
// a a a a a a '
func forceMultiLine(s string) string {
if len(s) > 60 && !strings.Contains(s, "\n") {
s = s + "\n"
}
return s
}
func getTags(cmd *cobra.Command) (tags []TagDoc) {
for _, group := range cmd.Groups() {
tags = append(tags, TagDoc{
Id: group.ID,
Title: group.Title[:len(group.Title)-1],
})
}
tags = append(tags, TagDoc{Id: tagOthers, Title: "Additional Commands"})
return tags
}

15
docs/supabase/db/diff.md Normal file

@ -0,0 +1,15 @@
## supabase-db-diff
Diffs schema changes made to the local or remote database.
Requires the local development stack to be running when diffing against the local database. To diff against a remote or self-hosted database, specify the `--linked` or `--db-url` flag respectively.
Runs [djrobstep/migra](https://github.com/djrobstep/migra) in a container to compare schema differences between the target database and a shadow database. The shadow database is created by applying migrations from the local `supabase/migrations` directory in a separate container. Output is written to stdout by default. For convenience, you can also save the schema diff as a new migration file by passing the `-f` flag.
By default, all schemas in the target database are diffed. Use the `--schema public,extensions` flag to restrict diffing to a subset of schemas.
While the diff command is able to capture most schema changes, there are cases where it is known to fail. Currently, this can happen if your schema contains:
- Changes to publication
- Changes to storage buckets
- Views with `security_invoker` attributes

9
docs/supabase/db/dump.md Normal file

@ -0,0 +1,9 @@
## supabase-db-dump
Dumps contents from a remote database.
Requires your local project to be linked to a remote database by running `supabase link`. For self-hosted databases, you can pass in the connection parameters using `--db-url` flag.
Runs `pg_dump` in a container with additional flags to exclude Supabase managed schemas. The ignored schemas include auth, storage, and those created by extensions.
The default dump does not contain any data or custom roles. To dump those contents explicitly, pass the `--data-only` or `--role-only` flag respectively.

17
docs/supabase/db/lint.md Normal file

@ -0,0 +1,17 @@
## supabase-db-lint
Lints local database for schema errors.
Requires the local development stack to be running when linting against the local database. To lint against a remote or self-hosted database, specify the `--linked` or `--db-url` flag respectively.
Runs `plpgsql_check` extension in the local Postgres container to check for errors in all schemas. The default lint level is `warning` and can be raised to error via the `--level` flag.
To lint against specific schemas only, pass in the `--schema` flag.
The `--fail-on` flag can be used to control when the command should exit with a non-zero status code. The possible values are:
- `none` (default): Always exit with a zero status code, regardless of lint results.
- `warning`: Exit with a non-zero status code if any warnings or errors are found.
- `error`: Exit with a non-zero status code only if errors are found.
This flag is particularly useful in CI/CD pipelines where you want to fail the build based on certain lint conditions.

9
docs/supabase/db/pull.md Normal file

@ -0,0 +1,9 @@
## supabase-db-pull
Pulls schema changes from a remote database. A new migration file will be created under `supabase/migrations` directory.
Requires your local project to be linked to a remote database by running `supabase link`. For self-hosted databases, you can pass in the connection parameters using `--db-url` flag.
Optionally, a new row can be inserted into the migration history table to reflect the current state of the remote database.
If no entries exist in the migration history table, `pg_dump` will be used to capture all contents of the remote schemas you have created. Otherwise, this command will only diff schema changes against the remote database, similar to running `db diff --linked`.

11
docs/supabase/db/push.md Normal file

@ -0,0 +1,11 @@
## supabase-db-push
Pushes all local migrations to a remote database.
Requires your local project to be linked to a remote database by running `supabase link`. For self-hosted databases, you can pass in the connection parameters using `--db-url` flag.
The first time this command is run, a migration history table will be created under `supabase_migrations.schema_migrations`. After successfully applying a migration, a new row will be inserted into the migration history table with timestamp as its unique id. Subsequent pushes will skip migrations that have already been applied.
If you need to mutate the migration history table, such as deleting existing entries or inserting new entries without actually running the migration, use the `migration repair` command.
Use the `--dry-run` flag to view the list of changes before applying.


@ -0,0 +1,9 @@
## supabase-db-reset
Resets the local database to a clean state.
Requires the local development stack to be started by running `supabase start`.
Recreates the local Postgres container and applies all local migrations found in `supabase/migrations` directory. If test data is defined in `supabase/seed.sql`, it will be seeded after the migrations are run. Any other data or schema changes made during local development will be discarded.
When running db reset with the `--linked` or `--db-url` flag, a SQL script is executed to identify and drop all user-created entities in the remote database. Since Postgres roles are cluster-level entities, any custom roles created through the dashboard or `supabase/roles.sql` will not be deleted by a remote reset.


@ -0,0 +1,7 @@
## supabase-domains-activate
Activates the custom hostname configuration for a project.
This reconfigures your Supabase project to respond to requests on your custom hostname.
After the custom hostname is activated, your project's third-party auth providers will no longer function on the Supabase-provisioned subdomain. Please refer to [Prepare to activate your domain](/docs/guides/platform/custom-domains#prepare-to-activate-your-domain) section in our documentation to learn more about the steps you need to follow.


@ -0,0 +1,28 @@
## supabase-functions-serve
Serve all Functions locally.
The `supabase functions serve` command includes additional flags to assist developers in debugging Edge Functions via the v8 inspector protocol, allowing for debugging via Chrome DevTools, VS Code, and IntelliJ IDEA, for example. Refer to the [docs guide](/docs/guides/functions/debugging-tools) for setup instructions.
1. `--inspect`
* Alias of `--inspect-mode brk`.
2. `--inspect-mode [ run | brk | wait ]`
* Activates the inspector capability.
* `run` mode simply allows a connection without additional behavior. It is not ideal for short scripts, but it can be useful for long-running scripts where you might occasionally want to set breakpoints.
* `brk` mode is the same as `run` mode, but it additionally sets a breakpoint at the first line to pause script execution before any code runs.
* `wait` mode is similar to `brk` mode, but instead of setting a breakpoint at the first line, it pauses script execution until an inspector session is connected.
3. `--inspect-main`
* Can only be used when one of the above two flags is enabled.
* By default, creating an inspector session for the main worker is not allowed, but this flag allows it.
* Other behaviors follow the `inspect-mode` flag mentioned above.
Additionally, the following properties can be customized via `supabase/config.toml` under the `edge_runtime` section.
1. `inspector_port`
* The port used to listen to the Inspector session, defaults to 8083.
2. `policy`
* A value that indicates how the edge-runtime should forward incoming HTTP requests to the worker.
* `per_worker` allows multiple HTTP requests to be forwarded to a worker that has already been created.
* `oneshot` will force the worker to process a single HTTP request and then exit. (Intended for debugging; this is especially useful if you want changes you've made to be reflected immediately.)
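Put together, a `supabase/config.toml` fragment for these settings might look like the following (the values are illustrative):

```toml
[edge_runtime]
# Port used to listen for Inspector sessions (defaults to 8083).
inspector_port = 8083
# "per_worker" reuses a worker across requests; "oneshot" handles one
# request per worker, which is handy while debugging.
policy = "oneshot"
```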

9
docs/supabase/init.md Normal file

@ -0,0 +1,9 @@
## supabase-init
Initialize configurations for Supabase local development.
A `supabase/config.toml` file is created in your current working directory. This configuration is specific to each local project.
> You may override the directory path by specifying the `SUPABASE_WORKDIR` environment variable or `--workdir` flag.
In addition to `config.toml`, the `supabase` directory may also contain other Supabase objects, such as `migrations`, `functions`, `tests`, etc.


@ -0,0 +1,14 @@
## db-bloat
This command displays an estimation of table "bloat". Due to Postgres' [MVCC](https://www.postgresql.org/docs/current/mvcc.html), when data is updated or deleted, new rows are created and old rows are made invisible and marked as "dead tuples". Usually the [autovacuum](https://supabase.com/docs/guides/platform/database-size#vacuum-operations) process will asynchronously clean up the dead tuples. Sometimes autovacuum is unable to work fast enough to reduce or prevent tables from becoming bloated. High bloat can slow down queries, cause excessive IOPS, and waste space in your database.
Tables with a high bloat ratio should be investigated to determine whether vacuuming is falling behind or there are other issues.
```
TYPE │ SCHEMA NAME │ OBJECT NAME │ BLOAT │ WASTE
────────┼─────────────┼────────────────────────────┼───────┼─────────────
table │ public │ very_bloated_table │ 41.0 │ 700 MB
table │ public │ my_table │ 4.0 │ 76 MB
table │ public │ happy_table │ 1.0 │ 1472 kB
index │ public │ happy_table::my_nice_index │ 0.7 │ 880 kB
```


@ -0,0 +1,9 @@
## db-blocking
This command shows you statements that are currently holding locks and blocking, as well as the statement that is being blocked. This can be used in conjunction with `inspect db locks` to determine which statements need to be terminated in order to resolve lock contention.
```
BLOCKED PID │ BLOCKING STATEMENT │ BLOCKING DURATION │ BLOCKING PID │ BLOCKED STATEMENT │ BLOCKED DURATION
──────────────┼──────────────────────────────┼───────────────────┼──────────────┼────────────────────────────────────────────────────────────────────────────────────────┼───────────────────
253 │ select count(*) from mytable │ 00:00:03.838314 │ 13495 │ UPDATE "mytable" SET "updated_at" = '2023─08─03 14:07:04.746688' WHERE "id" = 83719341 │ 00:00:03.821826
```


@ -0,0 +1,14 @@
## db-cache-hit
This command provides information on the efficiency of the buffer cache and how often your queries have to hit the disk rather than reading from memory. Information on both index reads (`index hit rate`) and table reads (`table hit rate`) is shown. In general, databases with low cache hit rates perform worse, as it is slower to go to disk than to retrieve data from memory. If your table hit rate is low, this can indicate that you do not have enough RAM and may benefit from upgrading to a larger compute add-on with more memory. If your index hit rate is low, this may indicate that there is scope to add more appropriate indexes.
The hit rates are calculated as a ratio of number of table or index blocks fetched from the postgres buffer cache against the sum of cached blocks and uncached blocks read from disk.
On smaller compute plans (free, small, medium), a ratio of below 99% can indicate a problem. On larger plans the hit rates may be lower but performance will remain constant as the data may use the OS cache rather than Postgres buffer cache.
```
NAME │ RATIO
─────────────────┼───────────
index hit rate │ 0.996621
table hit rate │ 0.999341
```
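The ratios above can be derived from the block I/O counters in the `pg_statio_user_tables` and `pg_statio_user_indexes` views. A minimal sketch of the arithmetic (the counter values are hypothetical):

```python
def hit_rate(blocks_hit: int, blocks_read: int) -> float:
    """Buffer cache hit ratio: blocks served from the Postgres buffer
    cache divided by all blocks fetched (cache hits + disk reads)."""
    total = blocks_hit + blocks_read
    return blocks_hit / total if total else 0.0

# e.g. sums of heap_blks_hit / heap_blks_read across pg_statio_user_tables
print(round(hit_rate(999_341, 659), 6))
```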

# db-calls
This command is much like the `supabase inspect db outliers` command, but ordered by the number of times a statement has been called.
You can use this information to see which queries are called most often, which can potentially be good candidates for optimisation.
```
QUERY │ TOTAL EXECUTION TIME │ PROPORTION OF TOTAL EXEC TIME │ NUMBER CALLS │ SYNC IO TIME
─────────────────────────────────────────────────┼──────────────────────┼───────────────────────────────┼──────────────┼──────────────────
SELECT * FROM users WHERE id = $1 │ 14:50:11.828939 │ 89.8% │ 183,389,757 │ 00:00:00.002018
SELECT * FROM user_events │ 01:20:23.466633 │ 1.4% │ 78,325 │ 00:00:00
INSERT INTO users (email, name) VALUES ($1, $2)│ 00:40:11.616882 │ 0.8% │ 54,003 │ 00:00:00.000322
```

# db-index-sizes
This command displays the size of each index in the database. It is calculated by taking the number of pages (reported in `relpages`) and multiplying it by the page size (8192 bytes).
```
NAME │ SIZE
──────────────────────────────┼─────────────
user_events_index │ 2082 MB
job_run_details_pkey │ 3856 kB
schema_migrations_pkey │ 16 kB
refresh_tokens_token_unique │ 8192 bytes
users_instance_id_idx │ 0 bytes
buckets_pkey │ 0 bytes
```
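The sizes above follow directly from `relpages`; below is a sketch of the page arithmetic plus a simplified take on Postgres' `pg_size_pretty` formatting (the `relpages` values are hypothetical):

```python
def index_size_bytes(relpages: int, block_size: int = 8192) -> int:
    """Index size as pages multiplied by page size (8192 bytes by default)."""
    return relpages * block_size

def pretty(n: int) -> str:
    """Simplified pg_size_pretty: stay in a unit until the value reaches
    10 * 1024 of it, then divide by 1024 (rounding to the nearest unit)."""
    if n < 10 * 1024:
        return f"{n} bytes"
    for unit in ("kB", "MB", "GB", "TB"):
        n = (n + 512) // 1024
        if n < 10 * 1024:
            return f"{n} {unit}"
    return f"{n} PB"

print(pretty(index_size_bytes(2)))        # 16 kB
print(pretty(index_size_bytes(266_496)))  # 2082 MB
```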

# db-index-usage
This command provides information on the efficiency of indexes, represented as the percentage of total scans that were index scans. A low percentage can indicate under-indexing, or that the wrong data is being indexed.
```
TABLE NAME │ PERCENTAGE OF TIMES INDEX USED │ ROWS IN TABLE
────────────────────┼────────────────────────────────┼────────────────
user_events │ 99 │ 4225318
user_feed │ 99 │ 3581573
unindexed_table │ 0 │ 322911
job │ 100 │ 33242
schema_migrations │ 97 │ 0
migrations │ Insufficient data │ 0
```

# db-locks
This command displays queries that have taken out an exclusive lock on a relation. Exclusive locks typically prevent other operations on that relation from taking place, and can be a cause of "hung" queries that are waiting for a lock to be granted.
If you see a query that is hanging for a very long time or causing blocking issues, you may consider cancelling it by connecting to the database and running `SELECT pg_cancel_backend(PID);`. If the query still does not stop, you can force a hard stop by running `SELECT pg_terminate_backend(PID);`.
```
PID │ RELNAME │ TRANSACTION ID │ GRANTED │ QUERY │ AGE
─────────┼─────────┼────────────────┼─────────┼─────────────────────────────────────────┼───────────
328112 │ null │ 0 │ t │ SELECT * FROM logs; │ 00:04:20
```

# db-long-running-queries
This command displays currently running queries that have been running for longer than 5 minutes, ordered descending by duration. Very long running queries can be a source of multiple issues, such as preventing DDL statements from completing or vacuum being unable to update `relfrozenxid`.
```
PID │ DURATION │ QUERY
───────┼─────────────────┼───────────────────────────────────────────────────────────────────────────────────────
19578 | 02:29:11.200129 | EXPLAIN SELECT "students".* FROM "students" WHERE "students"."id" = 1450645 LIMIT 1
19465 | 02:26:05.542653 | EXPLAIN SELECT "students".* FROM "students" WHERE "students"."id" = 1889881 LIMIT 1
19632 | 02:24:46.962818 | EXPLAIN SELECT "students".* FROM "students" WHERE "students"."id" = 1581884 LIMIT 1
```

# db-outliers
This command displays statements, obtained from `pg_stat_statements`, ordered by the amount of time to execute in aggregate. This includes the statement itself, the total execution time for that statement, the proportion of total execution time for all statements that statement has taken up, the number of times that statement has been called, and the amount of time that statement spent on synchronous I/O (reading/writing from the file system).
Typically, an efficient query will have an appropriate ratio of calls to total execution time, with as little time spent on I/O as possible. Queries that have a high total execution time but low call count should be investigated to improve their performance. Queries that have a high proportion of execution time being spent on synchronous I/O should also be investigated.
```
QUERY │ EXECUTION TIME │ PROPORTION OF EXEC TIME │ NUMBER CALLS │ SYNC IO TIME
─────────────────────────────────────────┼──────────────────┼─────────────────────────┼──────────────┼───────────────
SELECT * FROM archivable_usage_events.. │ 154:39:26.431466 │ 72.2% │ 34,211,877 │ 00:00:00
COPY public.archivable_usage_events (.. │ 50:38:33.198418 │ 23.6% │ 13 │ 13:34:21.00108
COPY public.usage_events (id, reporte.. │ 02:32:16.335233 │ 1.2% │ 13 │ 00:34:19.784318
INSERT INTO usage_events (id, retaine.. │ 01:42:59.436532 │ 0.8% │ 12,328,187 │ 00:00:00
SELECT * FROM usage_events WHERE (alp.. │ 01:18:10.754354 │ 0.6% │ 102,114,301 │ 00:00:00
```
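The proportion column is simply each statement's total execution time divided by the aggregate across all statements tracked in `pg_stat_statements`. A minimal sketch of that calculation, with hypothetical timings:

```python
def exec_time_proportions(total_exec_ms: dict[str, float]) -> dict[str, float]:
    """Each statement's share of aggregate execution time, as a percentage
    (mirrors the PROPORTION OF EXEC TIME column)."""
    grand_total = sum(total_exec_ms.values())
    return {stmt: round(100 * ms / grand_total, 1)
            for stmt, ms in total_exec_ms.items()}

# Hypothetical total execution times in milliseconds
shares = exec_time_proportions({
    "SELECT * FROM archivable_usage_events ..": 722_000,
    "COPY public.archivable_usage_events (..": 236_000,
    "everything else": 42_000,
})
print(shares["SELECT * FROM archivable_usage_events .."])  # 72.2
```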

# db-replication-slots
This command shows information about [logical replication slots](https://www.postgresql.org/docs/current/logical-replication.html) that are set up on the database. It shows whether the slot is active, the state of the WAL sender process ('startup', 'catchup', 'streaming', 'backup', 'stopping'), the replication client address, and the replication lag in GB.
This command is useful for checking that replication lag is as low as possible. Replication lag can occur due to network latency issues, slow disk I/O, long running transactions, or the subscriber being unable to consume WAL fast enough.
```
NAME │ ACTIVE │ STATE │ REPLICATION CLIENT ADDRESS │ REPLICATION LAG GB
─────────────────────────────────────────────┼────────┼─────────┼────────────────────────────┼─────────────────────
supabase_realtime_replication_slot │ t │ N/A │ N/A │ 0
datastream │ t │ catchup │ 24.201.24.106 │ 45
```

# db-role-connections
This command shows the number of active connections for each database role, so you can see which specific role might be consuming more connections than expected.
This is a Supabase specific command. You can see this breakdown on the dashboard as well:
https://app.supabase.com/project/_/database/roles
The maximum number of active connections depends [on your instance size](https://supabase.com/docs/guides/platform/compute-add-ons). You can [manually override](https://supabase.com/docs/guides/platform/performance#allowing-higher-number-of-connections) the allowed number of connections, but it is not advised.
```
ROLE NAME │ ACTIVE CONNECTIONS
────────────────────────────┼───────────────────
authenticator │ 5
postgres │ 5
supabase_admin │ 1
pgbouncer │ 1
anon │ 0
authenticated │ 0
service_role │ 0
dashboard_user │ 0
supabase_auth_admin │ 0
supabase_storage_admin │ 0
supabase_functions_admin │ 0
pgsodium_keyholder │ 0
pg_read_all_data │ 0
pg_write_all_data │ 0
pg_monitor │ 0
Active connections 12/90
```

# db-seq-scans
This command displays the number of sequential scans recorded against all tables, descending by count of sequential scans. Tables that have very high numbers of sequential scans may be underindexed, and it may be worth investigating queries that read from these tables.
```
NAME │ COUNT
───────────────────────────────────┼─────────
emails │ 182435
users │ 25063
job_run_details │ 60
schema_migrations │ 0
migrations │ 0
```

# db-table-index-sizes
This command displays the total size of indexes for each table. It is calculated by using the system administration function `pg_indexes_size()`.
```
TABLE │ INDEX SIZE
───────────────────────────────────┼─────────────
job_run_details │ 10104 kB
users │ 128 kB
job │ 32 kB
instances │ 8192 bytes
http_request_queue │ 0 bytes
```

# db-table-record-counts
This command displays an estimated count of rows per table, descending by estimated count. The estimated count is derived from `n_live_tup`, which is updated by vacuum operations. Due to the way `n_live_tup` is populated, sparse vs. dense pages can result in estimates that deviate significantly from the real row count.
```
NAME │ ESTIMATED COUNT
─────────────┼──────────────────
logs │ 322943
emails │ 1103
job │ 1
migrations │ 0
```

# db-table-sizes
This command displays the size of each table in the database. It is calculated by using the system administration function `pg_table_size()`, which includes the size of the main data fork, free space map, visibility map and TOAST data. It does not include the size of the table's indexes.
```
NAME │ SIZE
───────────────────────────────────┼─────────────
job_run_details │ 385 MB
emails │ 584 kB
job │ 40 kB
sessions │ 0 bytes
prod_resource_notifications_meta │ 0 bytes
```

# db-total-index-size
This command displays the total size of all indexes on the database. It is calculated by taking the number of pages (reported in `relpages`) and multiplying it by the page size (8192 bytes).
```
SIZE
─────────
12 MB
```

# db-total-table-sizes
This command displays the total size of each table in the database. It is the sum of the values that `pg_table_size()` and `pg_indexes_size()` give for each table. System tables inside `pg_catalog` and `information_schema` are not included.
```
NAME │ SIZE
───────────────────────────────────┼─────────────
job_run_details │ 395 MB
slack_msgs │ 648 kB
emails │ 640 kB
```

# db-unused-indexes
This command displays indexes that have fewer than 50 scans recorded against them and are greater than 5 pages in size, ordered by size relative to the number of index scans. It is generally useful for discovering unused indexes. Indexes can impact write performance, as well as read performance should they occupy space in memory, so it's a good idea to remove indexes that are not needed or used.
```
TABLE │ INDEX │ INDEX SIZE │ INDEX SCANS
─────────────────────┼────────────────────────────────────────────┼────────────┼──────────────
public.users │ user_id_created_at_idx │ 97 MB │ 0
```

# db-vacuum-stats
This shows you stats about the vacuum activities for each table. Due to Postgres' [MVCC](https://www.postgresql.org/docs/current/mvcc.html), when data is updated or deleted, new rows are created and old rows are made invisible and marked as "dead tuples". Usually the [autovacuum](https://supabase.com/docs/guides/platform/database-size#vacuum-operations) process will asynchronously clean up the dead tuples.
The command lists when the last vacuum and last auto vacuum took place, the row count on the table, the count of dead rows, and whether autovacuum is expected to run or not. If the number of dead rows is much higher than the row count, or if an autovacuum is expected but has not been performed for some time, this can indicate that autovacuum is not able to keep up and that your vacuum settings need to be tweaked, or that you require more compute or disk IOPS to allow autovacuum to complete.
```
SCHEMA │ TABLE │ LAST VACUUM │ LAST AUTO VACUUM │ ROW COUNT │ DEAD ROW COUNT │ EXPECT AUTOVACUUM?
──────────────────────┼──────────────────────────────────┼─────────────┼──────────────────┼──────────────────────┼────────────────┼─────────────────────
auth │ users │ │ 2023-06-26 12:34 │ 18,030 │ 0 │ no
public │ profiles │ │ 2023-06-26 23:45 │ 13,420 │ 28 │ no
public │ logs │ │ 2023-06-26 01:23 │ 1,313,033 │ 3,318,228 │ yes
storage │ objects │ │ │ No stats │ 0 │ no
storage │ buckets │ │ │ No stats │ 0 │ no
supabase_migrations │ schema_migrations │ │ │ No stats │ 0 │ no
```
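The `EXPECT AUTOVACUUM?` column follows Postgres' autovacuum trigger condition: a table qualifies when its dead tuples exceed `autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor * reltuples`. A sketch of that check, assuming the default settings (50 and 0.2):

```python
def expect_autovacuum(dead_tuples: int, live_tuples: int,
                      threshold: int = 50, scale_factor: float = 0.2) -> bool:
    """True when dead tuples exceed the autovacuum trigger point.
    Defaults match autovacuum_vacuum_threshold = 50 and
    autovacuum_vacuum_scale_factor = 0.2."""
    return dead_tuples > threshold + scale_factor * live_tuples

# Matches the sample output: public.logs qualifies, public.profiles does not
print(expect_autovacuum(3_318_228, 1_313_033))  # True
print(expect_autovacuum(28, 13_420))            # False
```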

## supabase-link
Link your local development project to a hosted Supabase project.
PostgREST configurations are fetched from the Supabase platform and validated against your local configuration file.
Optionally, database settings can be validated if you provide a password. Your database password is saved in native credentials storage if available.
> If you do not want to be prompted for the database password, such as in a CI environment, you may specify it explicitly via the `SUPABASE_DB_PASSWORD` environment variable.
Some commands like `db dump`, `db push`, and `db pull` require your project to be linked first.

## supabase-login
Connect the Supabase CLI to your Supabase account by logging in with your [personal access token](https://supabase.com/dashboard/account/tokens).
Your access token is stored securely in [native credentials storage](https://github.com/zalando/go-keyring#dependencies). If native credentials storage is unavailable, it will be written to a plain text file at `~/.supabase/access-token`.
> If this behavior is not desired, such as in a CI environment, you may skip login by specifying the `SUPABASE_ACCESS_TOKEN` environment variable in other commands.
The Supabase CLI uses the stored token to access Management APIs for projects, functions, secrets, etc.

## supabase-migration-list
Lists migration history in both local and remote databases.
Requires your local project to be linked to a remote database by running `supabase link`. For self-hosted databases, you can pass in the connection parameters using `--db-url` flag.
> Note that URL strings must be escaped according to [RFC 3986](https://www.rfc-editor.org/rfc/rfc3986).
Local migrations are stored in `supabase/migrations` directory while remote migrations are tracked in `supabase_migrations.schema_migrations` table. Only the timestamps are compared to identify any differences.
In case of discrepancies between the local and remote migration history, you can resolve them using the `migration repair` command.

## supabase-migration-new
Creates a new migration file locally.
A `supabase/migrations` directory will be created if it does not already exist in your current `workdir`. All schema migration files must be created in this directory following the pattern `<timestamp>_<name>.sql`.
Outputs from other commands like `db diff` may be piped to `migration new <name>` via stdin.
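The timestamps in the sample outputs elsewhere in these docs (e.g. `20230103054303` for 2023-01-03 05:43:03) suggest a `YYYYMMDDHHMMSS` UTC prefix. A sketch of the naming convention follows; the CLI generates these names for you, so this is illustrative only:

```python
import re
from datetime import datetime, timezone

def migration_filename(name: str) -> str:
    """Build a <timestamp>_<name>.sql filename with a UTC
    YYYYMMDDHHMMSS prefix, matching the migration file pattern."""
    ts = datetime.now(timezone.utc).strftime("%Y%m%d%H%M%S")
    return f"{ts}_{name}.sql"

fname = migration_filename("create_employees_table")
print(fname)  # e.g. 20240414044403_create_employees_table.sql
```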

## supabase-migration-repair
Repairs the remote migration history table.
Requires your local project to be linked to a remote database by running `supabase link`.
If your local and remote migration history goes out of sync, you can repair the remote history by marking specific migrations as `--status applied` or `--status reverted`. Marking as `reverted` will delete an existing record from the migration history table while marking as `applied` will insert a new record.
For example, your migration history may look like the table below, with missing entries in either local or remote.
```bash
$ supabase migration list
LOCAL │ REMOTE │ TIME (UTC)
─────────────────┼────────────────┼──────────────────────
│ 20230103054303 │ 2023-01-03 05:43:03
20230103054315 │ │ 2023-01-03 05:43:15
```
To reset your migration history to a clean state, first delete your local migration file.
```bash
$ rm supabase/migrations/20230103054315_remote_commit.sql
$ supabase migration list
LOCAL │ REMOTE │ TIME (UTC)
─────────────────┼────────────────┼──────────────────────
│ 20230103054303 │ 2023-01-03 05:43:03
```
Then mark the remote migration `20230103054303` as reverted.
```bash
$ supabase migration repair 20230103054303 --status reverted
Connecting to remote database...
Repaired migration history: [20220810154537] => reverted
Finished supabase migration repair.
$ supabase migration list
LOCAL │ REMOTE │ TIME (UTC)
─────────────────┼────────────────┼──────────────────────
```
Now you can run `db pull` again to dump the remote schema as a local migration file.
```bash
$ supabase db pull
Connecting to remote database...
Schema written to supabase/migrations/20240414044403_remote_schema.sql
Update remote migration history table? [Y/n]
Repaired migration history: [20240414044403] => applied
Finished supabase db pull.
$ supabase migration list
LOCAL │ REMOTE │ TIME (UTC)
─────────────────┼────────────────┼──────────────────────
20240414044403 │ 20240414044403 │ 2024-04-14 04:44:03
```

## supabase-migration-squash
Squashes local schema migrations to a single migration file.
The squashed migration is equivalent to a schema only dump of the local database after applying existing migration files. This is especially useful when you want to remove repeated modifications of the same schema from your migration history.
However, one limitation is that data manipulation statements, such as insert, update, or delete, are omitted from the squashed migration. You will have to add them back manually in a new migration file. This includes cron jobs, storage buckets, and any encrypted secrets in vault.
By default, the latest `<timestamp>_<name>.sql` file will be updated to contain the squashed migration. You can override the target version using the `--version <timestamp>` flag.
If your `supabase/migrations` directory is empty, running `supabase migration squash` will do nothing.

## supabase-start
Starts the Supabase local development stack.
Requires `supabase/config.toml` to be created in your current working directory by running `supabase init`.
All service containers are started by default. You can exclude those not needed by passing in `-x` flag. To exclude multiple containers, either pass in a comma separated string, such as `-x gotrue,imgproxy`, or specify `-x` flag multiple times.
> It is recommended to have at least 7GB of RAM to start all services.
Health checks are automatically added to verify the started containers. Use `--ignore-health-check` flag to ignore these errors.

## supabase-status
Shows status of the Supabase local development stack.
Requires the local development stack to be started by running `supabase start` or `supabase db start`.
You can export the connection parameters for [initializing supabase-js](https://supabase.com/docs/reference/javascript/initializing) locally by specifying the `-o env` flag. Supported parameters include `JWT_SECRET`, `ANON_KEY`, and `SERVICE_ROLE_KEY`.

## supabase-stop
Stops the Supabase local development stack.
Requires `supabase/config.toml` to be created in your current working directory by running `supabase init`.
All Docker resources are maintained across restarts. Use `--no-backup` flag to reset your local development data between restarts.
Use the `--all` flag to stop all local Supabase projects instances on the machine. Use with caution with `--no-backup` as it will delete all supabase local projects data.
