Compare commits


4 Commits

Author      SHA1        Message                                                            Date
Nate Brown  6d5299715e  Fix typos                                                          2021-11-04 10:45:25 -05:00
Nate Brown  ba8646fa83  Add an additional transitional mode to get us to enforced safely  2021-11-04 10:45:25 -05:00
Nate Brown  c1ed78ffc7  Add tests for the successful psk mode matrix                       2021-11-04 10:44:25 -05:00
Nate Brown  cf3b7ec2fa  PSK Support                                                        2021-11-04 10:41:48 -05:00
253 changed files with 11819 additions and 29638 deletions

View File

@@ -1,69 +0,0 @@
name: "\U0001F41B Bug Report"
description: Report an issue or possible bug
title: "\U0001F41B BUG:"
labels: []
assignees: []
body:
- type: markdown
attributes:
value: |
### Thank you for taking the time to file a bug report!
Please fill out this form as completely as possible.
- type: input
id: version
attributes:
label: What version of `nebula` are you using? (`nebula -version`)
placeholder: 0.0.0
validations:
required: true
- type: input
id: os
attributes:
label: What operating system are you using?
description: iOS and Android specific issues belong in the [mobile_nebula](https://github.com/DefinedNet/mobile_nebula) repo.
placeholder: Linux, Mac, Windows
validations:
required: true
- type: textarea
id: description
attributes:
label: Describe the Bug
description: A clear and concise description of what the bug is.
validations:
required: true
- type: textarea
id: logs
attributes:
label: Logs from affected hosts
description: |
Please provide logs from ALL affected hosts during the time of the issue. If you do not provide logs we will be unable to assist you!
[Learn how to find Nebula logs here.](https://nebula.defined.net/docs/guides/viewing-nebula-logs/)
Improve formatting by using <code>```</code> at the beginning and end of each log block.
value: |
```
```
validations:
required: true
- type: textarea
id: configs
attributes:
label: Config files from affected hosts
description: |
Provide config files for all affected hosts.
Improve formatting by using <code>```</code> at the beginning and end of each config file.
value: |
```
```
validations:
required: true

View File

@@ -1,21 +0,0 @@
blank_issues_enabled: true
contact_links:
- name: 💨 Performance Issues
url: https://github.com/slackhq/nebula/discussions/new/choose
about: 'We ask that you create a discussion instead of an issue for performance-related questions. This allows us to have a more open conversation about the issue and helps us to better understand the problem.'
- name: 📄 Documentation Issues
url: https://github.com/definednet/nebula-docs
about: "If you've found an issue with the website documentation, please file it in the nebula-docs repository."
- name: 📱 Mobile Nebula Issues
url: https://github.com/definednet/mobile_nebula
about: "If you're using the mobile Nebula app and have found an issue, please file it in the mobile_nebula repository."
- name: 📘 Documentation
url: https://nebula.defined.net/docs/
about: 'The documentation is the best place to start if you are new to Nebula.'
- name: 💁 Support/Chat
url: https://join.slack.com/t/nebulaoss/shared_invite/zt-39pk4xopc-CUKlGcb5Z39dQ0cK1v7ehA
about: 'For faster support, join us on Slack for assistance!'

View File

@@ -1,22 +0,0 @@
version: 2
updates:
- package-ecosystem: "github-actions"
directory: "/"
schedule:
interval: "weekly"
- package-ecosystem: "gomod"
directory: "/"
schedule:
interval: "weekly"
groups:
golang-x-dependencies:
patterns:
- "golang.org/x/*"
zx2c4-dependencies:
patterns:
- "golang.zx2c4.com/*"
protobuf-dependencies:
patterns:
- "github.com/golang/protobuf"
- "google.golang.org/protobuf"

View File

@@ -1,11 +0,0 @@
<!--
Thank you for taking the time to submit a pull request!
Please be sure to provide a clear description of what you're trying to achieve with the change.
- If you're submitting a new feature, please explain how to use it and document any new config options in the example config.
- If you're submitting a bugfix, please link the related issue or describe the circumstances surrounding the issue.
- If you're changing a default, explain why you believe the new default is appropriate for most users.
P.S. If you're only updating the README or other docs, please file a pull request here instead: https://github.com/DefinedNet/nebula-docs
-->

View File

@@ -14,21 +14,31 @@ jobs:
    runs-on: ubuntu-latest
    steps:
-     - uses: actions/checkout@v4
-     - uses: actions/setup-go@v6
+     - name: Set up Go 1.17
+       uses: actions/setup-go@v2
        with:
-         go-version: '1.25'
-         check-latest: true
+         go-version: 1.17
+       id: go
+     - name: Check out code into the Go module directory
+       uses: actions/checkout@v2
+     - uses: actions/cache@v2
+       with:
+         path: ~/go/pkg/mod
+         key: ${{ runner.os }}-gofmt1.17-${{ hashFiles('**/go.sum') }}
+         restore-keys: |
+           ${{ runner.os }}-gofmt1.17-
      - name: Install goimports
        run: |
-         go install golang.org/x/tools/cmd/goimports@latest
+         go get golang.org/x/tools/cmd/goimports
+         go build golang.org/x/tools/cmd/goimports
      - name: gofmt
        run: |
-         if [ "$(find . -iname '*.go' | grep -v '\.pb\.go$' | xargs goimports -l)" ]
+         if [ "$(find . -iname '*.go' | grep -v '\.pb\.go$' | xargs ./goimports -l)" ]
          then
-           find . -iname '*.go' | grep -v '\.pb\.go$' | xargs goimports -d
+           find . -iname '*.go' | grep -v '\.pb\.go$' | xargs ./goimports -d
            exit 1
          fi
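The gofmt job above fails the build if goimports would rewrite any non-generated file. A minimal local approximation of that gate, assuming Go and `goimports` are already on PATH:

```sh
#!/bin/sh
# Sketch: reproduce the CI formatting check locally.
# List .go files (excluding generated .pb.go files) that goimports would change;
# if any exist, print the diff and fail, mirroring the workflow step above.
files="$(find . -iname '*.go' | grep -v '\.pb\.go$' | xargs goimports -l)"
if [ -n "$files" ]; then
  find . -iname '*.go' | grep -v '\.pb\.go$' | xargs goimports -d
  exit 1
fi
```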

View File

@@ -7,212 +7,315 @@ name: Create release and upload binaries
jobs: jobs:
build-linux: build-linux:
name: Build Linux/BSD All name: Build Linux All
runs-on: ubuntu-latest runs-on: ubuntu-latest
steps: steps:
- uses: actions/checkout@v4 - name: Set up Go 1.17
uses: actions/setup-go@v2
- uses: actions/setup-go@v6
with: with:
go-version: '1.25' go-version: 1.17
check-latest: true
- name: Checkout code
uses: actions/checkout@v2
- name: Build - name: Build
run: | run: |
make BUILD_NUMBER="${GITHUB_REF#refs/tags/v}" release-linux release-freebsd release-openbsd release-netbsd make BUILD_NUMBER="${GITHUB_REF#refs/tags/v}" release-linux release-freebsd
mkdir release mkdir release
mv build/*.tar.gz release mv build/*.tar.gz release
- name: Upload artifacts - name: Upload artifacts
uses: actions/upload-artifact@v4 uses: actions/upload-artifact@v1
with: with:
name: linux-latest name: linux-latest
path: release path: release
build-windows: build-windows:
name: Build Windows name: Build Windows amd64
runs-on: windows-latest runs-on: windows-latest
steps: steps:
- uses: actions/checkout@v4 - name: Set up Go 1.17
uses: actions/setup-go@v2
- uses: actions/setup-go@v6
with: with:
go-version: '1.25' go-version: 1.17
check-latest: true
- name: Checkout code
uses: actions/checkout@v2
- name: Build - name: Build
run: | run: |
echo $Env:GITHUB_REF.Substring(11) echo $Env:GITHUB_REF.Substring(11)
mkdir build\windows-amd64 go build -trimpath -ldflags "-X main.Build=$($Env:GITHUB_REF.Substring(11))" -o build\nebula.exe ./cmd/nebula-service
$Env:GOARCH = "amd64" go build -trimpath -ldflags "-X main.Build=$($Env:GITHUB_REF.Substring(11))" -o build\nebula-cert.exe ./cmd/nebula-cert
go build -trimpath -ldflags "-X main.Build=$($Env:GITHUB_REF.Substring(11))" -o build\windows-amd64\nebula.exe ./cmd/nebula-service
go build -trimpath -ldflags "-X main.Build=$($Env:GITHUB_REF.Substring(11))" -o build\windows-amd64\nebula-cert.exe ./cmd/nebula-cert
mkdir build\windows-arm64
$Env:GOARCH = "arm64"
go build -trimpath -ldflags "-X main.Build=$($Env:GITHUB_REF.Substring(11))" -o build\windows-arm64\nebula.exe ./cmd/nebula-service
go build -trimpath -ldflags "-X main.Build=$($Env:GITHUB_REF.Substring(11))" -o build\windows-arm64\nebula-cert.exe ./cmd/nebula-cert
mkdir build\dist\windows
mv dist\windows\wintun build\dist\windows\
- name: Upload artifacts - name: Upload artifacts
uses: actions/upload-artifact@v4 uses: actions/upload-artifact@v1
with: with:
name: windows-latest name: windows-latest
path: build path: build
build-darwin: build-darwin:
name: Build Universal Darwin name: Build Darwin amd64
env: runs-on: macOS-latest
HAS_SIGNING_CREDS: ${{ secrets.AC_USERNAME != '' }}
runs-on: macos-latest
steps: steps:
- uses: actions/checkout@v4 - name: Set up Go 1.17
uses: actions/setup-go@v2
- uses: actions/setup-go@v6
with: with:
go-version: '1.25' go-version: 1.17
check-latest: true
- name: Import certificates - name: Checkout code
if: env.HAS_SIGNING_CREDS == 'true' uses: actions/checkout@v2
uses: Apple-Actions/import-codesign-certs@v5
with:
p12-file-base64: ${{ secrets.APPLE_DEVELOPER_CERTIFICATE_P12_BASE64 }}
p12-password: ${{ secrets.APPLE_DEVELOPER_CERTIFICATE_PASSWORD }}
- name: Build, sign, and notarize - name: Build
env:
AC_USERNAME: ${{ secrets.AC_USERNAME }}
AC_PASSWORD: ${{ secrets.AC_PASSWORD }}
run: | run: |
rm -rf release make BUILD_NUMBER="${GITHUB_REF#refs/tags/v}" service build/nebula-darwin-amd64.tar.gz
make BUILD_NUMBER="${GITHUB_REF#refs/tags/v}" service build/nebula-darwin-arm64.tar.gz
mkdir release mkdir release
make BUILD_NUMBER="${GITHUB_REF#refs/tags/v}" service build/darwin-amd64/nebula build/darwin-amd64/nebula-cert mv build/*.tar.gz release
make BUILD_NUMBER="${GITHUB_REF#refs/tags/v}" service build/darwin-arm64/nebula build/darwin-arm64/nebula-cert
lipo -create -output ./release/nebula ./build/darwin-amd64/nebula ./build/darwin-arm64/nebula
lipo -create -output ./release/nebula-cert ./build/darwin-amd64/nebula-cert ./build/darwin-arm64/nebula-cert
if [ -n "$AC_USERNAME" ]; then
codesign -s "10BC1FDDEB6CE753550156C0669109FAC49E4D1E" -f -v --timestamp --options=runtime -i "net.defined.nebula" ./release/nebula
codesign -s "10BC1FDDEB6CE753550156C0669109FAC49E4D1E" -f -v --timestamp --options=runtime -i "net.defined.nebula-cert" ./release/nebula-cert
fi
zip -j release/nebula-darwin.zip release/nebula-cert release/nebula
if [ -n "$AC_USERNAME" ]; then
xcrun notarytool submit ./release/nebula-darwin.zip --team-id "576H3XS7FP" --apple-id "$AC_USERNAME" --password "$AC_PASSWORD" --wait
fi
- name: Upload artifacts - name: Upload artifacts
uses: actions/upload-artifact@v4 uses: actions/upload-artifact@v1
with: with:
name: darwin-latest name: darwin-latest
path: ./release/* path: release
build-docker:
name: Create and Upload Docker Images
# Technically we only need build-linux to succeed, but if any platforms fail we'll
# want to investigate and restart the build
needs: [build-linux, build-darwin, build-windows]
runs-on: ubuntu-latest
env:
HAS_DOCKER_CREDS: ${{ vars.DOCKERHUB_USERNAME != '' && secrets.DOCKERHUB_TOKEN != '' }}
# XXX It's not possible to write a conditional here, so instead we do it on every step
#if: ${{ env.HAS_DOCKER_CREDS == 'true' }}
steps:
# Be sure to checkout the code before downloading artifacts, or they will
# be overwritten
- name: Checkout code
if: ${{ env.HAS_DOCKER_CREDS == 'true' }}
uses: actions/checkout@v4
- name: Download artifacts
if: ${{ env.HAS_DOCKER_CREDS == 'true' }}
uses: actions/download-artifact@v4
with:
name: linux-latest
path: artifacts
- name: Login to Docker Hub
if: ${{ env.HAS_DOCKER_CREDS == 'true' }}
uses: docker/login-action@v3
with:
username: ${{ vars.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Set up Docker Buildx
if: ${{ env.HAS_DOCKER_CREDS == 'true' }}
uses: docker/setup-buildx-action@v3
- name: Build and push images
if: ${{ env.HAS_DOCKER_CREDS == 'true' }}
env:
DOCKER_IMAGE_REPO: ${{ vars.DOCKER_IMAGE_REPO || 'nebulaoss/nebula' }}
DOCKER_IMAGE_TAG: ${{ vars.DOCKER_IMAGE_TAG || 'latest' }}
run: |
mkdir -p build/linux-{amd64,arm64}
tar -zxvf artifacts/nebula-linux-amd64.tar.gz -C build/linux-amd64/
tar -zxvf artifacts/nebula-linux-arm64.tar.gz -C build/linux-arm64/
docker buildx build . --push -f docker/Dockerfile --platform linux/amd64,linux/arm64 --tag "${DOCKER_IMAGE_REPO}:${DOCKER_IMAGE_TAG}" --tag "${DOCKER_IMAGE_REPO}:${GITHUB_REF#refs/tags/v}"
release: release:
name: Create and Upload Release name: Create and Upload Release
needs: [build-linux, build-darwin, build-windows] needs: [build-linux, build-darwin, build-windows]
runs-on: ubuntu-latest runs-on: ubuntu-latest
steps: steps:
- uses: actions/checkout@v4 - name: Download Linux artifacts
uses: actions/download-artifact@v1
- name: Download artifacts
uses: actions/download-artifact@v4
with: with:
path: artifacts name: linux-latest
- name: Download Darwin artifacts
uses: actions/download-artifact@v1
with:
name: darwin-latest
- name: Download Windows artifacts
uses: actions/download-artifact@v1
with:
name: windows-latest
- name: Zip Windows - name: Zip Windows
run: | run: |
cd artifacts/windows-latest cd windows-latest
cp windows-amd64/* . zip nebula-windows-amd64.zip nebula.exe nebula-cert.exe
zip -r nebula-windows-amd64.zip nebula.exe nebula-cert.exe dist
cp windows-arm64/* .
zip -r nebula-windows-arm64.zip nebula.exe nebula-cert.exe dist
- name: Create sha256sum - name: Create sha256sum
run: | run: |
cd artifacts
for dir in linux-latest darwin-latest windows-latest for dir in linux-latest darwin-latest windows-latest
do do
( (
cd $dir cd $dir
if [ "$dir" = windows-latest ] if [ "$dir" = windows-latest ]
then then
sha256sum <windows-amd64/nebula.exe | sed 's=-$=nebula-windows-amd64.zip/nebula.exe=' sha256sum <nebula.exe | sed 's=-$=nebula-windows-amd64.zip/nebula.exe='
sha256sum <windows-amd64/nebula-cert.exe | sed 's=-$=nebula-windows-amd64.zip/nebula-cert.exe=' sha256sum <nebula-cert.exe | sed 's=-$=nebula-windows-amd64.zip/nebula-cert.exe='
sha256sum <windows-arm64/nebula.exe | sed 's=-$=nebula-windows-arm64.zip/nebula.exe='
sha256sum <windows-arm64/nebula-cert.exe | sed 's=-$=nebula-windows-arm64.zip/nebula-cert.exe='
sha256sum nebula-windows-amd64.zip sha256sum nebula-windows-amd64.zip
sha256sum nebula-windows-arm64.zip
elif [ "$dir" = darwin-latest ]
then
sha256sum <nebula-darwin.zip | sed 's=-$=nebula-darwin.zip='
sha256sum <nebula | sed 's=-$=nebula-darwin.zip/nebula='
sha256sum <nebula-cert | sed 's=-$=nebula-darwin.zip/nebula-cert='
else else
for v in *.tar.gz for v in *.tar.gz
do do
sha256sum $v sha256sum $v
tar zxf $v --to-command='sh -c "sha256sum | sed s=-$='$v'/$TAR_FILENAME="' tar zxf $v --to-command='sh -c "sha256sum | sed s=-$='$v'/$TAR_FILENAME="'
done done
fi fi
) )
done | sort -k 2 >SHASUM256.txt done | sort -k 2 >SHASUM256.txt
- name: Create Release - name: Create Release
id: create_release id: create_release
uses: actions/create-release@v1
env: env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: | with:
cd artifacts tag_name: ${{ github.ref }}
gh release create \ release_name: Release ${{ github.ref }}
--verify-tag \ draft: false
--title "Release ${{ github.ref_name }}" \ prerelease: false
"${{ github.ref_name }}" \
SHASUM256.txt *-latest/*.zip *-latest/*.tar.gz ##
## Upload assets (I wish we could just upload the whole folder at once...
##
- name: Upload SHASUM256.txt
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./SHASUM256.txt
asset_name: SHASUM256.txt
asset_content_type: text/plain
- name: Upload darwin-amd64
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./darwin-latest/nebula-darwin-amd64.tar.gz
asset_name: nebula-darwin-amd64.tar.gz
asset_content_type: application/gzip
- name: Upload darwin-arm64
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./darwin-latest/nebula-darwin-arm64.tar.gz
asset_name: nebula-darwin-arm64.tar.gz
asset_content_type: application/gzip
- name: Upload windows-amd64
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./windows-latest/nebula-windows-amd64.zip
asset_name: nebula-windows-amd64.zip
asset_content_type: application/zip
- name: Upload linux-amd64
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./linux-latest/nebula-linux-amd64.tar.gz
asset_name: nebula-linux-amd64.tar.gz
asset_content_type: application/gzip
- name: Upload linux-386
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./linux-latest/nebula-linux-386.tar.gz
asset_name: nebula-linux-386.tar.gz
asset_content_type: application/gzip
- name: Upload linux-ppc64le
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./linux-latest/nebula-linux-ppc64le.tar.gz
asset_name: nebula-linux-ppc64le.tar.gz
asset_content_type: application/gzip
- name: Upload linux-arm-5
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./linux-latest/nebula-linux-arm-5.tar.gz
asset_name: nebula-linux-arm-5.tar.gz
asset_content_type: application/gzip
- name: Upload linux-arm-6
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./linux-latest/nebula-linux-arm-6.tar.gz
asset_name: nebula-linux-arm-6.tar.gz
asset_content_type: application/gzip
- name: Upload linux-arm-7
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./linux-latest/nebula-linux-arm-7.tar.gz
asset_name: nebula-linux-arm-7.tar.gz
asset_content_type: application/gzip
- name: Upload linux-arm64
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./linux-latest/nebula-linux-arm64.tar.gz
asset_name: nebula-linux-arm64.tar.gz
asset_content_type: application/gzip
- name: Upload linux-mips
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./linux-latest/nebula-linux-mips.tar.gz
asset_name: nebula-linux-mips.tar.gz
asset_content_type: application/gzip
- name: Upload linux-mipsle
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./linux-latest/nebula-linux-mipsle.tar.gz
asset_name: nebula-linux-mipsle.tar.gz
asset_content_type: application/gzip
- name: Upload linux-mips64
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./linux-latest/nebula-linux-mips64.tar.gz
asset_name: nebula-linux-mips64.tar.gz
asset_content_type: application/gzip
- name: Upload linux-mips64le
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./linux-latest/nebula-linux-mips64le.tar.gz
asset_name: nebula-linux-mips64le.tar.gz
asset_content_type: application/gzip
- name: Upload linux-mips-softfloat
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./linux-latest/nebula-linux-mips-softfloat.tar.gz
asset_name: nebula-linux-mips-softfloat.tar.gz
asset_content_type: application/gzip
- name: Upload linux-riscv64
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./linux-latest/nebula-linux-riscv64.tar.gz
asset_name: nebula-linux-riscv64.tar.gz
asset_content_type: application/gzip
- name: Upload freebsd-amd64
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./linux-latest/nebula-freebsd-amd64.tar.gz
asset_name: nebula-freebsd-amd64.tar.gz
asset_content_type: application/gzip
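One non-obvious piece of the release job above is how the "Create sha256sum" step checksums the files inside each tarball without unpacking them to disk: GNU tar's `--to-command` pipes every extracted member to a command, with `TAR_FILENAME` set in that command's environment. A standalone sketch of the same idea (the tarball name is illustrative):

```sh
#!/bin/sh
# Sketch: emit "<sha256>  <tarball>/<member>" for every file inside a tarball,
# the same trick used by the release workflow's sha256sum step.
v=nebula-linux-amd64.tar.gz
sha256sum "$v"   # checksum of the archive itself
# For each member, sha256sum reads the content from stdin and prints "<hash>  -";
# sed then rewrites the trailing "-" into "<tarball>/<member name>".
tar zxf $v --to-command='sh -c "sha256sum | sed s=-$='$v'/$TAR_FILENAME="'
```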

View File

@@ -1,51 +0,0 @@
name: smoke-extra
on:
push:
branches:
- master
pull_request:
types: [opened, synchronize, labeled, reopened]
paths:
- '.github/workflows/smoke**'
- '**Makefile'
- '**.go'
- '**.proto'
- 'go.mod'
- 'go.sum'
jobs:
smoke-extra:
if: github.ref == 'refs/heads/master' || contains(github.event.pull_request.labels.*.name, 'smoke-test-extra')
name: Run extra smoke tests
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-go@v6
with:
go-version: '1.25'
check-latest: true
- name: add hashicorp source
run: wget -O- https://apt.releases.hashicorp.com/gpg | gpg --dearmor | sudo tee /usr/share/keyrings/hashicorp-archive-keyring.gpg && echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
- name: install vagrant
run: sudo apt-get update && sudo apt-get install -y vagrant virtualbox
- name: freebsd-amd64
run: make smoke-vagrant/freebsd-amd64
- name: openbsd-amd64
run: make smoke-vagrant/openbsd-amd64
- name: netbsd-amd64
run: make smoke-vagrant/netbsd-amd64
- name: linux-386
run: make smoke-vagrant/linux-386
- name: linux-amd64-ipv6disable
run: make smoke-vagrant/linux-amd64-ipv6disable
timeout-minutes: 30

View File

@@ -18,15 +18,24 @@ jobs:
    runs-on: ubuntu-latest
    steps:
-     - uses: actions/checkout@v4
-     - uses: actions/setup-go@v6
+     - name: Set up Go 1.17
+       uses: actions/setup-go@v2
        with:
-         go-version: '1.25'
-         check-latest: true
+         go-version: 1.17
+       id: go
+     - name: Check out code into the Go module directory
+       uses: actions/checkout@v2
+     - uses: actions/cache@v2
+       with:
+         path: ~/go/pkg/mod
+         key: ${{ runner.os }}-go1.17-${{ hashFiles('**/go.sum') }}
+         restore-keys: |
+           ${{ runner.os }}-go1.17-
      - name: build
-       run: make bin-docker CGO_ENABLED=1 BUILD_ARGS=-race
+       run: make bin-docker
      - name: setup docker image
        working-directory: ./.github/workflows/smoke
@@ -36,20 +45,4 @@ jobs:
        working-directory: ./.github/workflows/smoke
        run: ./smoke.sh
-     - name: setup relay docker image
-       working-directory: ./.github/workflows/smoke
-       run: ./build-relay.sh
-     - name: run smoke relay
-       working-directory: ./.github/workflows/smoke
-       run: ./smoke-relay.sh
-     - name: setup docker image for P256
-       working-directory: ./.github/workflows/smoke
-       run: NAME="smoke-p256" CURVE=P256 ./build.sh
-     - name: run smoke-p256
-       working-directory: ./.github/workflows/smoke
-       run: NAME="smoke-p256" ./smoke.sh
        timeout-minutes: 10

View File

@@ -1,6 +1,4 @@
-FROM ubuntu:jammy
-RUN apt-get update && apt-get install -y iputils-ping ncat tcpdump
+FROM debian:buster
 ADD ./build /nebula

View File

@@ -1,44 +0,0 @@
#!/bin/sh
set -e -x
rm -rf ./build
mkdir ./build
(
cd build
cp ../../../../build/linux-amd64/nebula .
cp ../../../../build/linux-amd64/nebula-cert .
HOST="lighthouse1" AM_LIGHTHOUSE=true ../genconfig.sh >lighthouse1.yml <<EOF
relay:
am_relay: true
EOF
export LIGHTHOUSES="192.168.100.1 172.17.0.2:4242"
export REMOTE_ALLOW_LIST='{"172.17.0.4/32": false, "172.17.0.5/32": false}'
HOST="host2" ../genconfig.sh >host2.yml <<EOF
relay:
relays:
- 192.168.100.1
EOF
export REMOTE_ALLOW_LIST='{"172.17.0.3/32": false}'
HOST="host3" ../genconfig.sh >host3.yml
HOST="host4" ../genconfig.sh >host4.yml <<EOF
relay:
use_relays: false
EOF
../../../../nebula-cert ca -name "Smoke Test"
../../../../nebula-cert sign -name "lighthouse1" -groups "lighthouse,lighthouse1" -ip "192.168.100.1/24"
../../../../nebula-cert sign -name "host2" -groups "host,host2" -ip "192.168.100.2/24"
../../../../nebula-cert sign -name "host3" -groups "host,host3" -ip "192.168.100.3/24"
../../../../nebula-cert sign -name "host4" -groups "host,host4" -ip "192.168.100.4/24"
)
docker build -t nebula:smoke-relay .

View File

@@ -5,44 +5,35 @@ set -e -x
rm -rf ./build
mkdir ./build
-# TODO: Assumes your docker bridge network is a /24, and the first container that launches will be .1
-# - We could make this better by launching the lighthouse first and then fetching what IP it is.
-NET="$(docker network inspect bridge -f '{{ range .IPAM.Config }}{{ .Subnet }}{{ end }}' | cut -d. -f1-3)"
(
cd build
cp ../../../../build/linux-amd64/nebula .
cp ../../../../build/linux-amd64/nebula-cert .
-if [ "$1" ]
-then
-    cp "../../../../build/$1/nebula" "$1-nebula"
-fi
HOST="lighthouse1" \
    AM_LIGHTHOUSE=true \
    ../genconfig.sh >lighthouse1.yml
HOST="host2" \
-    LIGHTHOUSES="192.168.100.1 $NET.2:4242" \
+    LIGHTHOUSES="192.168.100.1 172.17.0.2:4242" \
    ../genconfig.sh >host2.yml
HOST="host3" \
-    LIGHTHOUSES="192.168.100.1 $NET.2:4242" \
+    LIGHTHOUSES="192.168.100.1 172.17.0.2:4242" \
    INBOUND='[{"port": "any", "proto": "icmp", "group": "lighthouse"}]' \
    ../genconfig.sh >host3.yml
HOST="host4" \
-    LIGHTHOUSES="192.168.100.1 $NET.2:4242" \
+    LIGHTHOUSES="192.168.100.1 172.17.0.2:4242" \
    OUTBOUND='[{"port": "any", "proto": "icmp", "group": "lighthouse"}]' \
    ../genconfig.sh >host4.yml
-../../../../nebula-cert ca -curve "${CURVE:-25519}" -name "Smoke Test"
+../../../../nebula-cert ca -name "Smoke Test"
../../../../nebula-cert sign -name "lighthouse1" -groups "lighthouse,lighthouse1" -ip "192.168.100.1/24"
../../../../nebula-cert sign -name "host2" -groups "host,host2" -ip "192.168.100.2/24"
../../../../nebula-cert sign -name "host3" -groups "host,host3" -ip "192.168.100.3/24"
../../../../nebula-cert sign -name "host4" -groups "host,host4" -ip "192.168.100.4/24"
)
-docker build -t "nebula:${NAME:-smoke}" .
+sudo docker build -t nebula:smoke .
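For reference, the removed `NET=` line near the top of this script derives the first three octets of the default Docker bridge subnet so the lighthouse's underlay address can be constructed as `$NET.2` (presumably the first container started, since `.1` is usually the bridge gateway). A hedged, standalone sketch of what that pipeline does, with output shown for the common 172.17.0.0/16 default:

```sh
#!/bin/sh
# Prints something like "172.17.0.0/16"; cut keeps only "172.17.0".
docker network inspect bridge -f '{{ range .IPAM.Config }}{{ .Subnet }}{{ end }}'
NET="$(docker network inspect bridge -f '{{ range .IPAM.Config }}{{ .Subnet }}{{ end }}' | cut -d. -f1-3)"
echo "$NET.2"   # assumed underlay address used for the lighthouse in this script
```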

View File

@@ -40,20 +40,15 @@ pki:
lighthouse:
  am_lighthouse: ${AM_LIGHTHOUSE:-false}
  hosts: $(lighthouse_hosts)
-  remote_allow_list: ${REMOTE_ALLOW_LIST}
listen:
  host: 0.0.0.0
  port: ${LISTEN_PORT:-4242}
tun:
-  dev: ${TUN_DEV:-tun0}
+  dev: ${TUN_DEV:-nebula1}
firewall:
-  inbound_action: reject
-  outbound_action: reject
  outbound: ${OUTBOUND:-$FIREWALL_ALL}
  inbound: ${INBOUND:-$FIREWALL_ALL}
-$(test -t 0 || cat)
EOF
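In the variant of genconfig.sh that ends its heredoc with `$(test -t 0 || cat)`, any YAML piped to the script on stdin is appended to the generated config; that is how build-relay.sh earlier in this diff injects per-host `relay:` blocks. A usage sketch in the same style (the host name and extra YAML are illustrative):

```sh
#!/bin/sh
# Sketch: generate a config for a hypothetical host5 and append an extra
# relay: block from stdin. Only works with the genconfig.sh variant that
# contains "$(test -t 0 || cat)".
HOST="host5" \
  LIGHTHOUSES="192.168.100.1 172.17.0.2:4242" \
  ../genconfig.sh >host5.yml <<EOF
relay:
  use_relays: false
EOF
```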

View File

@@ -1,85 +0,0 @@
#!/bin/bash
set -e -x
set -o pipefail
mkdir -p logs
cleanup() {
echo
echo " *** cleanup"
echo
set +e
if [ "$(jobs -r)" ]
then
docker kill lighthouse1 host2 host3 host4
fi
}
trap cleanup EXIT
docker run --name lighthouse1 --rm nebula:smoke-relay -config lighthouse1.yml -test
docker run --name host2 --rm nebula:smoke-relay -config host2.yml -test
docker run --name host3 --rm nebula:smoke-relay -config host3.yml -test
docker run --name host4 --rm nebula:smoke-relay -config host4.yml -test
docker run --name lighthouse1 --device /dev/net/tun:/dev/net/tun --cap-add NET_ADMIN --rm nebula:smoke-relay -config lighthouse1.yml 2>&1 | tee logs/lighthouse1 | sed -u 's/^/ [lighthouse1] /' &
sleep 1
docker run --name host2 --device /dev/net/tun:/dev/net/tun --cap-add NET_ADMIN --rm nebula:smoke-relay -config host2.yml 2>&1 | tee logs/host2 | sed -u 's/^/ [host2] /' &
sleep 1
docker run --name host3 --device /dev/net/tun:/dev/net/tun --cap-add NET_ADMIN --rm nebula:smoke-relay -config host3.yml 2>&1 | tee logs/host3 | sed -u 's/^/ [host3] /' &
sleep 1
docker run --name host4 --device /dev/net/tun:/dev/net/tun --cap-add NET_ADMIN --rm nebula:smoke-relay -config host4.yml 2>&1 | tee logs/host4 | sed -u 's/^/ [host4] /' &
sleep 1
set +x
echo
echo " *** Testing ping from lighthouse1"
echo
set -x
docker exec lighthouse1 ping -c1 192.168.100.2
docker exec lighthouse1 ping -c1 192.168.100.3
docker exec lighthouse1 ping -c1 192.168.100.4
set +x
echo
echo " *** Testing ping from host2"
echo
set -x
docker exec host2 ping -c1 192.168.100.1
# Should fail because no relay configured in this direction
! docker exec host2 ping -c1 192.168.100.3 -w5 || exit 1
! docker exec host2 ping -c1 192.168.100.4 -w5 || exit 1
set +x
echo
echo " *** Testing ping from host3"
echo
set -x
docker exec host3 ping -c1 192.168.100.1
docker exec host3 ping -c1 192.168.100.2
docker exec host3 ping -c1 192.168.100.4
set +x
echo
echo " *** Testing ping from host4"
echo
set -x
docker exec host4 ping -c1 192.168.100.1
# Should fail because relays not allowed
! docker exec host4 ping -c1 192.168.100.2 -w5 || exit 1
docker exec host4 ping -c1 192.168.100.3
docker exec host4 sh -c 'kill 1'
docker exec host3 sh -c 'kill 1'
docker exec host2 sh -c 'kill 1'
docker exec lighthouse1 sh -c 'kill 1'
sleep 5
if [ "$(jobs -r)" ]
then
echo "nebula still running after SIGTERM sent" >&2
exit 1
fi

View File

@@ -1,105 +0,0 @@
#!/bin/bash
set -e -x
set -o pipefail
export VAGRANT_CWD="$PWD/vagrant-$1"
mkdir -p logs
cleanup() {
echo
echo " *** cleanup"
echo
set +e
if [ "$(jobs -r)" ]
then
docker kill lighthouse1 host2
fi
vagrant destroy -f
}
trap cleanup EXIT
CONTAINER="nebula:${NAME:-smoke}"
docker run --name lighthouse1 --rm "$CONTAINER" -config lighthouse1.yml -test
docker run --name host2 --rm "$CONTAINER" -config host2.yml -test
vagrant up
vagrant ssh -c "cd /nebula && /nebula/$1-nebula -config host3.yml -test" -- -T
docker run --name lighthouse1 --device /dev/net/tun:/dev/net/tun --cap-add NET_ADMIN --rm "$CONTAINER" -config lighthouse1.yml 2>&1 | tee logs/lighthouse1 | sed -u 's/^/ [lighthouse1] /' &
sleep 1
docker run --name host2 --device /dev/net/tun:/dev/net/tun --cap-add NET_ADMIN --rm "$CONTAINER" -config host2.yml 2>&1 | tee logs/host2 | sed -u 's/^/ [host2] /' &
sleep 1
vagrant ssh -c "cd /nebula && sudo sh -c 'echo \$\$ >/nebula/pid && exec /nebula/$1-nebula -config host3.yml'" 2>&1 -- -T | tee logs/host3 | sed -u 's/^/ [host3] /' &
sleep 15
# grab tcpdump pcaps for debugging
docker exec lighthouse1 tcpdump -i nebula1 -q -w - -U 2>logs/lighthouse1.inside.log >logs/lighthouse1.inside.pcap &
docker exec lighthouse1 tcpdump -i eth0 -q -w - -U 2>logs/lighthouse1.outside.log >logs/lighthouse1.outside.pcap &
docker exec host2 tcpdump -i nebula1 -q -w - -U 2>logs/host2.inside.log >logs/host2.inside.pcap &
docker exec host2 tcpdump -i eth0 -q -w - -U 2>logs/host2.outside.log >logs/host2.outside.pcap &
# vagrant ssh -c "tcpdump -i nebula1 -q -w - -U" 2>logs/host3.inside.log >logs/host3.inside.pcap &
# vagrant ssh -c "tcpdump -i eth0 -q -w - -U" 2>logs/host3.outside.log >logs/host3.outside.pcap &
#docker exec host2 ncat -nklv 0.0.0.0 2000 &
#vagrant ssh -c "ncat -nklv 0.0.0.0 2000" &
#docker exec host2 ncat -e '/usr/bin/echo host2' -nkluv 0.0.0.0 3000 &
#vagrant ssh -c "ncat -e '/usr/bin/echo host3' -nkluv 0.0.0.0 3000" &
set +x
echo
echo " *** Testing ping from lighthouse1"
echo
set -x
docker exec lighthouse1 ping -c1 192.168.100.2
docker exec lighthouse1 ping -c1 192.168.100.3
set +x
echo
echo " *** Testing ping from host2"
echo
set -x
docker exec host2 ping -c1 192.168.100.1
# Should fail because not allowed by host3 inbound firewall
! docker exec host2 ping -c1 192.168.100.3 -w5 || exit 1
#set +x
#echo
#echo " *** Testing ncat from host2"
#echo
#set -x
# Should fail because not allowed by host3 inbound firewall
#! docker exec host2 ncat -nzv -w5 192.168.100.3 2000 || exit 1
#! docker exec host2 ncat -nzuv -w5 192.168.100.3 3000 | grep -q host3 || exit 1
set +x
echo
echo " *** Testing ping from host3"
echo
set -x
vagrant ssh -c "ping -c1 192.168.100.1" -- -T
vagrant ssh -c "ping -c1 192.168.100.2" -- -T
#set +x
#echo
#echo " *** Testing ncat from host3"
#echo
#set -x
#vagrant ssh -c "ncat -nzv -w5 192.168.100.2 2000"
#vagrant ssh -c "ncat -nzuv -w5 192.168.100.2 3000" | grep -q host2
vagrant ssh -c "sudo xargs kill </nebula/pid" -- -T
docker exec host2 sh -c 'kill 1'
docker exec lighthouse1 sh -c 'kill 1'
sleep 1
if [ "$(jobs -r)" ]
then
echo "nebula still running after SIGTERM sent" >&2
exit 1
fi

View File

@@ -7,112 +7,63 @@ set -o pipefail
mkdir -p logs mkdir -p logs
cleanup() { cleanup() {
echo
echo " *** cleanup"
echo
set +e set +e
if [ "$(jobs -r)" ] if [ "$(jobs -r)" ]
then then
docker kill lighthouse1 host2 host3 host4 sudo docker kill lighthouse1 host2 host3 host4
fi fi
} }
trap cleanup EXIT trap cleanup EXIT
CONTAINER="nebula:${NAME:-smoke}" sudo docker run --name lighthouse1 --rm nebula:smoke -config lighthouse1.yml -test
sudo docker run --name host2 --rm nebula:smoke -config host2.yml -test
sudo docker run --name host3 --rm nebula:smoke -config host3.yml -test
sudo docker run --name host4 --rm nebula:smoke -config host4.yml -test
docker run --name lighthouse1 --rm "$CONTAINER" -config lighthouse1.yml -test sudo docker run --name lighthouse1 --device /dev/net/tun:/dev/net/tun --cap-add NET_ADMIN --rm nebula:smoke -config lighthouse1.yml 2>&1 | tee logs/lighthouse1 &
docker run --name host2 --rm "$CONTAINER" -config host2.yml -test
docker run --name host3 --rm "$CONTAINER" -config host3.yml -test
docker run --name host4 --rm "$CONTAINER" -config host4.yml -test
docker run --name lighthouse1 --device /dev/net/tun:/dev/net/tun --cap-add NET_ADMIN --rm "$CONTAINER" -config lighthouse1.yml 2>&1 | tee logs/lighthouse1 | sed -u 's/^/ [lighthouse1] /' &
sleep 1 sleep 1
docker run --name host2 --device /dev/net/tun:/dev/net/tun --cap-add NET_ADMIN --rm "$CONTAINER" -config host2.yml 2>&1 | tee logs/host2 | sed -u 's/^/ [host2] /' & sudo docker run --name host2 --device /dev/net/tun:/dev/net/tun --cap-add NET_ADMIN --rm nebula:smoke -config host2.yml 2>&1 | tee logs/host2 &
sleep 1 sleep 1
docker run --name host3 --device /dev/net/tun:/dev/net/tun --cap-add NET_ADMIN --rm "$CONTAINER" -config host3.yml 2>&1 | tee logs/host3 | sed -u 's/^/ [host3] /' & sudo docker run --name host3 --device /dev/net/tun:/dev/net/tun --cap-add NET_ADMIN --rm nebula:smoke -config host3.yml 2>&1 | tee logs/host3 &
sleep 1 sleep 1
docker run --name host4 --device /dev/net/tun:/dev/net/tun --cap-add NET_ADMIN --rm "$CONTAINER" -config host4.yml 2>&1 | tee logs/host4 | sed -u 's/^/ [host4] /' & sudo docker run --name host4 --device /dev/net/tun:/dev/net/tun --cap-add NET_ADMIN --rm nebula:smoke -config host4.yml 2>&1 | tee logs/host4 &
sleep 1 sleep 1
# grab tcpdump pcaps for debugging
docker exec lighthouse1 tcpdump -i nebula1 -q -w - -U 2>logs/lighthouse1.inside.log >logs/lighthouse1.inside.pcap &
docker exec lighthouse1 tcpdump -i eth0 -q -w - -U 2>logs/lighthouse1.outside.log >logs/lighthouse1.outside.pcap &
docker exec host2 tcpdump -i nebula1 -q -w - -U 2>logs/host2.inside.log >logs/host2.inside.pcap &
docker exec host2 tcpdump -i eth0 -q -w - -U 2>logs/host2.outside.log >logs/host2.outside.pcap &
docker exec host3 tcpdump -i nebula1 -q -w - -U 2>logs/host3.inside.log >logs/host3.inside.pcap &
docker exec host3 tcpdump -i eth0 -q -w - -U 2>logs/host3.outside.log >logs/host3.outside.pcap &
docker exec host4 tcpdump -i nebula1 -q -w - -U 2>logs/host4.inside.log >logs/host4.inside.pcap &
docker exec host4 tcpdump -i eth0 -q -w - -U 2>logs/host4.outside.log >logs/host4.outside.pcap &
docker exec host2 ncat -nklv 0.0.0.0 2000 &
docker exec host3 ncat -nklv 0.0.0.0 2000 &
docker exec host2 ncat -e '/usr/bin/echo host2' -nkluv 0.0.0.0 3000 &
docker exec host3 ncat -e '/usr/bin/echo host3' -nkluv 0.0.0.0 3000 &
set +x set +x
echo echo
echo " *** Testing ping from lighthouse1" echo " *** Testing ping from lighthouse1"
echo echo
set -x set -x
docker exec lighthouse1 ping -c1 192.168.100.2 sudo docker exec lighthouse1 ping -c1 192.168.100.2
docker exec lighthouse1 ping -c1 192.168.100.3 sudo docker exec lighthouse1 ping -c1 192.168.100.3
set +x set +x
echo echo
echo " *** Testing ping from host2" echo " *** Testing ping from host2"
echo echo
set -x set -x
docker exec host2 ping -c1 192.168.100.1 sudo docker exec host2 ping -c1 192.168.100.1
# Should fail because not allowed by host3 inbound firewall # Should fail because not allowed by host3 inbound firewall
! docker exec host2 ping -c1 192.168.100.3 -w5 || exit 1 ! sudo docker exec host2 ping -c1 192.168.100.3 -w5 || exit 1
set +x
echo
echo " *** Testing ncat from host2"
echo
set -x
# Should fail because not allowed by host3 inbound firewall
! docker exec host2 ncat -nzv -w5 192.168.100.3 2000 || exit 1
! docker exec host2 ncat -nzuv -w5 192.168.100.3 3000 | grep -q host3 || exit 1
set +x set +x
echo echo
echo " *** Testing ping from host3" echo " *** Testing ping from host3"
echo echo
set -x set -x
docker exec host3 ping -c1 192.168.100.1 sudo docker exec host3 ping -c1 192.168.100.1
docker exec host3 ping -c1 192.168.100.2 sudo docker exec host3 ping -c1 192.168.100.2
set +x
echo
echo " *** Testing ncat from host3"
echo
set -x
docker exec host3 ncat -nzv -w5 192.168.100.2 2000
docker exec host3 ncat -nzuv -w5 192.168.100.2 3000 | grep -q host2
set +x set +x
echo echo
echo " *** Testing ping from host4" echo " *** Testing ping from host4"
echo echo
set -x set -x
docker exec host4 ping -c1 192.168.100.1 sudo docker exec host4 ping -c1 192.168.100.1
# Should fail because not allowed by host4 outbound firewall # Should fail because not allowed by host4 outbound firewall
! docker exec host4 ping -c1 192.168.100.2 -w5 || exit 1 ! sudo docker exec host4 ping -c1 192.168.100.2 -w5 || exit 1
! docker exec host4 ping -c1 192.168.100.3 -w5 || exit 1 ! sudo docker exec host4 ping -c1 192.168.100.3 -w5 || exit 1
set +x
echo
echo " *** Testing ncat from host4"
echo
set -x
# Should fail because not allowed by host4 outbound firewall
! docker exec host4 ncat -nzv -w5 192.168.100.2 2000 || exit 1
! docker exec host4 ncat -nzv -w5 192.168.100.3 2000 || exit 1
! docker exec host4 ncat -nzuv -w5 192.168.100.2 3000 | grep -q host2 || exit 1
! docker exec host4 ncat -nzuv -w5 192.168.100.3 3000 | grep -q host3 || exit 1
set +x set +x
echo echo
@@ -120,19 +71,13 @@ echo " *** Testing conntrack"
echo echo
set -x set -x
# host2 can ping host3 now that host3 pinged it first # host2 can ping host3 now that host3 pinged it first
docker exec host2 ping -c1 192.168.100.3 sudo docker exec host2 ping -c1 192.168.100.3
# host4 can ping host2 once conntrack established # host4 can ping host2 once conntrack established
docker exec host2 ping -c1 192.168.100.4 sudo docker exec host2 ping -c1 192.168.100.4
docker exec host4 ping -c1 192.168.100.2 sudo docker exec host4 ping -c1 192.168.100.2
docker exec host4 sh -c 'kill 1' sudo docker exec host4 sh -c 'kill 1'
docker exec host3 sh -c 'kill 1' sudo docker exec host3 sh -c 'kill 1'
docker exec host2 sh -c 'kill 1' sudo docker exec host2 sh -c 'kill 1'
docker exec lighthouse1 sh -c 'kill 1' sudo docker exec lighthouse1 sh -c 'kill 1'
sleep 5 sleep 1
if [ "$(jobs -r)" ]
then
echo "nebula still running after SIGTERM sent" >&2
exit 1
fi

View File

@@ -1,7 +0,0 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
config.vm.box = "generic/freebsd14"
config.vm.synced_folder "../build", "/nebula", type: "rsync"
end

View File

@@ -1,7 +0,0 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
config.vm.box = "ubuntu/xenial32"
config.vm.synced_folder "../build", "/nebula"
end

View File

@@ -1,16 +0,0 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
config.vm.box = "ubuntu/jammy64"
config.vm.synced_folder "../build", "/nebula"
config.vm.provision :shell do |shell|
shell.inline = <<-EOF
sed -i 's/GRUB_CMDLINE_LINUX=""/GRUB_CMDLINE_LINUX="ipv6.disable=1"/' /etc/default/grub
update-grub
EOF
shell.privileged = true
shell.reboot = true
end
end

View File

@@ -1,7 +0,0 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
config.vm.box = "generic/netbsd9"
config.vm.synced_folder "../build", "/nebula", type: "rsync"
end

View File

@@ -1,7 +0,0 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
config.vm.box = "generic/openbsd7"
config.vm.synced_folder "../build", "/nebula", type: "rsync"
end

View File

@@ -18,92 +18,54 @@ jobs:
    runs-on: ubuntu-latest
    steps:
-     - uses: actions/checkout@v4
-     - uses: actions/setup-go@v6
+     - name: Set up Go 1.17
+       uses: actions/setup-go@v2
        with:
-         go-version: '1.25'
-         check-latest: true
+         go-version: 1.17
+       id: go
+     - name: Check out code into the Go module directory
+       uses: actions/checkout@v2
+     - uses: actions/cache@v2
+       with:
+         path: ~/go/pkg/mod
+         key: ${{ runner.os }}-go1.17-${{ hashFiles('**/go.sum') }}
+         restore-keys: |
+           ${{ runner.os }}-go1.17-
      - name: Build
        run: make all
-     - name: Vet
-       run: make vet
-     - name: golangci-lint
-       uses: golangci/golangci-lint-action@v8
-       with:
-         version: v2.5
      - name: Test
        run: make test
      - name: End 2 end
        run: make e2evv
-     - name: Build test mobile
-       run: make build-test-mobile
-     - uses: actions/upload-artifact@v4
-       with:
-         name: e2e packet flow linux-latest
-         path: e2e/mermaid/linux-latest
-         if-no-files-found: warn
- test-linux-boringcrypto:
-   name: Build and test on linux with boringcrypto
-   runs-on: ubuntu-latest
-   steps:
-     - uses: actions/checkout@v4
-     - uses: actions/setup-go@v6
-       with:
-         go-version: '1.25'
-         check-latest: true
-     - name: Build
-       run: make bin-boringcrypto
-     - name: Test
-       run: make test-boringcrypto
-     - name: End 2 end
-       run: make e2e GOEXPERIMENT=boringcrypto CGO_ENABLED=1 TEST_ENV="TEST_LOGS=1" TEST_FLAGS="-v -ldflags -checklinkname=0"
- test-linux-pkcs11:
-   name: Build and test on linux with pkcs11
-   runs-on: ubuntu-latest
-   steps:
-     - uses: actions/checkout@v4
-     - uses: actions/setup-go@v6
-       with:
-         go-version: '1.25'
-         check-latest: true
-     - name: Build
-       run: make bin-pkcs11
-     - name: Test
-       run: make test-pkcs11
  test:
    name: Build and test on ${{ matrix.os }}
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
-       os: [windows-latest, macos-latest]
+       os: [windows-latest, macOS-latest]
    steps:
-     - uses: actions/checkout@v4
-     - uses: actions/setup-go@v6
+     - name: Set up Go 1.17
+       uses: actions/setup-go@v2
        with:
-         go-version: '1.25'
-         check-latest: true
+         go-version: 1.17
+       id: go
+     - name: Check out code into the Go module directory
+       uses: actions/checkout@v2
+     - uses: actions/cache@v2
+       with:
+         path: ~/go/pkg/mod
+         key: ${{ runner.os }}-go1.17-${{ hashFiles('**/go.sum') }}
+         restore-keys: |
+           ${{ runner.os }}-go1.17-
      - name: Build nebula
        run: go build ./cmd/nebula
@@ -111,22 +73,8 @@ jobs:
      - name: Build nebula-cert
        run: go build ./cmd/nebula-cert
-     - name: Vet
-       run: make vet
-     - name: golangci-lint
-       uses: golangci/golangci-lint-action@v8
-       with:
-         version: v2.5
      - name: Test
-       run: make test
+       run: go test -v ./...
      - name: End 2 end
        run: make e2evv
-     - uses: actions/upload-artifact@v4
-       with:
-         name: e2e packet flow ${{ matrix.os }}
-         path: e2e/mermaid/${{ matrix.os }}
-         if-no-files-found: warn

.gitignore
View File

@@ -4,16 +4,9 @@
 /nebula-arm6
 /nebula-darwin
 /nebula.exe
-/nebula-cert.exe
-**/coverage.out
-**/cover.out
+/cert/*.crt
+/cert/*.key
+/coverage.out
 /cpu.pprof
 /build
 /*.tar.gz
-/e2e/mermaid/
-**.crt
-**.key
-**.pem
-**.pub
-!/examples/quickstart-vagrant/ansible/roles/nebula/files/vagrant-test-ca.key
-!/examples/quickstart-vagrant/ansible/roles/nebula/files/vagrant-test-ca.crt

View File

@@ -1,23 +0,0 @@
version: "2"
linters:
default: none
enable:
- testifylint
exclusions:
generated: lax
presets:
- comments
- common-false-positives
- legacy
- std-error-handling
paths:
- third_party$
- builtin$
- examples$
formatters:
exclusions:
generated: lax
paths:
- third_party$
- builtin$
- examples$

View File

@@ -7,373 +7,6 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
## [Unreleased]
### Changed
- `default_local_cidr_any` now defaults to false, meaning that any firewall rule
intended to target an `unsafe_routes` entry must explicitly declare it via the
`local_cidr` field. This is almost always the intended behavior. This flag is
deprecated and will be removed in a future release.
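For illustration, a firewall rule that targets a network reachable via `unsafe_routes` would, with `default_local_cidr_any: false`, need to name that network explicitly via `local_cidr`. A hedged sketch, written in the append-to-config heredoc style used by the smoke scripts in this diff (the group name and CIDRs are hypothetical):

```sh
# Hypothetical: allow members of group "support" to reach the routed
# 10.1.1.0/24 network behind this unsafe-route host. With
# default_local_cidr_any set to false, the rule must carry local_cidr.
cat >>config.yml <<EOF
firewall:
  inbound:
    - port: any
      proto: any
      group: support
      local_cidr: 10.1.1.0/24
EOF
```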
## [1.9.4] - 2024-09-09
### Added
- Support UDP dialing with gVisor. (#1181)
### Changed
- Make some Nebula state programmatically available via control object. (#1188)
- Switch internal representation of IPs to netip, to prepare for IPv6 support
in the overlay. (#1173)
- Minor build and cleanup changes. (#1171, #1164, #1162)
- Various dependency updates. (#1195, #1190, #1174, #1168, #1167, #1161, #1147, #1146)
### Fixed
- Fix a bug on big endian hosts, like mips. (#1194)
- Fix a rare panic if a local index collision happens. (#1191)
- Fix integer wraparound in the calculation of handshake timeouts on 32-bit targets. (#1185)
## [1.9.3] - 2024-06-06
### Fixed
- Initialize messageCounter to 2 instead of verifying later. (#1156)
## [1.9.2] - 2024-06-03
### Fixed
- Ensure messageCounter is set before handshake is complete. (#1154)
## [1.9.1] - 2024-05-29
### Fixed
- Fixed a potential deadlock in GetOrHandshake. (#1151)
## [1.9.0] - 2024-05-07
### Deprecated
- This release adds a new setting `default_local_cidr_any` that defaults to
true to match previous behavior, but will default to false in the next
release (1.10). When set to false, `local_cidr` is matched correctly for
firewall rules on hosts acting as unsafe routers, and should be set for any
firewall rules you want to allow unsafe route hosts to access. See the issue
and example config for more details. (#1071, #1099)
### Added
- Nebula now has an official Docker image `nebulaoss/nebula` that is
distroless and contains just the `nebula` and `nebula-cert` binaries. You
can find it here: https://hub.docker.com/r/nebulaoss/nebula (#1037)
- Experimental binaries for `loong64` are now provided. (#1003)
- Added example service script for OpenRC. (#711)
- The SSH daemon now supports inlined host keys. (#1054)
- The SSH daemon now supports certificates with `sshd.trusted_cas`. (#1098)
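A hedged sketch of running the official `nebulaoss/nebula` image mentioned in the first entry of this list; the tun device and NET_ADMIN flags mirror the smoke-test containers earlier in this diff, while the config mount path and passing `-config` as the container command are assumptions:

```sh
# Assumes a finished Nebula config exists at /etc/nebula/config.yml on the host.
docker run --name nebula --rm \
  --device /dev/net/tun:/dev/net/tun --cap-add NET_ADMIN \
  -v /etc/nebula:/etc/nebula \
  nebulaoss/nebula -config /etc/nebula/config.yml
```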
### Changed
- Config setting `tun.unsafe_routes` is now reloadable. (#1083)
- Small documentation and internal improvements. (#1065, #1067, #1069, #1108,
#1109, #1111, #1135)
- Various dependency updates. (#1139, #1138, #1134, #1133, #1126, #1123, #1110,
#1094, #1092, #1087, #1086, #1085, #1072, #1063, #1059, #1055, #1053, #1047,
#1046, #1034, #1022)
### Removed
- Support for the deprecated `local_range` option has been removed. Please
change to `preferred_ranges` (which is also now reloadable). (#1043)
- We are now building with go1.22, which means that for Windows you need at
least Windows 10 or Windows Server 2016. This is because support for earlier
versions was removed in Go 1.21. See https://go.dev/doc/go1.21#windows (#981)
- Removed vagrant example, as it was unmaintained. (#1129)
- Removed Fedora and Arch nebula.service files, as they are maintained in the
upstream repos. (#1128, #1132)
- Remove the TCP round trip tracking metrics, as they never had correct data
and were an experiment to begin with. (#1114)
### Fixed
- Fixed a potential deadlock introduced in 1.8.1. (#1112)
- Fixed support for Linux when IPv6 has been disabled at the OS level. (#787)
- DNS will return NXDOMAIN now when there are no results. (#845)
- Allow `::` in `lighthouse.dns.host`. (#1115)
- Capitalization of `NotAfter` fixed in DNS TXT response. (#1127)
- Don't log invalid certificates. It is untrusted data and can cause a large
volume of logs. (#1116)
## [1.8.2] - 2024-01-08
### Fixed
- Fix multiple routines when listen.port is zero. This was a regression
introduced in v1.6.0. (#1057)
### Changed
- Small dependency update for Noise. (#1038)
## [1.8.1] - 2023-12-19
### Security
- Update `golang.org/x/crypto`, which includes a fix for CVE-2023-48795. (#1048)
### Fixed
- Fix a deadlock introduced in v1.8.0 that could occur during handshakes. (#1044)
- Fix mobile builds. (#1035)
## [1.8.0] - 2023-12-06
### Deprecated
- The next minor release of Nebula, 1.9.0, will require at least Windows 10 or
Windows Server 2016. This is because support for earlier versions was removed
in Go 1.21. See https://go.dev/doc/go1.21#windows
### Added
- Linux: Notify systemd of service readiness. This should resolve timing issues
with services that depend on Nebula being active. For an example of how to
enable this, see: `examples/service_scripts/nebula.service`. (#929)
- Windows: Use Registered IO (RIO) when possible. Testing on a Windows 11
machine shows ~50x improvement in throughput. (#905)
- NetBSD, OpenBSD: Added rudimentary support. (#916, #812)
- FreeBSD: Add support for naming tun devices. (#903)
### Changed
- `pki.disconnect_invalid` will now default to true. This means that once a
certificate expires, the tunnel will be disconnected. If you use SIGHUP to
reload certificates without restarting Nebula, you should ensure all of your
clients are on 1.7.0 or newer before you enable this feature. (#859)
- Limit how often a busy tunnel can requery the lighthouse. The new config
option `timers.requery_wait_duration` defaults to `60s`. (#940)
- The internal structures for hostmaps were refactored to reduce memory usage
and the potential for subtle bugs. (#843, #938, #953, #954, #955)
- Lots of dependency updates.
### Fixed
- Windows: Retry wintun device creation if it fails the first time. (#985)
- Fix issues with firewall reject packets that could cause panics. (#957)
- Fix relay migration during re-handshakes. (#964)
- Various other refactors and fixes. (#935, #952, #972, #961, #996, #1002,
#987, #1004, #1030, #1032, ...)
## [1.7.2] - 2023-06-01
### Fixed
- Fix a freeze during config reload if the `static_host_map` config was changed. (#886)
## [1.7.1] - 2023-05-18
### Fixed
- Fix IPv4 addresses returned by `static_host_map` DNS lookup queries being
treated as IPv6 addresses. (#877)
## [1.7.0] - 2023-05-17
### Added
- `nebula-cert ca` now supports encrypting the CA's private key with a
passphrase. Pass `-encrypt` in order to be prompted for a passphrase.
Encryption is performed using AES-256-GCM and Argon2id for KDF. KDF
parameters default to RFC recommendations, but can be overridden via CLI
flags `-argon-memory`, `-argon-parallelism`, and `-argon-iterations`. (#386)
- Support for curve P256 and BoringCrypto has been added. See README section
"Curve P256 and BoringCrypto" for more details. (#865, #861, #769, #856, #803)
- New firewall rule `local_cidr`. This could be used to filter destinations
when using `unsafe_routes`. (#507)
- Add `unsafe_route` option `install`. This controls whether the route is
installed in the systems routing table. (#831)
- Add `tun.use_system_route_table` option. Set to true to manage unsafe routes
directly on the system route table with gateway routes instead of in Nebula
configuration files. This is only supported on Linux. (#839)
- The metric `certificate.ttl_seconds` is now exposed via stats. (#782)
- Add `punchy.respond_delay` option. This allows you to change the delay
before attempting punchy.respond. Default is 5 seconds. (#721)
- Added SSH commands to allow the capture of a mutex profile. (#737)
- You can now set `lighthouse.calculated_remotes` to make it possible to do
handshakes without a lighthouse in certain configurations. (#759)
- The firewall can be configured to send REJECT replies instead of the default
DROP behavior. (#738)
- For macOS, an example launchd configuration file is now provided. (#762)
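Hedged sketches of two of the additions listed above: an encrypted CA using the Argon2id flags named in the first entry, and an `unsafe_routes` entry combining the new `install` option with the earlier `metric` option. All values and addresses are illustrative:

```sh
# Create a CA whose private key is protected by a passphrase (prompted
# interactively); the Argon2id parameters below are optional overrides.
./nebula-cert ca -name "Example Org" -encrypt \
  -argon-memory 2097152 -argon-parallelism 4 -argon-iterations 2

# Hypothetical unsafe_routes entry: keep the route out of the system routing
# table (install: false) and de-prioritize it relative to an identical route.
cat >>config.yml <<EOF
tun:
  unsafe_routes:
    - route: 172.16.1.0/24
      via: 192.168.100.99
      metric: 100
      install: false
EOF
```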
### Changed
- Lighthouses and other `static_host_map` entries that use DNS names will now
be automatically refreshed to detect when the IP address changes. (#796)
- Lighthouses send ACK replies back to clients so that they do not fall into
connection testing as often by clients. (#851, #408)
- Allow the `listen.host` option to contain a hostname. (#825)
- When Nebula switches to a new certificate (such as via SIGHUP), we now
rehandshake with all existing tunnels. This allows firewall groups to be
updated and `pki.disconnect_invalid` to know about the new certificate
expiration time. (#838, #857, #842, #840, #835, #828, #820, #807)
### Fixed
- Always disconnect blocklisted hosts, even if `pki.disconnect_invalid` is
not set. (#858)
- Dependencies updated and go1.20 required. (#780, #824, #855, #854)
- Fix possible race condition with relays. (#827)
- FreeBSD: Fix connection to the localhost's own Nebula IP. (#808)
- Normalize and document some common log field values. (#837, #811)
- Fix crash if you set unlucky values for the firewall timeout configuration
options. (#802)
- Make DNS queries case insensitive. (#793)
- Update example systemd configurations to want `nss-lookup`. (#791)
- Errors with SSH commands now go to the SSH tunnel instead of stderr. (#757)
- Fix a hang when shutting down Android. (#772)
## [1.6.1] - 2022-09-26
### Fixed
- Refuse to process underlay packets received from overlay IPs. This prevents
confusion on hosts that have unsafe routes configured. (#741)
- The ssh `reload` command did not work on Windows, since it relied on sending
a SIGHUP signal internally. This has been fixed. (#725)
- A regression in v1.5.2 that broke unsafe routes on Mobile clients has been
fixed. (#729)
## [1.6.0] - 2022-06-30
### Added
- Experimental: nebula clients can be configured to act as relays for other nebula clients.
Primarily useful when stubborn NATs make a direct tunnel impossible. (#678)
- Configuration option to report manually specified `ip:port`s to lighthouses. (#650)
- Windows arm64 build. (#638)
- `punchy` and most `lighthouse` config options now support hot reloading. (#649)
### Changed
- Build against go 1.18. (#656)
- Promoted `routines` config from experimental to supported feature. (#702)
- Dependencies updated. (#664)
### Fixed
- Packets destined for the same host that sent it will be returned on MacOS.
This matches the default behavior of other operating systems. (#501)
- `unsafe_route` configuration will no longer crash on Windows. (#648)
- A few panics that were introduced in 1.5.x. (#657, #658, #675)
### Security
- You can set `listen.send_recv_error` to control the conditions in which
`recv_error` messages are sent. Sending these messages can expose the fact
that Nebula is running on a host, but it speeds up re-handshaking. (#670)
### Removed
- `x509` config stanza support has been removed. (#685)
## [1.5.2] - 2021-12-14
### Added
- Warn when a non lighthouse node does not have lighthouse hosts configured. (#587)
### Changed
- No longer fatals if expired CA certificates are present in `pki.ca`, as long as 1 valid CA is present. (#599)
- `nebula-cert` will now enforce ipv4 addresses. (#604)
- Warn on macOS if an unsafe route cannot be created due to a collision with an
existing route. (#610)
- Warn if you set a route MTU on platforms where we don't support it. (#611)
### Fixed
- Rare race condition when tearing down a tunnel due to `recv_error` and sending packets on another thread. (#590)
- Bug in `routes` and `unsafe_routes` handling that was introduced in 1.5.0. (#595)
- `-test` mode no longer results in a crash. (#602)
### Removed
- `x509.ca` config alias for `pki.ca`. (#604)
### Security
- Upgraded `golang.org/x/crypto` to address an issue which allowed unauthenticated clients to cause a panic in SSH
servers. (#603)
## 1.5.1 - 2021-12-13
(This release was skipped due to discovering #610 and #611 after the tag was
created.)
## [1.5.0] - 2021-11-11
### Added
- SSH `print-cert` has a new `-raw` flag to get the PEM representation of a certificate. (#483)
@@ -387,27 +20,12 @@ created.)
  certificates since the last handshake. (#370)
- New config option `unsafe_routes.<route>.metric` will set a metric for a specific unsafe route. It's useful if you have
-  more than one identical route and want to prefer one against the other. (#353)
+  more than one identical route and want to prefer one against the other.
### Changed
- Build against go 1.17. (#553)
-- Build with `CGO_ENABLED=0` set, to create more portable binaries. This could
-  have an effect on DNS resolution if you rely on anything non-standard. (#421)
-- Windows now uses the [wintun](https://www.wintun.net/) driver which does not require installation. This driver
-  is a large improvement over the TAP driver that was used in previous versions. If you had a previous version
-  of `nebula` running, you will want to disable the tap driver in Control Panel, or uninstall the `tap0901` driver
-  before running this version. (#289)
-- Darwin binaries are now universal (works on both amd64 and arm64), signed, and shipped in a notarized zip file.
-  `nebula-darwin.zip` will be the only darwin release artifact. (#571)
-- Darwin uses syscalls and AF_ROUTE to configure the routing table, instead of
-  using `/sbin/route`. Setting `tun.dev` is now allowed on Darwin as well, it
-  must be in the format `utun[0-9]+` or it will be ignored. (#163)
### Deprecated
- The `preferred_ranges` option has been supported as a replacement for
@@ -431,8 +49,6 @@ created.)
- A race condition when `punchy.respond` is enabled and ensures the correct
  vpn ip is sent a punch back response in highly queried node. (#566)
-- Fix a rare crash during handshake due to a race condition. (#535)
## [1.4.0] - 2021-05-11
### Added
@@ -671,22 +287,7 @@ created.)
- Initial public release.
-[Unreleased]: https://github.com/slackhq/nebula/compare/v1.9.4...HEAD
-[1.9.4]: https://github.com/slackhq/nebula/releases/tag/v1.9.4
-[1.9.3]: https://github.com/slackhq/nebula/releases/tag/v1.9.3
-[1.9.2]: https://github.com/slackhq/nebula/releases/tag/v1.9.2
-[1.9.1]: https://github.com/slackhq/nebula/releases/tag/v1.9.1
-[1.9.0]: https://github.com/slackhq/nebula/releases/tag/v1.9.0
-[1.8.2]: https://github.com/slackhq/nebula/releases/tag/v1.8.2
-[1.8.1]: https://github.com/slackhq/nebula/releases/tag/v1.8.1
-[1.8.0]: https://github.com/slackhq/nebula/releases/tag/v1.8.0
-[1.7.2]: https://github.com/slackhq/nebula/releases/tag/v1.7.2
-[1.7.1]: https://github.com/slackhq/nebula/releases/tag/v1.7.1
-[1.7.0]: https://github.com/slackhq/nebula/releases/tag/v1.7.0
-[1.6.1]: https://github.com/slackhq/nebula/releases/tag/v1.6.1
-[1.6.0]: https://github.com/slackhq/nebula/releases/tag/v1.6.0
-[1.5.2]: https://github.com/slackhq/nebula/releases/tag/v1.5.2
-[1.5.0]: https://github.com/slackhq/nebula/releases/tag/v1.5.0
+[Unreleased]: https://github.com/slackhq/nebula/compare/v1.4.0...HEAD
 [1.4.0]: https://github.com/slackhq/nebula/releases/tag/v1.4.0
 [1.3.0]: https://github.com/slackhq/nebula/releases/tag/v1.3.0
 [1.2.0]: https://github.com/slackhq/nebula/releases/tag/v1.2.0


@@ -1,37 +0,0 @@
### Logging conventions
A log message (the string/format passed to `Info`, `Error`, `Debug`, etc., as well as their `Sprintf` counterparts) should
be a descriptive message about the event and may contain specific identifying characteristics. Regardless of the
level of detail in the message, identifying characteristics should always be included via `WithField`, `WithFields`, or
`WithError`.
If an error is being logged, use `l.WithError(err)` so that there is better discoverability about the event as well
as the specific error condition.
#### Common fields
- `cert` - a `cert.NebulaCertificate` object; do not call `.String()` on it manually, `logrus` will marshal objects properly
  for the formatter it is using.
- `fingerprint` - a single `NebulaCertificate` hex-encoded fingerprint
- `fingerprints` - an array of `NebulaCertificate` hex-encoded fingerprints
- `fwPacket` - a FirewallPacket object
- `handshake` - an object containing:
- `stage` - the current stage counter
- `style` - noise handshake style `ix_psk0`, `xx`, etc
- `header` - a nebula header object
- `udpAddr` - a `net.UDPAddr` object
- `udpIp` - a udp ip address
- `vpnIp` - vpn ip of the host (remote or local)
- `relay` - the vpnIp of the relay host that is or should be handling the relay packet
- `relayFrom` - The vpnIp of the initial sender of the relayed packet
- `relayTo` - The vpnIp of the final destination of a relayed packet
#### Example:
```
l.WithError(err).
WithField("vpnIp", IntIp(hostinfo.hostId)).
WithField("udpAddr", addr).
WithField("handshake", m{"stage": 1, "style": "ix"}).
Info("Invalid certificate from host")
```

Makefile

@@ -1,14 +1,18 @@
GOMINVERSION = 1.17
NEBULA_CMD_PATH = "./cmd/nebula" NEBULA_CMD_PATH = "./cmd/nebula"
CGO_ENABLED = 0 GO111MODULE = on
export CGO_ENABLED export GO111MODULE
# Set up OS specific bits # Set up OS specific bits
ifeq ($(OS),Windows_NT) ifeq ($(OS),Windows_NT)
#TODO: we should be able to ditch awk as well
GOVERSION := $(shell go version | awk "{print substr($$3, 3)}")
GOISMIN := $(shell IF "$(GOVERSION)" GEQ "$(GOMINVERSION)" ECHO 1)
NEBULA_CMD_SUFFIX = .exe NEBULA_CMD_SUFFIX = .exe
NULL_FILE = nul NULL_FILE = nul
# RIO on windows does pointer stuff that makes go vet angry
VET_FLAGS = -unsafeptr=false
else else
GOVERSION := $(shell go version | awk '{print substr($$3, 3)}')
GOISMIN := $(shell expr "$(GOVERSION)" ">=" "$(GOMINVERSION)")
NEBULA_CMD_SUFFIX = NEBULA_CMD_SUFFIX =
NULL_FILE = /dev/null NULL_FILE = /dev/null
endif endif
@@ -22,9 +26,6 @@ ifndef BUILD_NUMBER
endif endif
endif endif
DOCKER_IMAGE_REPO ?= nebulaoss/nebula
DOCKER_IMAGE_TAG ?= latest
LDFLAGS = -X main.Build=$(BUILD_NUMBER) LDFLAGS = -X main.Build=$(BUILD_NUMBER)
ALL_LINUX = linux-amd64 \ ALL_LINUX = linux-amd64 \
@@ -39,31 +40,18 @@ ALL_LINUX = linux-amd64 \
linux-mips64 \ linux-mips64 \
linux-mips64le \ linux-mips64le \
linux-mips-softfloat \ linux-mips-softfloat \
linux-riscv64 \ linux-riscv64
linux-loong64
ALL_FREEBSD = freebsd-amd64 \
freebsd-arm64
ALL_OPENBSD = openbsd-amd64 \
openbsd-arm64
ALL_NETBSD = netbsd-amd64 \
netbsd-arm64
ALL = $(ALL_LINUX) \ ALL = $(ALL_LINUX) \
$(ALL_FREEBSD) \
$(ALL_OPENBSD) \
$(ALL_NETBSD) \
darwin-amd64 \ darwin-amd64 \
darwin-arm64 \ darwin-arm64 \
windows-amd64 \ freebsd-amd64 \
windows-arm64 windows-amd64
e2e: e2e:
$(TEST_ENV) go test -tags=e2e_testing -count=1 $(TEST_FLAGS) ./e2e $(TEST_ENV) go test -tags=e2e_testing -count=1 $(TEST_FLAGS) ./e2e
e2ev: TEST_FLAGS += -v e2ev: TEST_FLAGS = -v
e2ev: e2e e2ev: e2e
e2evv: TEST_ENV += TEST_LOGS=1 e2evv: TEST_ENV += TEST_LOGS=1
@@ -75,51 +63,25 @@ e2evvv: e2ev
e2evvvv: TEST_ENV += TEST_LOGS=3 e2evvvv: TEST_ENV += TEST_LOGS=3
e2evvvv: e2ev e2evvvv: e2ev
e2e-bench: TEST_FLAGS = -bench=. -benchmem -run=^$
e2e-bench: e2e
DOCKER_BIN = build/linux-amd64/nebula build/linux-amd64/nebula-cert
all: $(ALL:%=build/%/nebula) $(ALL:%=build/%/nebula-cert) all: $(ALL:%=build/%/nebula) $(ALL:%=build/%/nebula-cert)
docker: docker/linux-$(shell go env GOARCH)
release: $(ALL:%=build/nebula-%.tar.gz) release: $(ALL:%=build/nebula-%.tar.gz)
release-linux: $(ALL_LINUX:%=build/nebula-%.tar.gz) release-linux: $(ALL_LINUX:%=build/nebula-%.tar.gz)
release-freebsd: $(ALL_FREEBSD:%=build/nebula-%.tar.gz) release-freebsd: build/nebula-freebsd-amd64.tar.gz
release-openbsd: $(ALL_OPENBSD:%=build/nebula-%.tar.gz) BUILD_ARGS = -trimpath
release-netbsd: $(ALL_NETBSD:%=build/nebula-%.tar.gz)
release-boringcrypto: build/nebula-linux-$(shell go env GOARCH)-boringcrypto.tar.gz
BUILD_ARGS += -trimpath
bin-windows: build/windows-amd64/nebula.exe build/windows-amd64/nebula-cert.exe bin-windows: build/windows-amd64/nebula.exe build/windows-amd64/nebula-cert.exe
mv $? . mv $? .
bin-windows-arm64: build/windows-arm64/nebula.exe build/windows-arm64/nebula-cert.exe
mv $? .
bin-darwin: build/darwin-amd64/nebula build/darwin-amd64/nebula-cert bin-darwin: build/darwin-amd64/nebula build/darwin-amd64/nebula-cert
mv $? . mv $? .
bin-freebsd: build/freebsd-amd64/nebula build/freebsd-amd64/nebula-cert bin-freebsd: build/freebsd-amd64/nebula build/freebsd-amd64/nebula-cert
mv $? . mv $? .
bin-freebsd-arm64: build/freebsd-arm64/nebula build/freebsd-arm64/nebula-cert
mv $? .
bin-boringcrypto: build/linux-$(shell go env GOARCH)-boringcrypto/nebula build/linux-$(shell go env GOARCH)-boringcrypto/nebula-cert
mv $? .
bin-pkcs11: BUILD_ARGS += -tags pkcs11
bin-pkcs11: CGO_ENABLED = 1
bin-pkcs11: bin
bin: bin:
go build $(BUILD_ARGS) -ldflags "$(LDFLAGS)" -o ./nebula${NEBULA_CMD_SUFFIX} ${NEBULA_CMD_PATH} go build $(BUILD_ARGS) -ldflags "$(LDFLAGS)" -o ./nebula${NEBULA_CMD_SUFFIX} ${NEBULA_CMD_PATH}
go build $(BUILD_ARGS) -ldflags "$(LDFLAGS)" -o ./nebula-cert${NEBULA_CMD_SUFFIX} ./cmd/nebula-cert go build $(BUILD_ARGS) -ldflags "$(LDFLAGS)" -o ./nebula-cert${NEBULA_CMD_SUFFIX} ./cmd/nebula-cert
@@ -134,12 +96,6 @@ build/linux-mips-%: GOENV += GOMIPS=$(word 3, $(subst -, ,$*))
# Build an extra small binary for mips-softfloat # Build an extra small binary for mips-softfloat
build/linux-mips-softfloat/%: LDFLAGS += -s -w build/linux-mips-softfloat/%: LDFLAGS += -s -w
# boringcrypto
build/linux-amd64-boringcrypto/%: GOENV += GOEXPERIMENT=boringcrypto CGO_ENABLED=1
build/linux-arm64-boringcrypto/%: GOENV += GOEXPERIMENT=boringcrypto CGO_ENABLED=1
build/linux-amd64-boringcrypto/%: LDFLAGS += -checklinkname=0
build/linux-arm64-boringcrypto/%: LDFLAGS += -checklinkname=0
build/%/nebula: .FORCE build/%/nebula: .FORCE
GOOS=$(firstword $(subst -, , $*)) \ GOOS=$(firstword $(subst -, , $*)) \
GOARCH=$(word 2, $(subst -, ,$*)) $(GOENV) \ GOARCH=$(word 2, $(subst -, ,$*)) $(GOENV) \
@@ -162,31 +118,16 @@ build/nebula-%.tar.gz: build/%/nebula build/%/nebula-cert
build/nebula-%.zip: build/%/nebula.exe build/%/nebula-cert.exe build/nebula-%.zip: build/%/nebula.exe build/%/nebula-cert.exe
cd build/$* && zip ../nebula-$*.zip nebula.exe nebula-cert.exe cd build/$* && zip ../nebula-$*.zip nebula.exe nebula-cert.exe
docker/%: build/%/nebula build/%/nebula-cert
docker build . $(DOCKER_BUILD_ARGS) -f docker/Dockerfile --platform "$(subst -,/,$*)" --tag "${DOCKER_IMAGE_REPO}:${DOCKER_IMAGE_TAG}" --tag "${DOCKER_IMAGE_REPO}:$(BUILD_NUMBER)"
vet: vet:
go vet $(VET_FLAGS) -v ./... go vet -v ./...
test: test:
go test -v ./... go test -v ./...
test-boringcrypto:
GOEXPERIMENT=boringcrypto CGO_ENABLED=1 go test -ldflags "-checklinkname=0" -v ./...
test-pkcs11:
CGO_ENABLED=1 go test -v -tags pkcs11 ./...
test-cov-html: test-cov-html:
go test -coverprofile=coverage.out go test -coverprofile=coverage.out
go tool cover -html=coverage.out go tool cover -html=coverage.out
build-test-mobile:
GOARCH=amd64 GOOS=ios go build $(shell go list ./... | grep -v '/cmd/\|/examples/')
GOARCH=arm64 GOOS=ios go build $(shell go list ./... | grep -v '/cmd/\|/examples/')
GOARCH=amd64 GOOS=android go build $(shell go list ./... | grep -v '/cmd/\|/examples/')
GOARCH=arm64 GOOS=android go build $(shell go list ./... | grep -v '/cmd/\|/examples/')
bench: bench:
go test -bench=. go test -bench=.
@@ -198,7 +139,7 @@ bench-cpu-long:
go test -bench=. -benchtime=60s -cpuprofile=cpu.pprof go test -bench=. -benchtime=60s -cpuprofile=cpu.pprof
go tool pprof go-audit.test cpu.pprof go tool pprof go-audit.test cpu.pprof
proto: nebula.pb.go cert/cert_v1.pb.go proto: nebula.pb.go cert/cert.pb.go
nebula.pb.go: nebula.proto .FORCE nebula.pb.go: nebula.proto .FORCE
go build github.com/gogo/protobuf/protoc-gen-gogofaster go build github.com/gogo/protobuf/protoc-gen-gogofaster
@@ -220,21 +161,10 @@ bin-docker: bin build/linux-amd64/nebula build/linux-amd64/nebula-cert
smoke-docker: bin-docker smoke-docker: bin-docker
cd .github/workflows/smoke/ && ./build.sh cd .github/workflows/smoke/ && ./build.sh
cd .github/workflows/smoke/ && ./smoke.sh cd .github/workflows/smoke/ && ./smoke.sh
cd .github/workflows/smoke/ && NAME="smoke-p256" CURVE="P256" ./build.sh
cd .github/workflows/smoke/ && NAME="smoke-p256" ./smoke.sh
smoke-relay-docker: bin-docker
cd .github/workflows/smoke/ && ./build-relay.sh
cd .github/workflows/smoke/ && ./smoke-relay.sh
smoke-docker-race: BUILD_ARGS = -race smoke-docker-race: BUILD_ARGS = -race
smoke-docker-race: CGO_ENABLED = 1
smoke-docker-race: smoke-docker smoke-docker-race: smoke-docker
smoke-vagrant/%: bin-docker build/%/nebula
cd .github/workflows/smoke/ && ./build.sh $*
cd .github/workflows/smoke/ && ./smoke-vagrant.sh $*
.FORCE: .FORCE:
.PHONY: bench bench-cpu bench-cpu-long bin build-test-mobile e2e e2ev e2evv e2evvv e2evvvv proto release service smoke-docker smoke-docker-race test test-cov-html smoke-vagrant/% .PHONY: e2e e2ev e2evv e2evvv e2evvvv test test-cov-html bench bench-cpu bench-cpu-long bin proto release service smoke-docker smoke-docker-race
.DEFAULT_GOAL := bin .DEFAULT_GOAL := bin
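A few example invocations of targets defined in this Makefile; all of these target names appear in the newer side of the diff above:

```sh
make bin            # build nebula and nebula-cert for the current platform
make release-linux  # cross-compile and package the Linux targets
make e2evv          # run the e2e tests with verbose output and logs
make smoke-docker   # run the docker-based smoke tests
```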


@@ -4,15 +4,15 @@ It lets you seamlessly connect computers anywhere in the world. Nebula is portab
It can be used to connect a small number of computers, but is also able to connect tens of thousands of computers. It can be used to connect a small number of computers, but is also able to connect tens of thousands of computers.
Nebula incorporates a number of existing concepts like encryption, security groups, certificates, Nebula incorporates a number of existing concepts like encryption, security groups, certificates,
and tunneling. and tunneling, and each of those individual pieces existed before Nebula in various forms.
What makes Nebula different to existing offerings is that it brings all of these ideas together, What makes Nebula different to existing offerings is that it brings all of these ideas together,
resulting in a sum that is greater than its individual parts. resulting in a sum that is greater than its individual parts.
Further documentation can be found [here](https://nebula.defined.net/docs/). Further documentation can be found [here](https://www.defined.net/nebula/introduction/).
You can read more about Nebula [here](https://medium.com/p/884110a5579). You can read more about Nebula [here](https://medium.com/p/884110a5579).
You can also join the NebulaOSS Slack group [here](https://join.slack.com/t/nebulaoss/shared_invite/zt-39pk4xopc-CUKlGcb5Z39dQ0cK1v7ehA). You can also join the NebulaOSS Slack group [here](https://join.slack.com/t/nebulaoss/shared_invite/enQtOTA5MDI4NDg3MTg4LTkwY2EwNTI4NzQyMzc0M2ZlODBjNWI3NTY1MzhiOThiMmZlZjVkMTI0NGY4YTMyNjUwMWEyNzNkZTJmYzQxOGU).
## Supported Platforms ## Supported Platforms
@@ -27,34 +27,14 @@ Check the [releases](https://github.com/slackhq/nebula/releases/latest) page for
#### Distribution Packages #### Distribution Packages
- [Arch Linux](https://archlinux.org/packages/extra/x86_64/nebula/) - [Arch Linux](https://archlinux.org/packages/community/x86_64/nebula/)
```sh
sudo pacman -S nebula
``` ```
$ sudo pacman -S nebula
- [Fedora Linux](https://src.fedoraproject.org/rpms/nebula)
```sh
sudo dnf install nebula
``` ```
- [Fedora Linux](https://copr.fedorainfracloud.org/coprs/jdoss/nebula/)
- [Debian Linux](https://packages.debian.org/source/stable/nebula)
```sh
sudo apt install nebula
``` ```
$ sudo dnf copr enable jdoss/nebula
- [Alpine Linux](https://pkgs.alpinelinux.org/packages?name=nebula) $ sudo dnf install nebula
```sh
sudo apk add nebula
```
- [macOS Homebrew](https://github.com/Homebrew/homebrew-core/blob/HEAD/Formula/n/nebula.rb)
```sh
brew install nebula
```
- [Docker](https://hub.docker.com/r/nebulaoss/nebula)
```sh
docker pull nebulaoss/nebula
``` ```
#### Mobile #### Mobile
@@ -64,15 +44,15 @@ Check the [releases](https://github.com/slackhq/nebula/releases/latest) page for
## Technical Overview ## Technical Overview
Nebula is a mutually authenticated peer-to-peer software-defined network based on the [Noise Protocol Framework](https://noiseprotocol.org/). Nebula is a mutually authenticated peer-to-peer software defined network based on the [Noise Protocol Framework](https://noiseprotocol.org/).
Nebula uses certificates to assert a node's IP address, name, and membership within user-defined groups. Nebula uses certificates to assert a node's IP address, name, and membership within user-defined groups.
Nebula's user-defined groups allow for provider agnostic traffic filtering between nodes. Nebula's user-defined groups allow for provider agnostic traffic filtering between nodes.
Discovery nodes (aka lighthouses) allow individual peers to find each other and optionally use UDP hole punching to establish connections from behind most firewalls or NATs. Discovery nodes allow individual peers to find each other and optionally use UDP hole punching to establish connections from behind most firewalls or NATs.
Users can move data between nodes in any number of cloud service providers, datacenters, and endpoints, without needing to maintain a particular addressing scheme. Users can move data between nodes in any number of cloud service providers, datacenters, and endpoints, without needing to maintain a particular addressing scheme.
Nebula uses Elliptic-curve Diffie-Hellman (`ECDH`) key exchange and `AES-256-GCM` in its default configuration. Nebula uses elliptic curve Diffie-Hellman key exchange, and AES-256-GCM in its default configuration.
Nebula was created to provide a mechanism for groups of hosts to communicate securely, even across the internet, while enabling expressive firewall definitions similar in style to cloud security groups. Nebula was created to provide a mechanism for groups hosts to communicate securely, even across the internet, while enabling expressive firewall definitions similar in style to cloud security groups.
## Getting started (quickly) ## Getting started (quickly)
@@ -82,34 +62,28 @@ To set up a Nebula network, you'll need:
#### 2. (Optional, but you really should..) At least one discovery node with a routable IP address, which we call a lighthouse. #### 2. (Optional, but you really should..) At least one discovery node with a routable IP address, which we call a lighthouse.
Nebula lighthouses allow nodes to find each other, anywhere in the world. A lighthouse is the only node in a Nebula network whose IP should not change. Running a lighthouse requires very few compute resources, and you can easily use the least expensive option from a cloud hosting provider. If you're not sure which provider to use, a number of us have used $6/mo [DigitalOcean](https://digitalocean.com) droplets as lighthouses. Nebula lighthouses allow nodes to find each other, anywhere in the world. A lighthouse is the only node in a Nebula network whose IP should not change. Running a lighthouse requires very few compute resources, and you can easily use the least expensive option from a cloud hosting provider. If you're not sure which provider to use, a number of us have used $5/mo [DigitalOcean](https://digitalocean.com) droplets as lighthouses.
Once you have launched an instance, ensure that Nebula udp traffic (default port udp/4242) can reach it over the internet.
Once you have launched an instance, ensure that Nebula udp traffic (default port udp/4242) can reach it over the internet.
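For example, on a Linux lighthouse that uses `ufw`, a rule along these lines opens the default port (a sketch only; adapt it to whatever firewall your provider or host uses):

```sh
sudo ufw allow 4242/udp
```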
#### 3. A Nebula certificate authority, which will be the root of trust for a particular Nebula network. #### 3. A Nebula certificate authority, which will be the root of trust for a particular Nebula network.
```sh ```
./nebula-cert ca -name "Myorganization, Inc" ./nebula-cert ca -name "Myorganization, Inc"
``` ```
This will create files named `ca.key` and `ca.cert` in the current directory. The `ca.key` file is the most sensitive file you'll create, because it is the key used to sign the certificates for individual nebula nodes/hosts. Please store this file somewhere safe, preferably with strong encryption.
This will create files named `ca.key` and `ca.cert` in the current directory. The `ca.key` file is the most sensitive file you'll create, because it is the key used to sign the certificates for individual nebula nodes/hosts. Please store this file somewhere safe, preferably with strong encryption.
**Be aware!** By default, certificate authorities have a 1-year lifetime before expiration. See [this guide](https://nebula.defined.net/docs/guides/rotating-certificate-authority/) for details on rotating a CA.
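If the default lifetime is too short for your rollout plan, the `-duration` flag accepts a Go-style duration. A sketch issuing a CA valid for roughly five years (the flag exists; the specific value is illustrative):

```sh
./nebula-cert ca -name "Myorganization, Inc" -duration 43800h
```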
#### 4. Nebula host keys and certificates generated from that certificate authority #### 4. Nebula host keys and certificates generated from that certificate authority
This assumes you have four nodes, named lighthouse1, laptop, server1, host3. You can name the nodes any way you'd like, including FQDN. You'll also need to choose IP addresses and the associated subnet. In this example, we are creating a nebula network that will use 192.168.100.x/24 as its network range. This example also demonstrates nebula groups, which can later be used to define traffic rules in a nebula network. This assumes you have four nodes, named lighthouse1, laptop, server1, host3. You can name the nodes any way you'd like, including FQDN. You'll also need to choose IP addresses and the associated subnet. In this example, we are creating a nebula network that will use 192.168.100.x/24 as its network range. This example also demonstrates nebula groups, which can later be used to define traffic rules in a nebula network.
```sh ```
./nebula-cert sign -name "lighthouse1" -ip "192.168.100.1/24" ./nebula-cert sign -name "lighthouse1" -ip "192.168.100.1/24"
./nebula-cert sign -name "laptop" -ip "192.168.100.2/24" -groups "laptop,home,ssh" ./nebula-cert sign -name "laptop" -ip "192.168.100.2/24" -groups "laptop,home,ssh"
./nebula-cert sign -name "server1" -ip "192.168.100.9/24" -groups "servers" ./nebula-cert sign -name "server1" -ip "192.168.100.9/24" -groups "servers"
./nebula-cert sign -name "host3" -ip "192.168.100.10/24" ./nebula-cert sign -name "host3" -ip "192.168.100.10/24"
``` ```
By default, host certificates will expire 1 second before the CA expires. Use the `-duration` flag to specify a shorter lifetime.
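For example, a sketch limiting a single host certificate to roughly 90 days:

```sh
./nebula-cert sign -name "laptop" -ip "192.168.100.2/24" -groups "laptop,home,ssh" -duration 2160h
```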
#### 5. Configuration files for each host #### 5. Configuration files for each host
Download a copy of the nebula [example configuration](https://github.com/slackhq/nebula/blob/master/examples/config.yml). Download a copy of the nebula [example configuration](https://github.com/slackhq/nebula/blob/master/examples/config.yml).
* On the lighthouse node, you'll need to ensure `am_lighthouse: true` is set. * On the lighthouse node, you'll need to ensure `am_lighthouse: true` is set.
@@ -119,21 +93,18 @@ Download a copy of the nebula [example configuration](https://github.com/slackhq
#### 6. Copy nebula credentials, configuration, and binaries to each host #### 6. Copy nebula credentials, configuration, and binaries to each host
For each host, copy the nebula binary to the host, along with `config.yml` from step 5, and the files `ca.crt`, `{host}.crt`, and `{host}.key` from step 4. For each host, copy the nebula binary to the host, along with `config.yaml` from step 5, and the files `ca.crt`, `{host}.crt`, and `{host}.key` from step 4.
**DO NOT COPY `ca.key` TO INDIVIDUAL NODES.** **DO NOT COPY `ca.key` TO INDIVIDUAL NODES.**
#### 7. Run nebula on each host #### 7. Run nebula on each host
```sh
./nebula -config /path/to/config.yml
``` ```
./nebula -config /path/to/config.yaml
For more detailed instructions, [find the full documentation here](https://nebula.defined.net/docs/). ```
## Building Nebula from source ## Building Nebula from source
Make sure you have [go](https://go.dev/doc/install) installed and clone this repo. Change to the nebula directory. Download go and clone this repo. Change to the nebula directory.
To build nebula for all platforms: To build nebula for all platforms:
`make all` `make all`
@@ -143,20 +114,9 @@ To build nebula for a specific platform (ex, Windows):
See the [Makefile](Makefile) for more details on build targets See the [Makefile](Makefile) for more details on build targets
## Curve P256 and BoringCrypto
The default curve used for cryptographic handshakes and signatures is Curve25519. This is the recommended setting for most users. If your deployment has certain compliance requirements, you have the option of creating your CA using `nebula-cert ca -curve P256` to use NIST Curve P256. The CA will then sign certificates using ECDSA P256, and any hosts using these certificates will use P256 for ECDH handshakes.
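A sketch of creating a P256 CA and then signing a host against it; as described above, the signing step follows the curve of the CA key:

```sh
./nebula-cert ca -name "Myorganization, Inc" -curve P256
./nebula-cert sign -name "server1" -ip "192.168.100.9/24"
```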
In addition, Nebula can be built using the [BoringCrypto GOEXPERIMENT](https://github.com/golang/go/blob/go1.20/src/crypto/internal/boring/README.md) by running either of the following make targets:
```sh
make bin-boringcrypto
make release-boringcrypto
```
This is not the recommended default deployment, but may be useful based on your compliance requirements.
## Credits ## Credits
Nebula was created at Slack Technologies, Inc by Nate Brown and Ryan Huber, with contributions from Oliver Fross, Alan Lam, Wade Simmons, and Lining Wang. Nebula was created at Slack Technologies, Inc by Nate Brown and Ryan Huber, with contributions from Oliver Fross, Alan Lam, Wade Simmons, and Lining Wang.


@@ -1,12 +0,0 @@
Security Policy
===============
Reporting a Vulnerability
-------------------------
If you believe you have found a security vulnerability with Nebula, please let
us know right away. We will investigate all reports and do our best to quickly
fix valid issues.
You can submit your report on [HackerOne](https://hackerone.com/slack) and our
security team will respond as soon as possible.


@@ -2,16 +2,17 @@ package nebula
import ( import (
"fmt" "fmt"
"net/netip" "net"
"regexp" "regexp"
"github.com/gaissmai/bart" "github.com/slackhq/nebula/cidr"
"github.com/slackhq/nebula/config" "github.com/slackhq/nebula/config"
"github.com/slackhq/nebula/iputil"
) )
type AllowList struct { type AllowList struct {
// The values of this cidrTree are `bool`, signifying allow/deny // The values of this cidrTree are `bool`, signifying allow/deny
cidrTree *bart.Table[bool] cidrTree *cidr.Tree6
} }
type RemoteAllowList struct { type RemoteAllowList struct {
@@ -19,7 +20,7 @@ type RemoteAllowList struct {
// Inside Range Specific, keys of this tree are inside CIDRs and values // Inside Range Specific, keys of this tree are inside CIDRs and values
// are *AllowList // are *AllowList
insideAllowLists *bart.Table[*AllowList] insideAllowLists *cidr.Tree6
} }
type LocalAllowList struct { type LocalAllowList struct {
@@ -36,7 +37,7 @@ type AllowListNameRule struct {
func NewLocalAllowListFromConfig(c *config.C, k string) (*LocalAllowList, error) { func NewLocalAllowListFromConfig(c *config.C, k string) (*LocalAllowList, error) {
var nameRules []AllowListNameRule var nameRules []AllowListNameRule
handleKey := func(key string, value any) (bool, error) { handleKey := func(key string, value interface{}) (bool, error) {
if key == "interfaces" { if key == "interfaces" {
var err error var err error
nameRules, err = getAllowListInterfaces(k, value) nameRules, err = getAllowListInterfaces(k, value)
@@ -70,7 +71,7 @@ func NewRemoteAllowListFromConfig(c *config.C, k, rangesKey string) (*RemoteAllo
// If the handleKey func returns true, the rest of the parsing is skipped // If the handleKey func returns true, the rest of the parsing is skipped
// for this key. This allows parsing of special values like `interfaces`. // for this key. This allows parsing of special values like `interfaces`.
func newAllowListFromConfig(c *config.C, k string, handleKey func(key string, value any) (bool, error)) (*AllowList, error) { func newAllowListFromConfig(c *config.C, k string, handleKey func(key string, value interface{}) (bool, error)) (*AllowList, error) {
r := c.Get(k) r := c.Get(k)
if r == nil { if r == nil {
return nil, nil return nil, nil
@@ -81,13 +82,13 @@ func newAllowListFromConfig(c *config.C, k string, handleKey func(key string, va
// If the handleKey func returns true, the rest of the parsing is skipped // If the handleKey func returns true, the rest of the parsing is skipped
// for this key. This allows parsing of special values like `interfaces`. // for this key. This allows parsing of special values like `interfaces`.
func newAllowList(k string, raw any, handleKey func(key string, value any) (bool, error)) (*AllowList, error) { func newAllowList(k string, raw interface{}, handleKey func(key string, value interface{}) (bool, error)) (*AllowList, error) {
rawMap, ok := raw.(map[string]any) rawMap, ok := raw.(map[interface{}]interface{})
if !ok { if !ok {
return nil, fmt.Errorf("config `%s` has invalid type: %T", k, raw) return nil, fmt.Errorf("config `%s` has invalid type: %T", k, raw)
} }
tree := new(bart.Table[bool]) tree := cidr.NewTree6()
// Keep track of the rules we have added for both ipv4 and ipv6 // Keep track of the rules we have added for both ipv4 and ipv6
type allowListRules struct { type allowListRules struct {
@@ -100,7 +101,12 @@ func newAllowList(k string, raw any, handleKey func(key string, value any) (bool
rules4 := allowListRules{firstValue: true, allValuesMatch: true, defaultSet: false} rules4 := allowListRules{firstValue: true, allValuesMatch: true, defaultSet: false}
rules6 := allowListRules{firstValue: true, allValuesMatch: true, defaultSet: false} rules6 := allowListRules{firstValue: true, allValuesMatch: true, defaultSet: false}
for rawCIDR, rawValue := range rawMap { for rawKey, rawValue := range rawMap {
rawCIDR, ok := rawKey.(string)
if !ok {
return nil, fmt.Errorf("config `%s` has invalid key (type %T): %v", k, rawKey, rawKey)
}
if handleKey != nil { if handleKey != nil {
handled, err := handleKey(rawCIDR, rawValue) handled, err := handleKey(rawCIDR, rawValue)
if err != nil { if err != nil {
@@ -111,24 +117,23 @@ func newAllowList(k string, raw any, handleKey func(key string, value any) (bool
} }
} }
value, ok := config.AsBool(rawValue) value, ok := rawValue.(bool)
if !ok { if !ok {
return nil, fmt.Errorf("config `%s` has invalid value (type %T): %v", k, rawValue, rawValue) return nil, fmt.Errorf("config `%s` has invalid value (type %T): %v", k, rawValue, rawValue)
} }
ipNet, err := netip.ParsePrefix(rawCIDR) _, ipNet, err := net.ParseCIDR(rawCIDR)
if err != nil { if err != nil {
return nil, fmt.Errorf("config `%s` has invalid CIDR: %s. %w", k, rawCIDR, err) return nil, fmt.Errorf("config `%s` has invalid CIDR: %s", k, rawCIDR)
} }
ipNet = netip.PrefixFrom(ipNet.Addr().Unmap(), ipNet.Bits()) // TODO: should we error on duplicate CIDRs in the config?
tree.AddCIDR(ipNet, value)
tree.Insert(ipNet, value) maskBits, maskSize := ipNet.Mask.Size()
maskBits := ipNet.Bits()
var rules *allowListRules var rules *allowListRules
if ipNet.Addr().Is4() { if maskSize == 32 {
rules = &rules4 rules = &rules4
} else { } else {
rules = &rules6 rules = &rules6
@@ -151,7 +156,8 @@ func newAllowList(k string, raw any, handleKey func(key string, value any) (bool
if !rules4.defaultSet { if !rules4.defaultSet {
if rules4.allValuesMatch { if rules4.allValuesMatch {
tree.Insert(netip.PrefixFrom(netip.IPv4Unspecified(), 0), !rules4.allValues) _, zeroCIDR, _ := net.ParseCIDR("0.0.0.0/0")
tree.AddCIDR(zeroCIDR, !rules4.allValues)
} else { } else {
return nil, fmt.Errorf("config `%s` contains both true and false rules, but no default set for 0.0.0.0/0", k) return nil, fmt.Errorf("config `%s` contains both true and false rules, but no default set for 0.0.0.0/0", k)
} }
@@ -159,7 +165,8 @@ func newAllowList(k string, raw any, handleKey func(key string, value any) (bool
if !rules6.defaultSet { if !rules6.defaultSet {
if rules6.allValuesMatch { if rules6.allValuesMatch {
tree.Insert(netip.PrefixFrom(netip.IPv6Unspecified(), 0), !rules6.allValues) _, zeroCIDR, _ := net.ParseCIDR("::/0")
tree.AddCIDR(zeroCIDR, !rules6.allValues)
} else { } else {
return nil, fmt.Errorf("config `%s` contains both true and false rules, but no default set for ::/0", k) return nil, fmt.Errorf("config `%s` contains both true and false rules, but no default set for ::/0", k)
} }
@@ -168,18 +175,22 @@ func newAllowList(k string, raw any, handleKey func(key string, value any) (bool
return &AllowList{cidrTree: tree}, nil return &AllowList{cidrTree: tree}, nil
} }
func getAllowListInterfaces(k string, v any) ([]AllowListNameRule, error) { func getAllowListInterfaces(k string, v interface{}) ([]AllowListNameRule, error) {
var nameRules []AllowListNameRule var nameRules []AllowListNameRule
rawRules, ok := v.(map[string]any) rawRules, ok := v.(map[interface{}]interface{})
if !ok { if !ok {
return nil, fmt.Errorf("config `%s.interfaces` is invalid (type %T): %v", k, v, v) return nil, fmt.Errorf("config `%s.interfaces` is invalid (type %T): %v", k, v, v)
} }
firstEntry := true firstEntry := true
var allValues bool var allValues bool
for name, rawAllow := range rawRules { for rawName, rawAllow := range rawRules {
allow, ok := config.AsBool(rawAllow) name, ok := rawName.(string)
if !ok {
return nil, fmt.Errorf("config `%s.interfaces` has invalid key (type %T): %v", k, rawName, rawName)
}
allow, ok := rawAllow.(bool)
if !ok { if !ok {
return nil, fmt.Errorf("config `%s.interfaces` has invalid value (type %T): %v", k, rawAllow, rawAllow) return nil, fmt.Errorf("config `%s.interfaces` has invalid value (type %T): %v", k, rawAllow, rawAllow)
} }
@@ -207,49 +218,87 @@ func getAllowListInterfaces(k string, v any) ([]AllowListNameRule, error) {
return nameRules, nil return nameRules, nil
} }
func getRemoteAllowRanges(c *config.C, k string) (*bart.Table[*AllowList], error) { func getRemoteAllowRanges(c *config.C, k string) (*cidr.Tree6, error) {
value := c.Get(k) value := c.Get(k)
if value == nil { if value == nil {
return nil, nil return nil, nil
} }
remoteAllowRanges := new(bart.Table[*AllowList]) remoteAllowRanges := cidr.NewTree6()
rawMap, ok := value.(map[string]any) rawMap, ok := value.(map[interface{}]interface{})
if !ok { if !ok {
return nil, fmt.Errorf("config `%s` has invalid type: %T", k, value) return nil, fmt.Errorf("config `%s` has invalid type: %T", k, value)
} }
for rawCIDR, rawValue := range rawMap { for rawKey, rawValue := range rawMap {
rawCIDR, ok := rawKey.(string)
if !ok {
return nil, fmt.Errorf("config `%s` has invalid key (type %T): %v", k, rawKey, rawKey)
}
allowList, err := newAllowList(fmt.Sprintf("%s.%s", k, rawCIDR), rawValue, nil) allowList, err := newAllowList(fmt.Sprintf("%s.%s", k, rawCIDR), rawValue, nil)
if err != nil { if err != nil {
return nil, err return nil, err
} }
ipNet, err := netip.ParsePrefix(rawCIDR) _, ipNet, err := net.ParseCIDR(rawCIDR)
if err != nil { if err != nil {
return nil, fmt.Errorf("config `%s` has invalid CIDR: %s. %w", k, rawCIDR, err) return nil, fmt.Errorf("config `%s` has invalid CIDR: %s", k, rawCIDR)
} }
remoteAllowRanges.Insert(netip.PrefixFrom(ipNet.Addr().Unmap(), ipNet.Bits()), allowList) remoteAllowRanges.AddCIDR(ipNet, allowList)
} }
return remoteAllowRanges, nil return remoteAllowRanges, nil
} }
func (al *AllowList) Allow(addr netip.Addr) bool { func (al *AllowList) Allow(ip net.IP) bool {
if al == nil { if al == nil {
return true return true
} }
result, _ := al.cidrTree.Lookup(addr) result := al.cidrTree.MostSpecificContains(ip)
return result switch v := result.(type) {
case bool:
return v
default:
panic(fmt.Errorf("invalid state, allowlist returned: %T %v", result, result))
}
} }
func (al *LocalAllowList) Allow(udpAddr netip.Addr) bool { func (al *AllowList) AllowIpV4(ip iputil.VpnIp) bool {
if al == nil { if al == nil {
return true return true
} }
return al.AllowList.Allow(udpAddr)
result := al.cidrTree.MostSpecificContainsIpV4(ip)
switch v := result.(type) {
case bool:
return v
default:
panic(fmt.Errorf("invalid state, allowlist returned: %T %v", result, result))
}
}
func (al *AllowList) AllowIpV6(hi, lo uint64) bool {
if al == nil {
return true
}
result := al.cidrTree.MostSpecificContainsIpV6(hi, lo)
switch v := result.(type) {
case bool:
return v
default:
panic(fmt.Errorf("invalid state, allowlist returned: %T %v", result, result))
}
}
func (al *LocalAllowList) Allow(ip net.IP) bool {
if al == nil {
return true
}
return al.AllowList.Allow(ip)
} }
func (al *LocalAllowList) AllowName(name string) bool { func (al *LocalAllowList) AllowName(name string) bool {
@@ -267,39 +316,45 @@ func (al *LocalAllowList) AllowName(name string) bool {
return !al.nameRules[0].Allow return !al.nameRules[0].Allow
} }
func (al *RemoteAllowList) AllowUnknownVpnAddr(vpnAddr netip.Addr) bool { func (al *RemoteAllowList) AllowUnknownVpnIp(ip net.IP) bool {
if al == nil { if al == nil {
return true return true
} }
return al.AllowList.Allow(vpnAddr) return al.AllowList.Allow(ip)
} }
func (al *RemoteAllowList) Allow(vpnAddr netip.Addr, udpAddr netip.Addr) bool { func (al *RemoteAllowList) Allow(vpnIp iputil.VpnIp, ip net.IP) bool {
if !al.getInsideAllowList(vpnAddr).Allow(udpAddr) { if !al.getInsideAllowList(vpnIp).Allow(ip) {
return false return false
} }
return al.AllowList.Allow(udpAddr) return al.AllowList.Allow(ip)
} }
func (al *RemoteAllowList) AllowAll(vpnAddrs []netip.Addr, udpAddr netip.Addr) bool { func (al *RemoteAllowList) AllowIpV4(vpnIp iputil.VpnIp, ip iputil.VpnIp) bool {
if !al.AllowList.Allow(udpAddr) { if al == nil {
return true
}
if !al.getInsideAllowList(vpnIp).AllowIpV4(ip) {
return false return false
} }
return al.AllowList.AllowIpV4(ip)
for _, vpnAddr := range vpnAddrs {
if !al.getInsideAllowList(vpnAddr).Allow(udpAddr) {
return false
}
}
return true
} }
func (al *RemoteAllowList) getInsideAllowList(vpnAddr netip.Addr) *AllowList { func (al *RemoteAllowList) AllowIpV6(vpnIp iputil.VpnIp, hi, lo uint64) bool {
if al == nil {
return true
}
if !al.getInsideAllowList(vpnIp).AllowIpV6(hi, lo) {
return false
}
return al.AllowList.AllowIpV6(hi, lo)
}
func (al *RemoteAllowList) getInsideAllowList(vpnIp iputil.VpnIp) *AllowList {
if al.insideAllowLists != nil { if al.insideAllowLists != nil {
inside, ok := al.insideAllowLists.Lookup(vpnAddr) inside := al.insideAllowLists.MostSpecificContainsIpV4(vpnIp)
if ok { if inside != nil {
return inside return inside.(*AllowList)
} }
} }
return nil return nil
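For context, a hedged sketch of the YAML shape this parser accepts. The `lighthouse.remote_allow_list` and `lighthouse.local_allow_list` key names are assumptions drawn from the example config rather than from this diff; the CIDR-to-bool map and the `interfaces` sub-map correspond to `newAllowList` and `getAllowListInterfaces` above:

```yaml
lighthouse:
  # Underlay IPs we are willing to use when contacting remote hosts
  remote_allow_list:
    0.0.0.0/0: true
    10.0.0.0/8: false
    10.42.42.0/24: true
  # Local addresses/interfaces advertised to the lighthouse
  local_allow_list:
    interfaces:
      docker.*: false
    10.0.0.0/8: false
```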


@@ -1,41 +1,40 @@
package nebula package nebula
import ( import (
"net/netip" "net"
"regexp" "regexp"
"testing" "testing"
"github.com/gaissmai/bart" "github.com/slackhq/nebula/cidr"
"github.com/slackhq/nebula/config" "github.com/slackhq/nebula/config"
"github.com/slackhq/nebula/test" "github.com/slackhq/nebula/util"
"github.com/stretchr/testify/assert" "github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
) )
func TestNewAllowListFromConfig(t *testing.T) { func TestNewAllowListFromConfig(t *testing.T) {
l := test.NewLogger() l := util.NewTestLogger()
c := config.NewC(l) c := config.NewC(l)
c.Settings["allowlist"] = map[string]any{ c.Settings["allowlist"] = map[interface{}]interface{}{
"192.168.0.0": true, "192.168.0.0": true,
} }
r, err := newAllowListFromConfig(c, "allowlist", nil) r, err := newAllowListFromConfig(c, "allowlist", nil)
require.EqualError(t, err, "config `allowlist` has invalid CIDR: 192.168.0.0. netip.ParsePrefix(\"192.168.0.0\"): no '/'") assert.EqualError(t, err, "config `allowlist` has invalid CIDR: 192.168.0.0")
assert.Nil(t, r) assert.Nil(t, r)
c.Settings["allowlist"] = map[string]any{ c.Settings["allowlist"] = map[interface{}]interface{}{
"192.168.0.0/16": "abc", "192.168.0.0/16": "abc",
} }
r, err = newAllowListFromConfig(c, "allowlist", nil) r, err = newAllowListFromConfig(c, "allowlist", nil)
require.EqualError(t, err, "config `allowlist` has invalid value (type string): abc") assert.EqualError(t, err, "config `allowlist` has invalid value (type string): abc")
c.Settings["allowlist"] = map[string]any{ c.Settings["allowlist"] = map[interface{}]interface{}{
"192.168.0.0/16": true, "192.168.0.0/16": true,
"10.0.0.0/8": false, "10.0.0.0/8": false,
} }
r, err = newAllowListFromConfig(c, "allowlist", nil) r, err = newAllowListFromConfig(c, "allowlist", nil)
require.EqualError(t, err, "config `allowlist` contains both true and false rules, but no default set for 0.0.0.0/0") assert.EqualError(t, err, "config `allowlist` contains both true and false rules, but no default set for 0.0.0.0/0")
c.Settings["allowlist"] = map[string]any{ c.Settings["allowlist"] = map[interface{}]interface{}{
"0.0.0.0/0": true, "0.0.0.0/0": true,
"10.0.0.0/8": false, "10.0.0.0/8": false,
"10.42.42.0/24": true, "10.42.42.0/24": true,
@@ -43,9 +42,9 @@ func TestNewAllowListFromConfig(t *testing.T) {
"fd00:fd00::/16": false, "fd00:fd00::/16": false,
} }
r, err = newAllowListFromConfig(c, "allowlist", nil) r, err = newAllowListFromConfig(c, "allowlist", nil)
require.EqualError(t, err, "config `allowlist` contains both true and false rules, but no default set for ::/0") assert.EqualError(t, err, "config `allowlist` contains both true and false rules, but no default set for ::/0")
c.Settings["allowlist"] = map[string]any{ c.Settings["allowlist"] = map[interface{}]interface{}{
"0.0.0.0/0": true, "0.0.0.0/0": true,
"10.0.0.0/8": false, "10.0.0.0/8": false,
"10.42.42.0/24": true, "10.42.42.0/24": true,
@@ -55,7 +54,7 @@ func TestNewAllowListFromConfig(t *testing.T) {
assert.NotNil(t, r) assert.NotNil(t, r)
} }
c.Settings["allowlist"] = map[string]any{ c.Settings["allowlist"] = map[interface{}]interface{}{
"0.0.0.0/0": true, "0.0.0.0/0": true,
"10.0.0.0/8": false, "10.0.0.0/8": false,
"10.42.42.0/24": true, "10.42.42.0/24": true,
@@ -70,25 +69,25 @@ func TestNewAllowListFromConfig(t *testing.T) {
// Test interface names // Test interface names
c.Settings["allowlist"] = map[string]any{ c.Settings["allowlist"] = map[interface{}]interface{}{
"interfaces": map[string]any{ "interfaces": map[interface{}]interface{}{
`docker.*`: "foo", `docker.*`: "foo",
}, },
} }
lr, err := NewLocalAllowListFromConfig(c, "allowlist") lr, err := NewLocalAllowListFromConfig(c, "allowlist")
require.EqualError(t, err, "config `allowlist.interfaces` has invalid value (type string): foo") assert.EqualError(t, err, "config `allowlist.interfaces` has invalid value (type string): foo")
c.Settings["allowlist"] = map[string]any{ c.Settings["allowlist"] = map[interface{}]interface{}{
"interfaces": map[string]any{ "interfaces": map[interface{}]interface{}{
`docker.*`: false, `docker.*`: false,
`eth.*`: true, `eth.*`: true,
}, },
} }
lr, err = NewLocalAllowListFromConfig(c, "allowlist") lr, err = NewLocalAllowListFromConfig(c, "allowlist")
require.EqualError(t, err, "config `allowlist.interfaces` values must all be the same true/false value") assert.EqualError(t, err, "config `allowlist.interfaces` values must all be the same true/false value")
c.Settings["allowlist"] = map[string]any{ c.Settings["allowlist"] = map[interface{}]interface{}{
"interfaces": map[string]any{ "interfaces": map[interface{}]interface{}{
`docker.*`: false, `docker.*`: false,
}, },
} }
@@ -99,30 +98,30 @@ func TestNewAllowListFromConfig(t *testing.T) {
} }
func TestAllowList_Allow(t *testing.T) { func TestAllowList_Allow(t *testing.T) {
assert.True(t, ((*AllowList)(nil)).Allow(netip.MustParseAddr("1.1.1.1"))) assert.Equal(t, true, ((*AllowList)(nil)).Allow(net.ParseIP("1.1.1.1")))
tree := new(bart.Table[bool]) tree := cidr.NewTree6()
tree.Insert(netip.MustParsePrefix("0.0.0.0/0"), true) tree.AddCIDR(cidr.Parse("0.0.0.0/0"), true)
tree.Insert(netip.MustParsePrefix("10.0.0.0/8"), false) tree.AddCIDR(cidr.Parse("10.0.0.0/8"), false)
tree.Insert(netip.MustParsePrefix("10.42.42.42/32"), true) tree.AddCIDR(cidr.Parse("10.42.42.42/32"), true)
tree.Insert(netip.MustParsePrefix("10.42.0.0/16"), true) tree.AddCIDR(cidr.Parse("10.42.0.0/16"), true)
tree.Insert(netip.MustParsePrefix("10.42.42.0/24"), true) tree.AddCIDR(cidr.Parse("10.42.42.0/24"), true)
tree.Insert(netip.MustParsePrefix("10.42.42.0/24"), false) tree.AddCIDR(cidr.Parse("10.42.42.0/24"), false)
tree.Insert(netip.MustParsePrefix("::1/128"), true) tree.AddCIDR(cidr.Parse("::1/128"), true)
tree.Insert(netip.MustParsePrefix("::2/128"), false) tree.AddCIDR(cidr.Parse("::2/128"), false)
al := &AllowList{cidrTree: tree} al := &AllowList{cidrTree: tree}
assert.True(t, al.Allow(netip.MustParseAddr("1.1.1.1"))) assert.Equal(t, true, al.Allow(net.ParseIP("1.1.1.1")))
assert.False(t, al.Allow(netip.MustParseAddr("10.0.0.4"))) assert.Equal(t, false, al.Allow(net.ParseIP("10.0.0.4")))
assert.True(t, al.Allow(netip.MustParseAddr("10.42.42.42"))) assert.Equal(t, true, al.Allow(net.ParseIP("10.42.42.42")))
assert.False(t, al.Allow(netip.MustParseAddr("10.42.42.41"))) assert.Equal(t, false, al.Allow(net.ParseIP("10.42.42.41")))
assert.True(t, al.Allow(netip.MustParseAddr("10.42.0.1"))) assert.Equal(t, true, al.Allow(net.ParseIP("10.42.0.1")))
assert.True(t, al.Allow(netip.MustParseAddr("::1"))) assert.Equal(t, true, al.Allow(net.ParseIP("::1")))
assert.False(t, al.Allow(netip.MustParseAddr("::2"))) assert.Equal(t, false, al.Allow(net.ParseIP("::2")))
} }
func TestLocalAllowList_AllowName(t *testing.T) { func TestLocalAllowList_AllowName(t *testing.T) {
assert.True(t, ((*LocalAllowList)(nil)).AllowName("docker0")) assert.Equal(t, true, ((*LocalAllowList)(nil)).AllowName("docker0"))
rules := []AllowListNameRule{ rules := []AllowListNameRule{
{Name: regexp.MustCompile("^docker.*$"), Allow: false}, {Name: regexp.MustCompile("^docker.*$"), Allow: false},
@@ -130,9 +129,9 @@ func TestLocalAllowList_AllowName(t *testing.T) {
} }
al := &LocalAllowList{nameRules: rules} al := &LocalAllowList{nameRules: rules}
assert.False(t, al.AllowName("docker0")) assert.Equal(t, false, al.AllowName("docker0"))
assert.False(t, al.AllowName("tun0")) assert.Equal(t, false, al.AllowName("tun0"))
assert.True(t, al.AllowName("eth0")) assert.Equal(t, true, al.AllowName("eth0"))
rules = []AllowListNameRule{ rules = []AllowListNameRule{
{Name: regexp.MustCompile("^eth.*$"), Allow: true}, {Name: regexp.MustCompile("^eth.*$"), Allow: true},
@@ -140,7 +139,7 @@ func TestLocalAllowList_AllowName(t *testing.T) {
} }
al = &LocalAllowList{nameRules: rules} al = &LocalAllowList{nameRules: rules}
assert.False(t, al.AllowName("docker0")) assert.Equal(t, false, al.AllowName("docker0"))
assert.True(t, al.AllowName("eth0")) assert.Equal(t, true, al.AllowName("eth0"))
assert.True(t, al.AllowName("ens5")) assert.Equal(t, true, al.AllowName("ens5"))
} }


@@ -5,7 +5,6 @@ import (
"github.com/sirupsen/logrus" "github.com/sirupsen/logrus"
) )
// TODO: Pretty sure this is just all sorts of racy now, we need it to be atomic
type Bits struct { type Bits struct {
length uint64 length uint64
current uint64 current uint64
@@ -44,7 +43,7 @@ func (b *Bits) Check(l logrus.FieldLogger, i uint64) bool {
} }
// Not within the window // Not within the window
l.Error("rejected a packet (top) %d %d\n", b.current, i) l.Debugf("rejected a packet (top) %d %d\n", b.current, i)
return false return false
} }


@@ -3,12 +3,12 @@ package nebula
import ( import (
"testing" "testing"
"github.com/slackhq/nebula/test" "github.com/slackhq/nebula/util"
"github.com/stretchr/testify/assert" "github.com/stretchr/testify/assert"
) )
func TestBits(t *testing.T) { func TestBits(t *testing.T) {
l := test.NewLogger() l := util.NewTestLogger()
b := NewBits(10) b := NewBits(10)
// make sure it is the right size // make sure it is the right size
@@ -76,7 +76,7 @@ func TestBits(t *testing.T) {
} }
func TestBitsDupeCounter(t *testing.T) { func TestBitsDupeCounter(t *testing.T) {
l := test.NewLogger() l := util.NewTestLogger()
b := NewBits(10) b := NewBits(10)
b.lostCounter.Clear() b.lostCounter.Clear()
b.dupeCounter.Clear() b.dupeCounter.Clear()
@@ -101,7 +101,7 @@ func TestBitsDupeCounter(t *testing.T) {
} }
func TestBitsOutOfWindowCounter(t *testing.T) { func TestBitsOutOfWindowCounter(t *testing.T) {
l := test.NewLogger() l := util.NewTestLogger()
b := NewBits(10) b := NewBits(10)
b.lostCounter.Clear() b.lostCounter.Clear()
b.dupeCounter.Clear() b.dupeCounter.Clear()
@@ -131,7 +131,7 @@ func TestBitsOutOfWindowCounter(t *testing.T) {
} }
func TestBitsLostCounter(t *testing.T) { func TestBitsLostCounter(t *testing.T) {
l := test.NewLogger() l := util.NewTestLogger()
b := NewBits(10) b := NewBits(10)
b.lostCounter.Clear() b.lostCounter.Clear()
b.dupeCounter.Clear() b.dupeCounter.Clear()


@@ -1,8 +0,0 @@
//go:build boringcrypto
// +build boringcrypto
package nebula
import "crypto/boring"
var boringEnabled = boring.Enabled


@@ -1,163 +0,0 @@
package nebula
import (
"encoding/binary"
"fmt"
"math"
"net"
"net/netip"
"strconv"
"github.com/gaissmai/bart"
"github.com/slackhq/nebula/config"
)
// This allows us to "guess" what the remote might be for a host while we wait
// for the lighthouse response. See "lighthouse.calculated_remotes" in the
// example config file.
type calculatedRemote struct {
ipNet netip.Prefix
mask netip.Prefix
port uint32
}
func newCalculatedRemote(cidr, maskCidr netip.Prefix, port int) (*calculatedRemote, error) {
if maskCidr.Addr().BitLen() != cidr.Addr().BitLen() {
return nil, fmt.Errorf("invalid mask: %s for cidr: %s", maskCidr, cidr)
}
masked := maskCidr.Masked()
if port < 0 || port > math.MaxUint16 {
return nil, fmt.Errorf("invalid port: %d", port)
}
return &calculatedRemote{
ipNet: maskCidr,
mask: masked,
port: uint32(port),
}, nil
}
func (c *calculatedRemote) String() string {
return fmt.Sprintf("CalculatedRemote(mask=%v port=%d)", c.ipNet, c.port)
}
func (c *calculatedRemote) ApplyV4(addr netip.Addr) *V4AddrPort {
// Combine the masked bytes of the "mask" IP with the unmasked bytes of the overlay IP
maskb := net.CIDRMask(c.mask.Bits(), c.mask.Addr().BitLen())
mask := binary.BigEndian.Uint32(maskb[:])
b := c.mask.Addr().As4()
maskAddr := binary.BigEndian.Uint32(b[:])
b = addr.As4()
intAddr := binary.BigEndian.Uint32(b[:])
return &V4AddrPort{(maskAddr & mask) | (intAddr & ^mask), c.port}
}
func (c *calculatedRemote) ApplyV6(addr netip.Addr) *V6AddrPort {
mask := net.CIDRMask(c.mask.Bits(), c.mask.Addr().BitLen())
maskAddr := c.mask.Addr().As16()
calcAddr := addr.As16()
ap := V6AddrPort{Port: c.port}
maskb := binary.BigEndian.Uint64(mask[:8])
maskAddrb := binary.BigEndian.Uint64(maskAddr[:8])
calcAddrb := binary.BigEndian.Uint64(calcAddr[:8])
ap.Hi = (maskAddrb & maskb) | (calcAddrb & ^maskb)
maskb = binary.BigEndian.Uint64(mask[8:])
maskAddrb = binary.BigEndian.Uint64(maskAddr[8:])
calcAddrb = binary.BigEndian.Uint64(calcAddr[8:])
ap.Lo = (maskAddrb & maskb) | (calcAddrb & ^maskb)
return &ap
}
func NewCalculatedRemotesFromConfig(c *config.C, k string) (*bart.Table[[]*calculatedRemote], error) {
value := c.Get(k)
if value == nil {
return nil, nil
}
calculatedRemotes := new(bart.Table[[]*calculatedRemote])
rawMap, ok := value.(map[string]any)
if !ok {
return nil, fmt.Errorf("config `%s` has invalid type: %T", k, value)
}
for rawCIDR, rawValue := range rawMap {
cidr, err := netip.ParsePrefix(rawCIDR)
if err != nil {
return nil, fmt.Errorf("config `%s` has invalid CIDR: %s", k, rawCIDR)
}
entry, err := newCalculatedRemotesListFromConfig(cidr, rawValue)
if err != nil {
return nil, fmt.Errorf("config '%s.%s': %w", k, rawCIDR, err)
}
calculatedRemotes.Insert(cidr, entry)
}
return calculatedRemotes, nil
}
func newCalculatedRemotesListFromConfig(cidr netip.Prefix, raw any) ([]*calculatedRemote, error) {
rawList, ok := raw.([]any)
if !ok {
return nil, fmt.Errorf("calculated_remotes entry has invalid type: %T", raw)
}
var l []*calculatedRemote
for _, e := range rawList {
c, err := newCalculatedRemotesEntryFromConfig(cidr, e)
if err != nil {
return nil, fmt.Errorf("calculated_remotes entry: %w", err)
}
l = append(l, c)
}
return l, nil
}
func newCalculatedRemotesEntryFromConfig(cidr netip.Prefix, raw any) (*calculatedRemote, error) {
rawMap, ok := raw.(map[string]any)
if !ok {
return nil, fmt.Errorf("invalid type: %T", raw)
}
rawValue := rawMap["mask"]
if rawValue == nil {
return nil, fmt.Errorf("missing mask: %v", rawMap)
}
rawMask, ok := rawValue.(string)
if !ok {
return nil, fmt.Errorf("invalid mask (type %T): %v", rawValue, rawValue)
}
maskCidr, err := netip.ParsePrefix(rawMask)
if err != nil {
return nil, fmt.Errorf("invalid mask: %s", rawMask)
}
var port int
rawValue = rawMap["port"]
if rawValue == nil {
return nil, fmt.Errorf("missing port: %v", rawMap)
}
switch v := rawValue.(type) {
case int:
port = v
case string:
port, err = strconv.Atoi(v)
if err != nil {
return nil, fmt.Errorf("invalid port: %s: %w", v, err)
}
default:
return nil, fmt.Errorf("invalid port (type %T): %v", rawValue, rawValue)
}
return newCalculatedRemote(cidr, maskCidr, port)
}
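A sketch of the config this loader expects under `lighthouse.calculated_remotes` (the key is named in the comment at the top of this file; the sample addresses are illustrative and mirror the masking behavior of `ApplyV4`):

```yaml
lighthouse:
  calculated_remotes:
    # For hosts whose overlay IP falls in 10.0.10.0/24, guess an underlay
    # address by overlaying the host bits onto 192.168.1.0/24, port 4242
    10.0.10.0/24:
      - mask: 192.168.1.0/24
        port: 4242
```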


@@ -1,81 +0,0 @@
package nebula
import (
"net/netip"
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestCalculatedRemoteApply(t *testing.T) {
// Test v4 addresses
ipNet := netip.MustParsePrefix("192.168.1.0/24")
c, err := newCalculatedRemote(ipNet, ipNet, 4242)
require.NoError(t, err)
input, err := netip.ParseAddr("10.0.10.182")
require.NoError(t, err)
expected, err := netip.ParseAddr("192.168.1.182")
require.NoError(t, err)
assert.Equal(t, netAddrToProtoV4AddrPort(expected, 4242), c.ApplyV4(input))
// Test v6 addresses
ipNet = netip.MustParsePrefix("ffff:ffff:ffff:ffff::0/64")
c, err = newCalculatedRemote(ipNet, ipNet, 4242)
require.NoError(t, err)
input, err = netip.ParseAddr("beef:beef:beef:beef:beef:beef:beef:beef")
require.NoError(t, err)
expected, err = netip.ParseAddr("ffff:ffff:ffff:ffff:beef:beef:beef:beef")
require.NoError(t, err)
assert.Equal(t, netAddrToProtoV6AddrPort(expected, 4242), c.ApplyV6(input))
// Test v6 addresses part 2
ipNet = netip.MustParsePrefix("ffff:ffff:ffff:ffff:ffff::0/80")
c, err = newCalculatedRemote(ipNet, ipNet, 4242)
require.NoError(t, err)
input, err = netip.ParseAddr("beef:beef:beef:beef:beef:beef:beef:beef")
require.NoError(t, err)
expected, err = netip.ParseAddr("ffff:ffff:ffff:ffff:ffff:beef:beef:beef")
require.NoError(t, err)
assert.Equal(t, netAddrToProtoV6AddrPort(expected, 4242), c.ApplyV6(input))
// Test v6 addresses part 2
ipNet = netip.MustParsePrefix("ffff:ffff:ffff::0/48")
c, err = newCalculatedRemote(ipNet, ipNet, 4242)
require.NoError(t, err)
input, err = netip.ParseAddr("beef:beef:beef:beef:beef:beef:beef:beef")
require.NoError(t, err)
expected, err = netip.ParseAddr("ffff:ffff:ffff:beef:beef:beef:beef:beef")
require.NoError(t, err)
assert.Equal(t, netAddrToProtoV6AddrPort(expected, 4242), c.ApplyV6(input))
}
func Test_newCalculatedRemote(t *testing.T) {
c, err := newCalculatedRemote(netip.MustParsePrefix("1::1/128"), netip.MustParsePrefix("1.0.0.0/32"), 4242)
require.EqualError(t, err, "invalid mask: 1.0.0.0/32 for cidr: 1::1/128")
require.Nil(t, c)
c, err = newCalculatedRemote(netip.MustParsePrefix("1.0.0.0/32"), netip.MustParsePrefix("1::1/128"), 4242)
require.EqualError(t, err, "invalid mask: 1::1/128 for cidr: 1.0.0.0/32")
require.Nil(t, c)
c, err = newCalculatedRemote(netip.MustParsePrefix("1.0.0.0/32"), netip.MustParsePrefix("1.0.0.0/32"), 4242)
require.NoError(t, err)
require.NotNil(t, c)
c, err = newCalculatedRemote(netip.MustParsePrefix("1::1/128"), netip.MustParsePrefix("1::1/128"), 4242)
require.NoError(t, err)
require.NotNil(t, c)
}

cert.go

@@ -0,0 +1,165 @@
package nebula
import (
"errors"
"fmt"
"io/ioutil"
"strings"
"time"
"github.com/sirupsen/logrus"
"github.com/slackhq/nebula/cert"
"github.com/slackhq/nebula/config"
)
type CertState struct {
certificate *cert.NebulaCertificate
rawCertificate []byte
rawCertificateNoKey []byte
publicKey []byte
privateKey []byte
}
func NewCertState(certificate *cert.NebulaCertificate, privateKey []byte) (*CertState, error) {
// Marshal the certificate to ensure it is valid
rawCertificate, err := certificate.Marshal()
if err != nil {
return nil, fmt.Errorf("invalid nebula certificate on interface: %s", err)
}
publicKey := certificate.Details.PublicKey
cs := &CertState{
rawCertificate: rawCertificate,
certificate: certificate, // PublicKey has been set to nil above
privateKey: privateKey,
publicKey: publicKey,
}
cs.certificate.Details.PublicKey = nil
rawCertNoKey, err := cs.certificate.Marshal()
if err != nil {
return nil, fmt.Errorf("error marshalling certificate no key: %s", err)
}
cs.rawCertificateNoKey = rawCertNoKey
// put public key back
cs.certificate.Details.PublicKey = cs.publicKey
return cs, nil
}
func NewCertStateFromConfig(c *config.C) (*CertState, error) {
var pemPrivateKey []byte
var err error
privPathOrPEM := c.GetString("pki.key", "")
if privPathOrPEM == "" {
// Support backwards compat with the old x509
//TODO: remove after this is rolled out everywhere - NB 2018/02/23
privPathOrPEM = c.GetString("x509.key", "")
}
if privPathOrPEM == "" {
return nil, errors.New("no pki.key path or PEM data provided")
}
if strings.Contains(privPathOrPEM, "-----BEGIN") {
pemPrivateKey = []byte(privPathOrPEM)
privPathOrPEM = "<inline>"
} else {
pemPrivateKey, err = ioutil.ReadFile(privPathOrPEM)
if err != nil {
return nil, fmt.Errorf("unable to read pki.key file %s: %s", privPathOrPEM, err)
}
}
rawKey, _, err := cert.UnmarshalX25519PrivateKey(pemPrivateKey)
if err != nil {
return nil, fmt.Errorf("error while unmarshaling pki.key %s: %s", privPathOrPEM, err)
}
var rawCert []byte
pubPathOrPEM := c.GetString("pki.cert", "")
if pubPathOrPEM == "" {
// Support backwards compat with the old x509
//TODO: remove after this is rolled out everywhere - NB 2018/02/23
pubPathOrPEM = c.GetString("x509.cert", "")
}
if pubPathOrPEM == "" {
return nil, errors.New("no pki.cert path or PEM data provided")
}
if strings.Contains(pubPathOrPEM, "-----BEGIN") {
rawCert = []byte(pubPathOrPEM)
pubPathOrPEM = "<inline>"
} else {
rawCert, err = ioutil.ReadFile(pubPathOrPEM)
if err != nil {
return nil, fmt.Errorf("unable to read pki.cert file %s: %s", pubPathOrPEM, err)
}
}
nebulaCert, _, err := cert.UnmarshalNebulaCertificateFromPEM(rawCert)
if err != nil {
return nil, fmt.Errorf("error while unmarshaling pki.cert %s: %s", pubPathOrPEM, err)
}
if nebulaCert.Expired(time.Now()) {
return nil, fmt.Errorf("nebula certificate for this host is expired")
}
if len(nebulaCert.Details.Ips) == 0 {
return nil, fmt.Errorf("no IPs encoded in certificate")
}
if err = nebulaCert.VerifyPrivateKey(rawKey); err != nil {
return nil, fmt.Errorf("private key is not a pair with public key in nebula cert")
}
return NewCertState(nebulaCert, rawKey)
}
func loadCAFromConfig(l *logrus.Logger, c *config.C) (*cert.NebulaCAPool, error) {
var rawCA []byte
var err error
caPathOrPEM := c.GetString("pki.ca", "")
if caPathOrPEM == "" {
// Support backwards compat with the old x509
//TODO: remove after this is rolled out everywhere - NB 2018/02/23
caPathOrPEM = c.GetString("x509.ca", "")
}
if caPathOrPEM == "" {
return nil, errors.New("no pki.ca path or PEM data provided")
}
if strings.Contains(caPathOrPEM, "-----BEGIN") {
rawCA = []byte(caPathOrPEM)
caPathOrPEM = "<inline>"
} else {
rawCA, err = ioutil.ReadFile(caPathOrPEM)
if err != nil {
return nil, fmt.Errorf("unable to read pki.ca file %s: %s", caPathOrPEM, err)
}
}
CAs, err := cert.NewCAPoolFromBytes(rawCA)
if err != nil {
return nil, fmt.Errorf("error while adding CA certificate to CA trust store: %s", err)
}
for _, fp := range c.GetStringSlice("pki.blocklist", []string{}) {
l.WithField("fingerprint", fp).Infof("Blocklisting cert")
CAs.BlocklistFingerprint(fp)
}
// Support deprecated config for at least one minor release to allow for migrations
for _, fp := range c.GetStringSlice("pki.blacklist", []string{}) {
l.WithField("fingerprint", fp).Infof("Blocklisting cert")
l.Warn("pki.blacklist is deprecated and will not be supported in a future release. Please migrate your config to use pki.blocklist")
CAs.BlocklistFingerprint(fp)
}
return CAs, nil
}
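Taken together, the two loaders above give a host its own keypair and its set of trusted CAs. A minimal sketch of how they might be wired together at startup; config.NewC and (*config.C).Load are assumed to exist with these shapes, and the config path is illustrative only:

// Hypothetical wiring in package nebula (reuses the imports of cert.go above);
// config.NewC and (*config.C).Load are assumptions, the path is a placeholder.
func loadPKIExample(l *logrus.Logger) error {
	c := config.NewC(l)
	if err := c.Load("/etc/nebula/config.yml"); err != nil {
		return fmt.Errorf("failed to load config: %s", err)
	}

	// Reads pki.key and pki.cert (either a file path or inline PEM) and checks
	// that the keypair matches and the certificate is not expired.
	cs, err := NewCertStateFromConfig(c)
	if err != nil {
		return err
	}

	// Reads pki.ca plus any pki.blocklist entries into the trust store.
	caPool, err := loadCAFromConfig(l, c)
	if err != nil {
		return err
	}

	l.WithField("cert", cs.certificate.Details.Name).
		WithField("trustedCAs", len(caPool.CAs)).
		Info("pki loaded")
	return nil
}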

View File

@@ -1,7 +1,7 @@
GO111MODULE = on
export GO111MODULE
cert_v1.pb.go: cert_v1.proto .FORCE
go build google.golang.org/protobuf/cmd/protoc-gen-go
PATH="$(CURDIR):$(PATH)" protoc --go_out=. --go_opt=paths=source_relative $<
rm protoc-gen-go
GO111MODULE = on
export GO111MODULE
cert.pb.go: cert.proto .FORCE
go build google.golang.org/protobuf/cmd/protoc-gen-go
PATH="$(CURDIR):$(PATH)" protoc --go_out=. --go_opt=paths=source_relative $<
rm protoc-gen-go

View File

@@ -2,25 +2,14 @@
This is a library for interacting with `nebula` style certificates and authorities.
There are now 2 versions of `nebula` certificates:
## v1
This version is deprecated.
A `protobuf` definition of the certificate format is included at `cert_v1.proto`
To compile the definition you will need `protoc` installed.
To compile for `go` with the same version of protobuf specified in go.mod:
```bash
make proto
```
## v2
This is the latest version which uses asn.1 DER encoding. It can support ipv4 and ipv6 and tolerate
future certificate changes better than v1.
`cert_v2.asn1` defines the wire format and can be used to compile marshalers.
This is a library for interacting with `nebula` style certificates and authorities.
A `protobuf` definition of the certificate format is also included
### Compiling the protobuf definition
Make sure you have `protoc` installed.
To compile for `go` with the same version of protobuf specified in go.mod:
```bash
make
```

View File

@@ -1,52 +0,0 @@
package cert
import (
"golang.org/x/crypto/cryptobyte"
"golang.org/x/crypto/cryptobyte/asn1"
)
// readOptionalASN1Boolean reads an asn.1 boolean with a specific tag instead of an asn.1 tag wrapping a boolean with a value
// https://github.com/golang/go/issues/64811#issuecomment-1944446920
func readOptionalASN1Boolean(b *cryptobyte.String, out *bool, tag asn1.Tag, defaultValue bool) bool {
var present bool
var child cryptobyte.String
if !b.ReadOptionalASN1(&child, &present, tag) {
return false
}
if !present {
*out = defaultValue
return true
}
// Ensure we have 1 byte
if len(child) == 1 {
*out = child[0] > 0
return true
}
return false
}
// readOptionalASN1Byte reads an asn.1 uint8 with a specific tag instead of an asn.1 tag wrapping a uint8 with a value
// Similar issue as with readOptionalASN1Boolean
func readOptionalASN1Byte(b *cryptobyte.String, out *byte, tag asn1.Tag, defaultValue byte) bool {
var present bool
var child cryptobyte.String
if !b.ReadOptionalASN1(&child, &present, tag) {
return false
}
if !present {
*out = defaultValue
return true
}
// Ensure we have 1 byte
if len(child) == 1 {
*out = child[0]
return true
}
return false
}

120
cert/ca.go Normal file
View File

@@ -0,0 +1,120 @@
package cert
import (
"fmt"
"strings"
"time"
)
type NebulaCAPool struct {
CAs map[string]*NebulaCertificate
certBlocklist map[string]struct{}
}
// NewCAPool creates a CAPool
func NewCAPool() *NebulaCAPool {
ca := NebulaCAPool{
CAs: make(map[string]*NebulaCertificate),
certBlocklist: make(map[string]struct{}),
}
return &ca
}
func NewCAPoolFromBytes(caPEMs []byte) (*NebulaCAPool, error) {
pool := NewCAPool()
var err error
for {
caPEMs, err = pool.AddCACertificate(caPEMs)
if err != nil {
return nil, err
}
if caPEMs == nil || len(caPEMs) == 0 || strings.TrimSpace(string(caPEMs)) == "" {
break
}
}
return pool, nil
}
// AddCACertificate verifies a Nebula CA certificate and adds it to the pool
// Only the first pem encoded object will be consumed, any remaining bytes are returned.
// Parsed certificates will be verified and must be a CA
func (ncp *NebulaCAPool) AddCACertificate(pemBytes []byte) ([]byte, error) {
c, pemBytes, err := UnmarshalNebulaCertificateFromPEM(pemBytes)
if err != nil {
return pemBytes, err
}
if !c.Details.IsCA {
return pemBytes, fmt.Errorf("provided certificate was not a CA; %s", c.Details.Name)
}
if !c.CheckSignature(c.Details.PublicKey) {
return pemBytes, fmt.Errorf("provided certificate was not self signed; %s", c.Details.Name)
}
if c.Expired(time.Now()) {
return pemBytes, fmt.Errorf("provided CA certificate is expired; %s", c.Details.Name)
}
sum, err := c.Sha256Sum()
if err != nil {
return pemBytes, fmt.Errorf("could not calculate shasum for provided CA; error: %s; %s", err, c.Details.Name)
}
ncp.CAs[sum] = c
return pemBytes, nil
}
// BlocklistFingerprint adds a cert fingerprint to the blocklist
func (ncp *NebulaCAPool) BlocklistFingerprint(f string) {
ncp.certBlocklist[f] = struct{}{}
}
// ResetCertBlocklist removes all previously blocklisted cert fingerprints
func (ncp *NebulaCAPool) ResetCertBlocklist() {
ncp.certBlocklist = make(map[string]struct{})
}
// IsBlocklisted returns true if the fingerprint fails to generate or has been explicitly blocklisted
func (ncp *NebulaCAPool) IsBlocklisted(c *NebulaCertificate) bool {
h, err := c.Sha256Sum()
if err != nil {
return true
}
if _, ok := ncp.certBlocklist[h]; ok {
return true
}
return false
}
// GetCAForCert attempts to return the signing certificate for the provided certificate.
// No signature validation is performed
func (ncp *NebulaCAPool) GetCAForCert(c *NebulaCertificate) (*NebulaCertificate, error) {
if c.Details.Issuer == "" {
return nil, fmt.Errorf("no issuer in certificate")
}
signer, ok := ncp.CAs[c.Details.Issuer]
if ok {
return signer, nil
}
return nil, fmt.Errorf("could not find ca for the certificate")
}
// GetFingerprints returns an array of trusted CA fingerprints
func (ncp *NebulaCAPool) GetFingerprints() []string {
fp := make([]string, len(ncp.CAs))
i := 0
for k := range ncp.CAs {
fp[i] = k
i++
}
return fp
}

View File

@@ -1,296 +0,0 @@
package cert
import (
"errors"
"fmt"
"net/netip"
"slices"
"strings"
"time"
)
type CAPool struct {
CAs map[string]*CachedCertificate
certBlocklist map[string]struct{}
}
// NewCAPool creates an empty CAPool
func NewCAPool() *CAPool {
ca := CAPool{
CAs: make(map[string]*CachedCertificate),
certBlocklist: make(map[string]struct{}),
}
return &ca
}
// NewCAPoolFromPEM will create a new CA pool from the provided
// input bytes, which must be a PEM-encoded set of nebula certificates.
// If the pool contains any expired certificates, an ErrExpired will be
// returned along with the pool. The caller must handle any such errors.
func NewCAPoolFromPEM(caPEMs []byte) (*CAPool, error) {
pool := NewCAPool()
var err error
var expired bool
for {
caPEMs, err = pool.AddCAFromPEM(caPEMs)
if errors.Is(err, ErrExpired) {
expired = true
err = nil
}
if err != nil {
return nil, err
}
if len(caPEMs) == 0 || strings.TrimSpace(string(caPEMs)) == "" {
break
}
}
if expired {
return pool, ErrExpired
}
return pool, nil
}
// AddCAFromPEM verifies a Nebula CA certificate and adds it to the pool.
// Only the first pem encoded object will be consumed, any remaining bytes are returned.
// Parsed certificates will be verified and must be a CA
func (ncp *CAPool) AddCAFromPEM(pemBytes []byte) ([]byte, error) {
c, pemBytes, err := UnmarshalCertificateFromPEM(pemBytes)
if err != nil {
return pemBytes, err
}
err = ncp.AddCA(c)
if err != nil {
return pemBytes, err
}
return pemBytes, nil
}
// AddCA verifies a Nebula CA certificate and adds it to the pool.
func (ncp *CAPool) AddCA(c Certificate) error {
if !c.IsCA() {
return fmt.Errorf("%s: %w", c.Name(), ErrNotCA)
}
if !c.CheckSignature(c.PublicKey()) {
return fmt.Errorf("%s: %w", c.Name(), ErrNotSelfSigned)
}
sum, err := c.Fingerprint()
if err != nil {
return fmt.Errorf("could not calculate fingerprint for provided CA; error: %w; %s", err, c.Name())
}
cc := &CachedCertificate{
Certificate: c,
Fingerprint: sum,
InvertedGroups: make(map[string]struct{}),
}
for _, g := range c.Groups() {
cc.InvertedGroups[g] = struct{}{}
}
ncp.CAs[sum] = cc
if c.Expired(time.Now()) {
return fmt.Errorf("%s: %w", c.Name(), ErrExpired)
}
return nil
}
// BlocklistFingerprint adds a cert fingerprint to the blocklist
func (ncp *CAPool) BlocklistFingerprint(f string) {
ncp.certBlocklist[f] = struct{}{}
}
// ResetCertBlocklist removes all previously blocklisted cert fingerprints
func (ncp *CAPool) ResetCertBlocklist() {
ncp.certBlocklist = make(map[string]struct{})
}
// IsBlocklisted tests the provided fingerprint against the pool's blocklist.
// Returns true if the fingerprint is blocked.
func (ncp *CAPool) IsBlocklisted(fingerprint string) bool {
if _, ok := ncp.certBlocklist[fingerprint]; ok {
return true
}
return false
}
// VerifyCertificate verifies the certificate is valid and is signed by a trusted CA in the pool.
// If the certificate is valid then the returned CachedCertificate can be used in subsequent verification attempts
// to increase performance.
func (ncp *CAPool) VerifyCertificate(now time.Time, c Certificate) (*CachedCertificate, error) {
if c == nil {
return nil, fmt.Errorf("no certificate")
}
fp, err := c.Fingerprint()
if err != nil {
return nil, fmt.Errorf("could not calculate fingerprint to verify: %w", err)
}
signer, err := ncp.verify(c, now, fp, "")
if err != nil {
return nil, err
}
cc := CachedCertificate{
Certificate: c,
InvertedGroups: make(map[string]struct{}),
Fingerprint: fp,
signerFingerprint: signer.Fingerprint,
}
for _, g := range c.Groups() {
cc.InvertedGroups[g] = struct{}{}
}
return &cc, nil
}
// VerifyCachedCertificate is the same as VerifyCertificate other than it operates on a pre-verified structure and
// is a cheaper operation to perform as a result.
func (ncp *CAPool) VerifyCachedCertificate(now time.Time, c *CachedCertificate) error {
_, err := ncp.verify(c.Certificate, now, c.Fingerprint, c.signerFingerprint)
return err
}
func (ncp *CAPool) verify(c Certificate, now time.Time, certFp string, signerFp string) (*CachedCertificate, error) {
if ncp.IsBlocklisted(certFp) {
return nil, ErrBlockListed
}
signer, err := ncp.GetCAForCert(c)
if err != nil {
return nil, err
}
if signer.Certificate.Expired(now) {
return nil, ErrRootExpired
}
if c.Expired(now) {
return nil, ErrExpired
}
// If we are checking a cached certificate then we can bail early here
// Either the root is no longer trusted or everything is fine
if len(signerFp) > 0 {
if signerFp != signer.Fingerprint {
return nil, ErrFingerprintMismatch
}
return signer, nil
}
if !c.CheckSignature(signer.Certificate.PublicKey()) {
return nil, ErrSignatureMismatch
}
err = CheckCAConstraints(signer.Certificate, c)
if err != nil {
return nil, err
}
return signer, nil
}
// GetCAForCert attempts to return the signing certificate for the provided certificate.
// No signature validation is performed
func (ncp *CAPool) GetCAForCert(c Certificate) (*CachedCertificate, error) {
issuer := c.Issuer()
if issuer == "" {
return nil, fmt.Errorf("no issuer in certificate")
}
signer, ok := ncp.CAs[issuer]
if ok {
return signer, nil
}
return nil, ErrCaNotFound
}
// GetFingerprints returns an array of trusted CA fingerprints
func (ncp *CAPool) GetFingerprints() []string {
fp := make([]string, len(ncp.CAs))
i := 0
for k := range ncp.CAs {
fp[i] = k
i++
}
return fp
}
// CheckCAConstraints returns an error if the sub certificate violates constraints present in the signer certificate.
func CheckCAConstraints(signer Certificate, sub Certificate) error {
return checkCAConstraints(signer, sub.NotBefore(), sub.NotAfter(), sub.Groups(), sub.Networks(), sub.UnsafeNetworks())
}
// checkCAConstraints is a very generic function allowing both Certificates and TBSCertificates to be tested.
func checkCAConstraints(signer Certificate, notBefore, notAfter time.Time, groups []string, networks, unsafeNetworks []netip.Prefix) error {
// Make sure this cert isn't valid after the root
if notAfter.After(signer.NotAfter()) {
return fmt.Errorf("certificate expires after signing certificate")
}
// Make sure this cert wasn't valid before the root
if notBefore.Before(signer.NotBefore()) {
return fmt.Errorf("certificate is valid before the signing certificate")
}
// If the signer has a limited set of groups make sure the cert only contains a subset
signerGroups := signer.Groups()
if len(signerGroups) > 0 {
for _, g := range groups {
if !slices.Contains(signerGroups, g) {
return fmt.Errorf("certificate contained a group not present on the signing ca: %s", g)
}
}
}
// If the signer has a limited set of ip ranges to issue from make sure the cert only contains a subset
signingNetworks := signer.Networks()
if len(signingNetworks) > 0 {
for _, certNetwork := range networks {
found := false
for _, signingNetwork := range signingNetworks {
if signingNetwork.Contains(certNetwork.Addr()) && signingNetwork.Bits() <= certNetwork.Bits() {
found = true
break
}
}
if !found {
return fmt.Errorf("certificate contained a network assignment outside the limitations of the signing ca: %s", certNetwork.String())
}
}
}
// If the signer has a limited set of subnet ranges to issue from make sure the cert only contains a subset
signingUnsafeNetworks := signer.UnsafeNetworks()
if len(signingUnsafeNetworks) > 0 {
for _, certUnsafeNetwork := range unsafeNetworks {
found := false
for _, caNetwork := range signingUnsafeNetworks {
if caNetwork.Contains(certUnsafeNetwork.Addr()) && caNetwork.Bits() <= certUnsafeNetwork.Bits() {
found = true
break
}
}
if !found {
return fmt.Errorf("certificate contained an unsafe network assignment outside the limitations of the signing ca: %s", certUnsafeNetwork.String())
}
}
}
return nil
}

View File

@@ -1,560 +0,0 @@
package cert
import (
"net/netip"
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestNewCAPoolFromBytes(t *testing.T) {
noNewLines := `
# Current provisional, Remove once everything moves over to the real root.
-----BEGIN NEBULA CERTIFICATE-----
Cj4KDm5lYnVsYSByb290IGNhKM0cMM24zPCvBzogV24YEw5YiqeI/oYo8XXFsoo+
PBmiOafNJhLacf9rsspAARJAz9OAnh8TKAUKix1kKVMyQU4iM3LsFfZRf6ODWXIf
2qWMpB6fpd3PSoVYziPoOt2bIHIFLlgRLPJz3I3xBEdBCQ==
-----END NEBULA CERTIFICATE-----
# root-ca01
-----BEGIN NEBULA CERTIFICATE-----
CkEKEW5lYnVsYSByb290IGNhIDAxKM0cMM24zPCvBzogPzbWTxt8ZgXPQEwup7Br
BrtIt1O0q5AuTRT3+t2x1VJAARJAZ+2ib23qBXjdy49oU1YysrwuKkWWKrtJ7Jye
rFBQpDXikOukhQD/mfkloFwJ+Yjsfru7IpTN4ZfjXL+kN/2sCA==
-----END NEBULA CERTIFICATE-----
`
withNewLines := `
# Current provisional, Remove once everything moves over to the real root.
-----BEGIN NEBULA CERTIFICATE-----
Cj4KDm5lYnVsYSByb290IGNhKM0cMM24zPCvBzogV24YEw5YiqeI/oYo8XXFsoo+
PBmiOafNJhLacf9rsspAARJAz9OAnh8TKAUKix1kKVMyQU4iM3LsFfZRf6ODWXIf
2qWMpB6fpd3PSoVYziPoOt2bIHIFLlgRLPJz3I3xBEdBCQ==
-----END NEBULA CERTIFICATE-----
# root-ca01
-----BEGIN NEBULA CERTIFICATE-----
CkEKEW5lYnVsYSByb290IGNhIDAxKM0cMM24zPCvBzogPzbWTxt8ZgXPQEwup7Br
BrtIt1O0q5AuTRT3+t2x1VJAARJAZ+2ib23qBXjdy49oU1YysrwuKkWWKrtJ7Jye
rFBQpDXikOukhQD/mfkloFwJ+Yjsfru7IpTN4ZfjXL+kN/2sCA==
-----END NEBULA CERTIFICATE-----
`
expired := `
# expired certificate
-----BEGIN NEBULA CERTIFICATE-----
CjMKB2V4cGlyZWQozRwwzRw6ICJSG94CqX8wn5I65Pwn25V6HftVfWeIySVtp2DA
7TY/QAESQMaAk5iJT5EnQwK524ZaaHGEJLUqqbh5yyOHhboIGiVTWkFeH3HccTW8
Tq5a8AyWDQdfXbtEZ1FwabeHfH5Asw0=
-----END NEBULA CERTIFICATE-----
`
p256 := `
# p256 certificate
-----BEGIN NEBULA CERTIFICATE-----
CmQKEG5lYnVsYSBQMjU2IHRlc3QozRwwzbjM8K8HOkEEdrmmg40zQp44AkMq6DZp
k+coOv04r+zh33ISyhbsafnYduN17p2eD7CmHvHuerguXD9f32gcxo/KsFCKEjMe
+0ABoAYBEkcwRQIgVoTg38L7uWku9xQgsr06kxZ/viQLOO/w1Qj1vFUEnhcCIQCq
75SjTiV92kv/1GcbT3wWpAZQQDBiUHVMVmh1822szA==
-----END NEBULA CERTIFICATE-----
`
rootCA := certificateV1{
details: detailsV1{
name: "nebula root ca",
},
}
rootCA01 := certificateV1{
details: detailsV1{
name: "nebula root ca 01",
},
}
rootCAP256 := certificateV1{
details: detailsV1{
name: "nebula P256 test",
},
}
p, err := NewCAPoolFromPEM([]byte(noNewLines))
require.NoError(t, err)
assert.Equal(t, p.CAs["ce4e6c7a596996eb0d82a8875f0f0137a4b53ce22d2421c9fd7150e7a26f6300"].Certificate.Name(), rootCA.details.name)
assert.Equal(t, p.CAs["04c585fcd9a49b276df956a22b7ebea3bf23f1fca5a17c0b56ce2e626631969e"].Certificate.Name(), rootCA01.details.name)
pp, err := NewCAPoolFromPEM([]byte(withNewLines))
require.NoError(t, err)
assert.Equal(t, pp.CAs["ce4e6c7a596996eb0d82a8875f0f0137a4b53ce22d2421c9fd7150e7a26f6300"].Certificate.Name(), rootCA.details.name)
assert.Equal(t, pp.CAs["04c585fcd9a49b276df956a22b7ebea3bf23f1fca5a17c0b56ce2e626631969e"].Certificate.Name(), rootCA01.details.name)
// expired cert, no valid certs
ppp, err := NewCAPoolFromPEM([]byte(expired))
assert.Equal(t, ErrExpired, err)
assert.Equal(t, "expired", ppp.CAs["c39b35a0e8f246203fe4f32b9aa8bfd155f1ae6a6be9d78370641e43397f48f5"].Certificate.Name())
// expired cert, with valid certs
pppp, err := NewCAPoolFromPEM(append([]byte(expired), noNewLines...))
assert.Equal(t, ErrExpired, err)
assert.Equal(t, pppp.CAs["ce4e6c7a596996eb0d82a8875f0f0137a4b53ce22d2421c9fd7150e7a26f6300"].Certificate.Name(), rootCA.details.name)
assert.Equal(t, pppp.CAs["04c585fcd9a49b276df956a22b7ebea3bf23f1fca5a17c0b56ce2e626631969e"].Certificate.Name(), rootCA01.details.name)
assert.Equal(t, "expired", pppp.CAs["c39b35a0e8f246203fe4f32b9aa8bfd155f1ae6a6be9d78370641e43397f48f5"].Certificate.Name())
assert.Len(t, pppp.CAs, 3)
ppppp, err := NewCAPoolFromPEM([]byte(p256))
require.NoError(t, err)
assert.Equal(t, ppppp.CAs["552bf7d99bec1fc775a0e4c324bf6d8f789b3078f1919c7960d2e5e0c351ee97"].Certificate.Name(), rootCAP256.details.name)
assert.Len(t, ppppp.CAs, 1)
}
func TestCertificateV1_Verify(t *testing.T) {
ca, _, caKey, _ := NewTestCaCert(Version1, Curve_CURVE25519, time.Now(), time.Now().Add(10*time.Minute), nil, nil, nil)
c, _, _, _ := NewTestCert(Version1, Curve_CURVE25519, ca, caKey, "test cert", time.Now(), time.Now().Add(5*time.Minute), nil, nil, nil)
caPool := NewCAPool()
require.NoError(t, caPool.AddCA(ca))
f, err := c.Fingerprint()
require.NoError(t, err)
caPool.BlocklistFingerprint(f)
_, err = caPool.VerifyCertificate(time.Now(), c)
require.EqualError(t, err, "certificate is in the block list")
caPool.ResetCertBlocklist()
_, err = caPool.VerifyCertificate(time.Now(), c)
require.NoError(t, err)
_, err = caPool.VerifyCertificate(time.Now().Add(time.Hour*1000), c)
require.EqualError(t, err, "root certificate is expired")
assert.PanicsWithError(t, "certificate is valid before the signing certificate", func() {
NewTestCert(Version1, Curve_CURVE25519, ca, caKey, "test cert2", time.Time{}, time.Time{}, nil, nil, nil)
})
// Test group assertion
ca, _, caKey, _ = NewTestCaCert(Version1, Curve_CURVE25519, time.Now(), time.Now().Add(10*time.Minute), nil, nil, []string{"test1", "test2"})
caPem, err := ca.MarshalPEM()
require.NoError(t, err)
caPool = NewCAPool()
b, err := caPool.AddCAFromPEM(caPem)
require.NoError(t, err)
assert.Empty(t, b)
assert.PanicsWithError(t, "certificate contained a group not present on the signing ca: bad", func() {
NewTestCert(Version1, Curve_CURVE25519, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), nil, nil, []string{"test1", "bad"})
})
c, _, _, _ = NewTestCert(Version1, Curve_CURVE25519, ca, caKey, "test2", time.Now(), time.Now().Add(5*time.Minute), nil, nil, []string{"test1"})
require.NoError(t, err)
_, err = caPool.VerifyCertificate(time.Now(), c)
require.NoError(t, err)
}
func TestCertificateV1_VerifyP256(t *testing.T) {
ca, _, caKey, _ := NewTestCaCert(Version1, Curve_P256, time.Now(), time.Now().Add(10*time.Minute), nil, nil, nil)
c, _, _, _ := NewTestCert(Version1, Curve_P256, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), nil, nil, nil)
caPool := NewCAPool()
require.NoError(t, caPool.AddCA(ca))
f, err := c.Fingerprint()
require.NoError(t, err)
caPool.BlocklistFingerprint(f)
_, err = caPool.VerifyCertificate(time.Now(), c)
require.EqualError(t, err, "certificate is in the block list")
caPool.ResetCertBlocklist()
_, err = caPool.VerifyCertificate(time.Now(), c)
require.NoError(t, err)
_, err = caPool.VerifyCertificate(time.Now().Add(time.Hour*1000), c)
require.EqualError(t, err, "root certificate is expired")
assert.PanicsWithError(t, "certificate is valid before the signing certificate", func() {
NewTestCert(Version1, Curve_P256, ca, caKey, "test", time.Time{}, time.Time{}, nil, nil, nil)
})
// Test group assertion
ca, _, caKey, _ = NewTestCaCert(Version1, Curve_P256, time.Now(), time.Now().Add(10*time.Minute), nil, nil, []string{"test1", "test2"})
caPem, err := ca.MarshalPEM()
require.NoError(t, err)
caPool = NewCAPool()
b, err := caPool.AddCAFromPEM(caPem)
require.NoError(t, err)
assert.Empty(t, b)
assert.PanicsWithError(t, "certificate contained a group not present on the signing ca: bad", func() {
NewTestCert(Version1, Curve_P256, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), nil, nil, []string{"test1", "bad"})
})
c, _, _, _ = NewTestCert(Version1, Curve_P256, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), nil, nil, []string{"test1"})
_, err = caPool.VerifyCertificate(time.Now(), c)
require.NoError(t, err)
}
func TestCertificateV1_Verify_IPs(t *testing.T) {
caIp1 := mustParsePrefixUnmapped("10.0.0.0/16")
caIp2 := mustParsePrefixUnmapped("192.168.0.0/24")
ca, _, caKey, _ := NewTestCaCert(Version1, Curve_CURVE25519, time.Now(), time.Now().Add(10*time.Minute), []netip.Prefix{caIp1, caIp2}, nil, []string{"test"})
caPem, err := ca.MarshalPEM()
require.NoError(t, err)
caPool := NewCAPool()
b, err := caPool.AddCAFromPEM(caPem)
require.NoError(t, err)
assert.Empty(t, b)
// ip is outside the network
cIp1 := mustParsePrefixUnmapped("10.1.0.0/24")
cIp2 := mustParsePrefixUnmapped("192.168.0.1/16")
assert.PanicsWithError(t, "certificate contained a network assignment outside the limitations of the signing ca: 10.1.0.0/24", func() {
NewTestCert(Version1, Curve_CURVE25519, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), []netip.Prefix{cIp1, cIp2}, nil, []string{"test"})
})
// ip is outside the network reversed order of above
cIp1 = mustParsePrefixUnmapped("192.168.0.1/24")
cIp2 = mustParsePrefixUnmapped("10.1.0.0/24")
assert.PanicsWithError(t, "certificate contained a network assignment outside the limitations of the signing ca: 10.1.0.0/24", func() {
NewTestCert(Version1, Curve_CURVE25519, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), []netip.Prefix{cIp1, cIp2}, nil, []string{"test"})
})
// ip is within the network but mask is outside
cIp1 = mustParsePrefixUnmapped("10.0.1.0/15")
cIp2 = mustParsePrefixUnmapped("192.168.0.1/24")
assert.PanicsWithError(t, "certificate contained a network assignment outside the limitations of the signing ca: 10.0.1.0/15", func() {
NewTestCert(Version1, Curve_CURVE25519, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), []netip.Prefix{cIp1, cIp2}, nil, []string{"test"})
})
// ip is within the network but mask is outside reversed order of above
cIp1 = mustParsePrefixUnmapped("192.168.0.1/24")
cIp2 = mustParsePrefixUnmapped("10.0.1.0/15")
assert.PanicsWithError(t, "certificate contained a network assignment outside the limitations of the signing ca: 10.0.1.0/15", func() {
NewTestCert(Version1, Curve_CURVE25519, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), []netip.Prefix{cIp1, cIp2}, nil, []string{"test"})
})
// ip and mask are within the network
cIp1 = mustParsePrefixUnmapped("10.0.1.0/16")
cIp2 = mustParsePrefixUnmapped("192.168.0.1/25")
c, _, _, _ := NewTestCert(Version1, Curve_CURVE25519, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), []netip.Prefix{cIp1, cIp2}, nil, []string{"test"})
_, err = caPool.VerifyCertificate(time.Now(), c)
require.NoError(t, err)
// Exact matches
c, _, _, _ = NewTestCert(Version1, Curve_CURVE25519, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), []netip.Prefix{caIp1, caIp2}, nil, []string{"test"})
require.NoError(t, err)
_, err = caPool.VerifyCertificate(time.Now(), c)
require.NoError(t, err)
// Exact matches reversed
c, _, _, _ = NewTestCert(Version1, Curve_CURVE25519, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), []netip.Prefix{caIp2, caIp1}, nil, []string{"test"})
require.NoError(t, err)
_, err = caPool.VerifyCertificate(time.Now(), c)
require.NoError(t, err)
// Exact matches reversed with just 1
c, _, _, _ = NewTestCert(Version1, Curve_CURVE25519, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), []netip.Prefix{caIp1}, nil, []string{"test"})
require.NoError(t, err)
_, err = caPool.VerifyCertificate(time.Now(), c)
require.NoError(t, err)
}
func TestCertificateV1_Verify_Subnets(t *testing.T) {
caIp1 := mustParsePrefixUnmapped("10.0.0.0/16")
caIp2 := mustParsePrefixUnmapped("192.168.0.0/24")
ca, _, caKey, _ := NewTestCaCert(Version1, Curve_CURVE25519, time.Now(), time.Now().Add(10*time.Minute), nil, []netip.Prefix{caIp1, caIp2}, []string{"test"})
caPem, err := ca.MarshalPEM()
require.NoError(t, err)
caPool := NewCAPool()
b, err := caPool.AddCAFromPEM(caPem)
require.NoError(t, err)
assert.Empty(t, b)
// ip is outside the network
cIp1 := mustParsePrefixUnmapped("10.1.0.0/24")
cIp2 := mustParsePrefixUnmapped("192.168.0.1/16")
assert.PanicsWithError(t, "certificate contained an unsafe network assignment outside the limitations of the signing ca: 10.1.0.0/24", func() {
NewTestCert(Version1, Curve_CURVE25519, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), nil, []netip.Prefix{cIp1, cIp2}, []string{"test"})
})
// ip is outside the network reversed order of above
cIp1 = mustParsePrefixUnmapped("192.168.0.1/24")
cIp2 = mustParsePrefixUnmapped("10.1.0.0/24")
assert.PanicsWithError(t, "certificate contained an unsafe network assignment outside the limitations of the signing ca: 10.1.0.0/24", func() {
NewTestCert(Version1, Curve_CURVE25519, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), nil, []netip.Prefix{cIp1, cIp2}, []string{"test"})
})
// ip is within the network but mask is outside
cIp1 = mustParsePrefixUnmapped("10.0.1.0/15")
cIp2 = mustParsePrefixUnmapped("192.168.0.1/24")
assert.PanicsWithError(t, "certificate contained an unsafe network assignment outside the limitations of the signing ca: 10.0.1.0/15", func() {
NewTestCert(Version1, Curve_CURVE25519, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), nil, []netip.Prefix{cIp1, cIp2}, []string{"test"})
})
// ip is within the network but mask is outside reversed order of above
cIp1 = mustParsePrefixUnmapped("192.168.0.1/24")
cIp2 = mustParsePrefixUnmapped("10.0.1.0/15")
assert.PanicsWithError(t, "certificate contained an unsafe network assignment outside the limitations of the signing ca: 10.0.1.0/15", func() {
NewTestCert(Version1, Curve_CURVE25519, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), nil, []netip.Prefix{cIp1, cIp2}, []string{"test"})
})
// ip and mask are within the network
cIp1 = mustParsePrefixUnmapped("10.0.1.0/16")
cIp2 = mustParsePrefixUnmapped("192.168.0.1/25")
c, _, _, _ := NewTestCert(Version1, Curve_CURVE25519, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), nil, []netip.Prefix{cIp1, cIp2}, []string{"test"})
require.NoError(t, err)
_, err = caPool.VerifyCertificate(time.Now(), c)
require.NoError(t, err)
// Exact matches
c, _, _, _ = NewTestCert(Version1, Curve_CURVE25519, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), nil, []netip.Prefix{caIp1, caIp2}, []string{"test"})
require.NoError(t, err)
_, err = caPool.VerifyCertificate(time.Now(), c)
require.NoError(t, err)
// Exact matches reversed
c, _, _, _ = NewTestCert(Version1, Curve_CURVE25519, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), nil, []netip.Prefix{caIp2, caIp1}, []string{"test"})
require.NoError(t, err)
_, err = caPool.VerifyCertificate(time.Now(), c)
require.NoError(t, err)
// Exact matches reversed with just 1
c, _, _, _ = NewTestCert(Version1, Curve_CURVE25519, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), nil, []netip.Prefix{caIp1}, []string{"test"})
require.NoError(t, err)
_, err = caPool.VerifyCertificate(time.Now(), c)
require.NoError(t, err)
}
func TestCertificateV2_Verify(t *testing.T) {
ca, _, caKey, _ := NewTestCaCert(Version2, Curve_CURVE25519, time.Now(), time.Now().Add(10*time.Minute), nil, nil, nil)
c, _, _, _ := NewTestCert(Version2, Curve_CURVE25519, ca, caKey, "test cert", time.Now(), time.Now().Add(5*time.Minute), nil, nil, nil)
caPool := NewCAPool()
require.NoError(t, caPool.AddCA(ca))
f, err := c.Fingerprint()
require.NoError(t, err)
caPool.BlocklistFingerprint(f)
_, err = caPool.VerifyCertificate(time.Now(), c)
require.EqualError(t, err, "certificate is in the block list")
caPool.ResetCertBlocklist()
_, err = caPool.VerifyCertificate(time.Now(), c)
require.NoError(t, err)
_, err = caPool.VerifyCertificate(time.Now().Add(time.Hour*1000), c)
require.EqualError(t, err, "root certificate is expired")
assert.PanicsWithError(t, "certificate is valid before the signing certificate", func() {
NewTestCert(Version2, Curve_CURVE25519, ca, caKey, "test cert2", time.Time{}, time.Time{}, nil, nil, nil)
})
// Test group assertion
ca, _, caKey, _ = NewTestCaCert(Version2, Curve_CURVE25519, time.Now(), time.Now().Add(10*time.Minute), nil, nil, []string{"test1", "test2"})
caPem, err := ca.MarshalPEM()
require.NoError(t, err)
caPool = NewCAPool()
b, err := caPool.AddCAFromPEM(caPem)
require.NoError(t, err)
assert.Empty(t, b)
assert.PanicsWithError(t, "certificate contained a group not present on the signing ca: bad", func() {
NewTestCert(Version2, Curve_CURVE25519, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), nil, nil, []string{"test1", "bad"})
})
c, _, _, _ = NewTestCert(Version2, Curve_CURVE25519, ca, caKey, "test2", time.Now(), time.Now().Add(5*time.Minute), nil, nil, []string{"test1"})
require.NoError(t, err)
_, err = caPool.VerifyCertificate(time.Now(), c)
require.NoError(t, err)
}
func TestCertificateV2_VerifyP256(t *testing.T) {
ca, _, caKey, _ := NewTestCaCert(Version2, Curve_P256, time.Now(), time.Now().Add(10*time.Minute), nil, nil, nil)
c, _, _, _ := NewTestCert(Version2, Curve_P256, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), nil, nil, nil)
caPool := NewCAPool()
require.NoError(t, caPool.AddCA(ca))
f, err := c.Fingerprint()
require.NoError(t, err)
caPool.BlocklistFingerprint(f)
_, err = caPool.VerifyCertificate(time.Now(), c)
require.EqualError(t, err, "certificate is in the block list")
caPool.ResetCertBlocklist()
_, err = caPool.VerifyCertificate(time.Now(), c)
require.NoError(t, err)
_, err = caPool.VerifyCertificate(time.Now().Add(time.Hour*1000), c)
require.EqualError(t, err, "root certificate is expired")
assert.PanicsWithError(t, "certificate is valid before the signing certificate", func() {
NewTestCert(Version2, Curve_P256, ca, caKey, "test", time.Time{}, time.Time{}, nil, nil, nil)
})
// Test group assertion
ca, _, caKey, _ = NewTestCaCert(Version2, Curve_P256, time.Now(), time.Now().Add(10*time.Minute), nil, nil, []string{"test1", "test2"})
caPem, err := ca.MarshalPEM()
require.NoError(t, err)
caPool = NewCAPool()
b, err := caPool.AddCAFromPEM(caPem)
require.NoError(t, err)
assert.Empty(t, b)
assert.PanicsWithError(t, "certificate contained a group not present on the signing ca: bad", func() {
NewTestCert(Version2, Curve_P256, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), nil, nil, []string{"test1", "bad"})
})
c, _, _, _ = NewTestCert(Version2, Curve_P256, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), nil, nil, []string{"test1"})
_, err = caPool.VerifyCertificate(time.Now(), c)
require.NoError(t, err)
}
func TestCertificateV2_Verify_IPs(t *testing.T) {
caIp1 := mustParsePrefixUnmapped("10.0.0.0/16")
caIp2 := mustParsePrefixUnmapped("192.168.0.0/24")
ca, _, caKey, _ := NewTestCaCert(Version2, Curve_CURVE25519, time.Now(), time.Now().Add(10*time.Minute), []netip.Prefix{caIp1, caIp2}, nil, []string{"test"})
caPem, err := ca.MarshalPEM()
require.NoError(t, err)
caPool := NewCAPool()
b, err := caPool.AddCAFromPEM(caPem)
require.NoError(t, err)
assert.Empty(t, b)
// ip is outside the network
cIp1 := mustParsePrefixUnmapped("10.1.0.0/24")
cIp2 := mustParsePrefixUnmapped("192.168.0.1/16")
assert.PanicsWithError(t, "certificate contained a network assignment outside the limitations of the signing ca: 10.1.0.0/24", func() {
NewTestCert(Version2, Curve_CURVE25519, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), []netip.Prefix{cIp1, cIp2}, nil, []string{"test"})
})
// ip is outside the network reversed order of above
cIp1 = mustParsePrefixUnmapped("192.168.0.1/24")
cIp2 = mustParsePrefixUnmapped("10.1.0.0/24")
assert.PanicsWithError(t, "certificate contained a network assignment outside the limitations of the signing ca: 10.1.0.0/24", func() {
NewTestCert(Version2, Curve_CURVE25519, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), []netip.Prefix{cIp1, cIp2}, nil, []string{"test"})
})
// ip is within the network but mask is outside
cIp1 = mustParsePrefixUnmapped("10.0.1.0/15")
cIp2 = mustParsePrefixUnmapped("192.168.0.1/24")
assert.PanicsWithError(t, "certificate contained a network assignment outside the limitations of the signing ca: 10.0.1.0/15", func() {
NewTestCert(Version2, Curve_CURVE25519, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), []netip.Prefix{cIp1, cIp2}, nil, []string{"test"})
})
// ip is within the network but mask is outside reversed order of above
cIp1 = mustParsePrefixUnmapped("192.168.0.1/24")
cIp2 = mustParsePrefixUnmapped("10.0.1.0/15")
assert.PanicsWithError(t, "certificate contained a network assignment outside the limitations of the signing ca: 10.0.1.0/15", func() {
NewTestCert(Version2, Curve_CURVE25519, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), []netip.Prefix{cIp1, cIp2}, nil, []string{"test"})
})
// ip and mask are within the network
cIp1 = mustParsePrefixUnmapped("10.0.1.0/16")
cIp2 = mustParsePrefixUnmapped("192.168.0.1/25")
c, _, _, _ := NewTestCert(Version2, Curve_CURVE25519, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), []netip.Prefix{cIp1, cIp2}, nil, []string{"test"})
_, err = caPool.VerifyCertificate(time.Now(), c)
require.NoError(t, err)
// Exact matches
c, _, _, _ = NewTestCert(Version2, Curve_CURVE25519, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), []netip.Prefix{caIp1, caIp2}, nil, []string{"test"})
require.NoError(t, err)
_, err = caPool.VerifyCertificate(time.Now(), c)
require.NoError(t, err)
// Exact matches reversed
c, _, _, _ = NewTestCert(Version2, Curve_CURVE25519, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), []netip.Prefix{caIp2, caIp1}, nil, []string{"test"})
require.NoError(t, err)
_, err = caPool.VerifyCertificate(time.Now(), c)
require.NoError(t, err)
// Exact matches reversed with just 1
c, _, _, _ = NewTestCert(Version2, Curve_CURVE25519, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), []netip.Prefix{caIp1}, nil, []string{"test"})
require.NoError(t, err)
_, err = caPool.VerifyCertificate(time.Now(), c)
require.NoError(t, err)
}
func TestCertificateV2_Verify_Subnets(t *testing.T) {
caIp1 := mustParsePrefixUnmapped("10.0.0.0/16")
caIp2 := mustParsePrefixUnmapped("192.168.0.0/24")
ca, _, caKey, _ := NewTestCaCert(Version2, Curve_CURVE25519, time.Now(), time.Now().Add(10*time.Minute), nil, []netip.Prefix{caIp1, caIp2}, []string{"test"})
caPem, err := ca.MarshalPEM()
require.NoError(t, err)
caPool := NewCAPool()
b, err := caPool.AddCAFromPEM(caPem)
require.NoError(t, err)
assert.Empty(t, b)
// ip is outside the network
cIp1 := mustParsePrefixUnmapped("10.1.0.0/24")
cIp2 := mustParsePrefixUnmapped("192.168.0.1/16")
assert.PanicsWithError(t, "certificate contained an unsafe network assignment outside the limitations of the signing ca: 10.1.0.0/24", func() {
NewTestCert(Version2, Curve_CURVE25519, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), nil, []netip.Prefix{cIp1, cIp2}, []string{"test"})
})
// ip is outside the network reversed order of above
cIp1 = mustParsePrefixUnmapped("192.168.0.1/24")
cIp2 = mustParsePrefixUnmapped("10.1.0.0/24")
assert.PanicsWithError(t, "certificate contained an unsafe network assignment outside the limitations of the signing ca: 10.1.0.0/24", func() {
NewTestCert(Version2, Curve_CURVE25519, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), nil, []netip.Prefix{cIp1, cIp2}, []string{"test"})
})
// ip is within the network but mask is outside
cIp1 = mustParsePrefixUnmapped("10.0.1.0/15")
cIp2 = mustParsePrefixUnmapped("192.168.0.1/24")
assert.PanicsWithError(t, "certificate contained an unsafe network assignment outside the limitations of the signing ca: 10.0.1.0/15", func() {
NewTestCert(Version2, Curve_CURVE25519, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), nil, []netip.Prefix{cIp1, cIp2}, []string{"test"})
})
// ip is within the network but mask is outside reversed order of above
cIp1 = mustParsePrefixUnmapped("192.168.0.1/24")
cIp2 = mustParsePrefixUnmapped("10.0.1.0/15")
assert.PanicsWithError(t, "certificate contained an unsafe network assignment outside the limitations of the signing ca: 10.0.1.0/15", func() {
NewTestCert(Version2, Curve_CURVE25519, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), nil, []netip.Prefix{cIp1, cIp2}, []string{"test"})
})
// ip and mask are within the network
cIp1 = mustParsePrefixUnmapped("10.0.1.0/16")
cIp2 = mustParsePrefixUnmapped("192.168.0.1/25")
c, _, _, _ := NewTestCert(Version2, Curve_CURVE25519, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), nil, []netip.Prefix{cIp1, cIp2}, []string{"test"})
require.NoError(t, err)
_, err = caPool.VerifyCertificate(time.Now(), c)
require.NoError(t, err)
// Exact matches
c, _, _, _ = NewTestCert(Version2, Curve_CURVE25519, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), nil, []netip.Prefix{caIp1, caIp2}, []string{"test"})
require.NoError(t, err)
_, err = caPool.VerifyCertificate(time.Now(), c)
require.NoError(t, err)
// Exact matches reversed
c, _, _, _ = NewTestCert(Version2, Curve_CURVE25519, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), nil, []netip.Prefix{caIp2, caIp1}, []string{"test"})
require.NoError(t, err)
_, err = caPool.VerifyCertificate(time.Now(), c)
require.NoError(t, err)
// Exact matches reversed with just 1
c, _, _, _ = NewTestCert(Version2, Curve_CURVE25519, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), nil, []netip.Prefix{caIp1}, []string{"test"})
require.NoError(t, err)
_, err = caPool.VerifyCertificate(time.Now(), c)
require.NoError(t, err)
}

View File

@@ -1,153 +1,609 @@
package cert
import (
"fmt"
"net/netip"
"time"
)
type Version uint8
const (
VersionPre1 Version = 0
Version1 Version = 1
Version2 Version = 2
)
type Certificate interface {
// Version defines the underlying certificate structure and wire protocol version
// Version1 certificates are ipv4 only and uses protobuf serialization
// Version2 certificates are ipv4 or ipv6 and uses asn.1 serialization
Version() Version
// Name is the human-readable name that identifies this certificate.
Name() string
// Networks is a list of ip addresses and network sizes assigned to this certificate.
// If IsCA is true then certificates signed by this CA can only have ip addresses and
// networks that are contained by an entry in this list.
Networks() []netip.Prefix
// UnsafeNetworks is a list of networks that this host can act as an unsafe router for.
// If IsCA is true then certificates signed by this CA can only have networks that are
// contained by an entry in this list.
UnsafeNetworks() []netip.Prefix
// Groups is a list of identities that can be used to write more general firewall rule
// definitions.
// If IsCA is true then certificates signed by this CA can only use groups that are
// in this list.
Groups() []string
// IsCA signifies if this is a certificate authority (true) or a host certificate (false).
// It is invalid to use a CA certificate as a host certificate.
IsCA() bool
// NotBefore is the time at which this certificate becomes valid.
// If IsCA is true then certificate signed by this CA can not have a time before this.
NotBefore() time.Time
// NotAfter is the time at which this certificate becomes invalid.
// If IsCA is true then certificate signed by this CA can not have a time after this.
NotAfter() time.Time
// Issuer is the fingerprint of the CA that signed this certificate.
// If IsCA is true then this will be empty.
Issuer() string
// PublicKey is the raw bytes to be used in asymmetric cryptographic operations.
PublicKey() []byte
// MarshalPublicKeyPEM is the value of PublicKey marshalled to PEM
MarshalPublicKeyPEM() []byte
// Curve identifies which curve was used for the PublicKey and Signature.
Curve() Curve
// Signature is the cryptographic seal for all the details of this certificate.
// CheckSignature can be used to verify that the details of this certificate are valid.
Signature() []byte
// CheckSignature will check that the certificate Signature() matches the
// computed signature. A true result means this certificate has not been tampered with.
CheckSignature(signingPublicKey []byte) bool
// Fingerprint returns the hex encoded sha256 sum of the certificate.
// This acts as a unique fingerprint and can be used to blocklist certificates.
Fingerprint() (string, error)
// Expired tests if the certificate is valid for the provided time.
Expired(t time.Time) bool
// VerifyPrivateKey returns an error if the private key is not a pair with the certificates public key.
VerifyPrivateKey(curve Curve, privateKey []byte) error
// Marshal will return the byte representation of this certificate
// This is primarily the format transmitted on the wire.
Marshal() ([]byte, error)
// MarshalForHandshakes prepares the bytes needed to use directly in a handshake
MarshalForHandshakes() ([]byte, error)
// MarshalPEM will return a PEM encoded representation of this certificate
// This is primarily the format stored on disk
MarshalPEM() ([]byte, error)
// MarshalJSON will return the json representation of this certificate
MarshalJSON() ([]byte, error)
// String will return a human-readable representation of this certificate
String() string
// Copy creates a copy of the certificate
Copy() Certificate
}
// CachedCertificate represents a verified certificate with some cached fields to improve
// performance.
type CachedCertificate struct {
Certificate Certificate
InvertedGroups map[string]struct{}
Fingerprint string
signerFingerprint string
}
func (cc *CachedCertificate) String() string {
return cc.Certificate.String()
}
// Recombine will attempt to unmarshal a certificate received in a handshake.
// Handshakes save space by placing the peers public key in a different part of the packet, we have to
// reassemble the actual certificate structure with that in mind.
func Recombine(v Version, rawCertBytes, publicKey []byte, curve Curve) (Certificate, error) {
if publicKey == nil {
return nil, ErrNoPeerStaticKey
}
if rawCertBytes == nil {
return nil, ErrNoPayload
}
var c Certificate
var err error
switch v {
// Implementations must ensure the result is a valid cert!
case VersionPre1, Version1:
c, err = unmarshalCertificateV1(rawCertBytes, publicKey)
case Version2:
c, err = unmarshalCertificateV2(rawCertBytes, publicKey, curve)
default:
return nil, ErrUnknownVersion
}
if err != nil {
return nil, err
}
if c.Curve() != curve {
return nil, fmt.Errorf("certificate curve %s does not match expected %s", c.Curve().String(), curve.String())
}
return c, nil
}
package cert
import (
"bytes"
"crypto"
"crypto/rand"
"crypto/sha256"
"encoding/binary"
"encoding/hex"
"encoding/json"
"encoding/pem"
"fmt"
"net"
"time"
"github.com/golang/protobuf/proto"
"golang.org/x/crypto/curve25519"
"golang.org/x/crypto/ed25519"
)
const publicKeyLen = 32
const (
CertBanner = "NEBULA CERTIFICATE"
X25519PrivateKeyBanner = "NEBULA X25519 PRIVATE KEY"
X25519PublicKeyBanner = "NEBULA X25519 PUBLIC KEY"
Ed25519PrivateKeyBanner = "NEBULA ED25519 PRIVATE KEY"
Ed25519PublicKeyBanner = "NEBULA ED25519 PUBLIC KEY"
)
type NebulaCertificate struct {
Details NebulaCertificateDetails
Signature []byte
}
type NebulaCertificateDetails struct {
Name string
Ips []*net.IPNet
Subnets []*net.IPNet
Groups []string
NotBefore time.Time
NotAfter time.Time
PublicKey []byte
IsCA bool
Issuer string
// Map of groups for faster lookup
InvertedGroups map[string]struct{}
}
type m map[string]interface{}
// UnmarshalNebulaCertificate will unmarshal a protobuf byte representation of a nebula cert
func UnmarshalNebulaCertificate(b []byte) (*NebulaCertificate, error) {
if len(b) == 0 {
return nil, fmt.Errorf("nil byte array")
}
var rc RawNebulaCertificate
err := proto.Unmarshal(b, &rc)
if err != nil {
return nil, err
}
if rc.Details == nil {
return nil, fmt.Errorf("encoded Details was nil")
}
if len(rc.Details.Ips)%2 != 0 {
return nil, fmt.Errorf("encoded IPs should be in pairs, an odd number was found")
}
if len(rc.Details.Subnets)%2 != 0 {
return nil, fmt.Errorf("encoded Subnets should be in pairs, an odd number was found")
}
nc := NebulaCertificate{
Details: NebulaCertificateDetails{
Name: rc.Details.Name,
Groups: make([]string, len(rc.Details.Groups)),
Ips: make([]*net.IPNet, len(rc.Details.Ips)/2),
Subnets: make([]*net.IPNet, len(rc.Details.Subnets)/2),
NotBefore: time.Unix(rc.Details.NotBefore, 0),
NotAfter: time.Unix(rc.Details.NotAfter, 0),
PublicKey: make([]byte, len(rc.Details.PublicKey)),
IsCA: rc.Details.IsCA,
InvertedGroups: make(map[string]struct{}),
},
Signature: make([]byte, len(rc.Signature)),
}
copy(nc.Signature, rc.Signature)
copy(nc.Details.Groups, rc.Details.Groups)
nc.Details.Issuer = hex.EncodeToString(rc.Details.Issuer)
if len(rc.Details.PublicKey) < publicKeyLen {
return nil, fmt.Errorf("Public key was fewer than 32 bytes; %v", len(rc.Details.PublicKey))
}
copy(nc.Details.PublicKey, rc.Details.PublicKey)
for i, rawIp := range rc.Details.Ips {
if i%2 == 0 {
nc.Details.Ips[i/2] = &net.IPNet{IP: int2ip(rawIp)}
} else {
nc.Details.Ips[i/2].Mask = net.IPMask(int2ip(rawIp))
}
}
for i, rawIp := range rc.Details.Subnets {
if i%2 == 0 {
nc.Details.Subnets[i/2] = &net.IPNet{IP: int2ip(rawIp)}
} else {
nc.Details.Subnets[i/2].Mask = net.IPMask(int2ip(rawIp))
}
}
for _, g := range rc.Details.Groups {
nc.Details.InvertedGroups[g] = struct{}{}
}
return &nc, nil
}
// UnmarshalNebulaCertificateFromPEM will unmarshal the first pem block in a byte array, returning any non consumed data
// or an error on failure
func UnmarshalNebulaCertificateFromPEM(b []byte) (*NebulaCertificate, []byte, error) {
p, r := pem.Decode(b)
if p == nil {
return nil, r, fmt.Errorf("input did not contain a valid PEM encoded block")
}
if p.Type != CertBanner {
return nil, r, fmt.Errorf("bytes did not contain a proper nebula certificate banner")
}
nc, err := UnmarshalNebulaCertificate(p.Bytes)
return nc, r, err
}
// MarshalX25519PrivateKey is a simple helper to PEM encode an X25519 private key
func MarshalX25519PrivateKey(b []byte) []byte {
return pem.EncodeToMemory(&pem.Block{Type: X25519PrivateKeyBanner, Bytes: b})
}
// MarshalEd25519PrivateKey is a simple helper to PEM encode an Ed25519 private key
func MarshalEd25519PrivateKey(key ed25519.PrivateKey) []byte {
return pem.EncodeToMemory(&pem.Block{Type: Ed25519PrivateKeyBanner, Bytes: key})
}
// UnmarshalX25519PrivateKey will try to pem decode an X25519 private key, returning any other bytes b
// or an error on failure
func UnmarshalX25519PrivateKey(b []byte) ([]byte, []byte, error) {
k, r := pem.Decode(b)
if k == nil {
return nil, r, fmt.Errorf("input did not contain a valid PEM encoded block")
}
if k.Type != X25519PrivateKeyBanner {
return nil, r, fmt.Errorf("bytes did not contain a proper nebula X25519 private key banner")
}
if len(k.Bytes) != publicKeyLen {
return nil, r, fmt.Errorf("key was not 32 bytes, is invalid X25519 private key")
}
return k.Bytes, r, nil
}
// UnmarshalEd25519PrivateKey will try to pem decode an Ed25519 private key, returning any other bytes b
// or an error on failure
func UnmarshalEd25519PrivateKey(b []byte) (ed25519.PrivateKey, []byte, error) {
k, r := pem.Decode(b)
if k == nil {
return nil, r, fmt.Errorf("input did not contain a valid PEM encoded block")
}
if k.Type != Ed25519PrivateKeyBanner {
return nil, r, fmt.Errorf("bytes did not contain a proper nebula Ed25519 private key banner")
}
if len(k.Bytes) != ed25519.PrivateKeySize {
return nil, r, fmt.Errorf("key was not 64 bytes, is invalid ed25519 private key")
}
return k.Bytes, r, nil
}
// MarshalX25519PublicKey is a simple helper to PEM encode an X25519 public key
func MarshalX25519PublicKey(b []byte) []byte {
return pem.EncodeToMemory(&pem.Block{Type: X25519PublicKeyBanner, Bytes: b})
}
// MarshalEd25519PublicKey is a simple helper to PEM encode an Ed25519 public key
func MarshalEd25519PublicKey(key ed25519.PublicKey) []byte {
return pem.EncodeToMemory(&pem.Block{Type: Ed25519PublicKeyBanner, Bytes: key})
}
// UnmarshalX25519PublicKey will try to pem decode an X25519 public key, returning any other bytes b
// or an error on failure
func UnmarshalX25519PublicKey(b []byte) ([]byte, []byte, error) {
k, r := pem.Decode(b)
if k == nil {
return nil, r, fmt.Errorf("input did not contain a valid PEM encoded block")
}
if k.Type != X25519PublicKeyBanner {
return nil, r, fmt.Errorf("bytes did not contain a proper nebula X25519 public key banner")
}
if len(k.Bytes) != publicKeyLen {
return nil, r, fmt.Errorf("key was not 32 bytes, is invalid X25519 public key")
}
return k.Bytes, r, nil
}
// UnmarshalEd25519PublicKey will try to pem decode an Ed25519 public key, returning any other bytes b
// or an error on failure
func UnmarshalEd25519PublicKey(b []byte) (ed25519.PublicKey, []byte, error) {
k, r := pem.Decode(b)
if k == nil {
return nil, r, fmt.Errorf("input did not contain a valid PEM encoded block")
}
if k.Type != Ed25519PublicKeyBanner {
return nil, r, fmt.Errorf("bytes did not contain a proper nebula Ed25519 public key banner")
}
if len(k.Bytes) != ed25519.PublicKeySize {
return nil, r, fmt.Errorf("key was not 32 bytes, is invalid ed25519 public key")
}
return k.Bytes, r, nil
}
// Sign signs a nebula cert with the provided private key
func (nc *NebulaCertificate) Sign(key ed25519.PrivateKey) error {
b, err := proto.Marshal(nc.getRawDetails())
if err != nil {
return err
}
sig, err := key.Sign(rand.Reader, b, crypto.Hash(0))
if err != nil {
return err
}
nc.Signature = sig
return nil
}
// CheckSignature verifies the signature against the provided public key
func (nc *NebulaCertificate) CheckSignature(key ed25519.PublicKey) bool {
b, err := proto.Marshal(nc.getRawDetails())
if err != nil {
return false
}
return ed25519.Verify(key, b, nc.Signature)
}
// Expired will return true if the nebula cert is too young or too old compared to the provided time, otherwise false
func (nc *NebulaCertificate) Expired(t time.Time) bool {
return nc.Details.NotBefore.After(t) || nc.Details.NotAfter.Before(t)
}
// Verify will ensure a certificate is good in all respects (expiry, group membership, signature, cert blocklist, etc)
func (nc *NebulaCertificate) Verify(t time.Time, ncp *NebulaCAPool) (bool, error) {
if ncp.IsBlocklisted(nc) {
return false, fmt.Errorf("certificate has been blocked")
}
signer, err := ncp.GetCAForCert(nc)
if err != nil {
return false, err
}
if signer.Expired(t) {
return false, fmt.Errorf("root certificate is expired")
}
if nc.Expired(t) {
return false, fmt.Errorf("certificate is expired")
}
if !nc.CheckSignature(signer.Details.PublicKey) {
return false, fmt.Errorf("certificate signature did not match")
}
if err := nc.CheckRootConstrains(signer); err != nil {
return false, err
}
return true, nil
}
// CheckRootConstrains returns an error if the certificate violates constraints set on the root (groups, ips, subnets)
func (nc *NebulaCertificate) CheckRootConstrains(signer *NebulaCertificate) error {
// Make sure this cert isn't valid after the root
if signer.Details.NotAfter.Before(nc.Details.NotAfter) {
return fmt.Errorf("certificate expires after signing certificate")
}
// Make sure this cert wasn't valid before the root
if signer.Details.NotBefore.After(nc.Details.NotBefore) {
return fmt.Errorf("certificate is valid before the signing certificate")
}
// If the signer has a limited set of groups make sure the cert only contains a subset
if len(signer.Details.InvertedGroups) > 0 {
for _, g := range nc.Details.Groups {
if _, ok := signer.Details.InvertedGroups[g]; !ok {
return fmt.Errorf("certificate contained a group not present on the signing ca: %s", g)
}
}
}
// If the signer has a limited set of ip ranges to issue from make sure the cert only contains a subset
if len(signer.Details.Ips) > 0 {
for _, ip := range nc.Details.Ips {
if !netMatch(ip, signer.Details.Ips) {
return fmt.Errorf("certificate contained an ip assignment outside the limitations of the signing ca: %s", ip.String())
}
}
}
// If the signer has a limited set of subnet ranges to issue from make sure the cert only contains a subset
if len(signer.Details.Subnets) > 0 {
for _, subnet := range nc.Details.Subnets {
if !netMatch(subnet, signer.Details.Subnets) {
return fmt.Errorf("certificate contained a subnet assignment outside the limitations of the signing ca: %s", subnet)
}
}
}
return nil
}
// VerifyPrivateKey checks that the public key in the Nebula certificate and a supplied private key match
func (nc *NebulaCertificate) VerifyPrivateKey(key []byte) error {
if nc.Details.IsCA {
// the call to PublicKey below will panic slice bounds out of range otherwise
if len(key) != ed25519.PrivateKeySize {
return fmt.Errorf("key was not 64 bytes, is invalid ed25519 private key")
}
if !ed25519.PublicKey(nc.Details.PublicKey).Equal(ed25519.PrivateKey(key).Public()) {
return fmt.Errorf("public key in cert and private key supplied don't match")
}
return nil
}
pub, err := curve25519.X25519(key, curve25519.Basepoint)
if err != nil {
return err
}
if !bytes.Equal(pub, nc.Details.PublicKey) {
return fmt.Errorf("public key in cert and private key supplied don't match")
}
return nil
}
// String will return a pretty printed representation of a nebula cert
func (nc *NebulaCertificate) String() string {
if nc == nil {
return "NebulaCertificate {}\n"
}
s := "NebulaCertificate {\n"
s += "\tDetails {\n"
s += fmt.Sprintf("\t\tName: %v\n", nc.Details.Name)
if len(nc.Details.Ips) > 0 {
s += "\t\tIps: [\n"
for _, ip := range nc.Details.Ips {
s += fmt.Sprintf("\t\t\t%v\n", ip.String())
}
s += "\t\t]\n"
} else {
s += "\t\tIps: []\n"
}
if len(nc.Details.Subnets) > 0 {
s += "\t\tSubnets: [\n"
for _, ip := range nc.Details.Subnets {
s += fmt.Sprintf("\t\t\t%v\n", ip.String())
}
s += "\t\t]\n"
} else {
s += "\t\tSubnets: []\n"
}
if len(nc.Details.Groups) > 0 {
s += "\t\tGroups: [\n"
for _, g := range nc.Details.Groups {
s += fmt.Sprintf("\t\t\t\"%v\"\n", g)
}
s += "\t\t]\n"
} else {
s += "\t\tGroups: []\n"
}
s += fmt.Sprintf("\t\tNot before: %v\n", nc.Details.NotBefore)
s += fmt.Sprintf("\t\tNot After: %v\n", nc.Details.NotAfter)
s += fmt.Sprintf("\t\tIs CA: %v\n", nc.Details.IsCA)
s += fmt.Sprintf("\t\tIssuer: %s\n", nc.Details.Issuer)
s += fmt.Sprintf("\t\tPublic key: %x\n", nc.Details.PublicKey)
s += "\t}\n"
fp, err := nc.Sha256Sum()
if err == nil {
s += fmt.Sprintf("\tFingerprint: %s\n", fp)
}
s += fmt.Sprintf("\tSignature: %x\n", nc.Signature)
s += "}"
return s
}
// getRawDetails marshals the raw details into protobuf ready struct
func (nc *NebulaCertificate) getRawDetails() *RawNebulaCertificateDetails {
rd := &RawNebulaCertificateDetails{
Name: nc.Details.Name,
Groups: nc.Details.Groups,
NotBefore: nc.Details.NotBefore.Unix(),
NotAfter: nc.Details.NotAfter.Unix(),
PublicKey: make([]byte, len(nc.Details.PublicKey)),
IsCA: nc.Details.IsCA,
}
for _, ipNet := range nc.Details.Ips {
rd.Ips = append(rd.Ips, ip2int(ipNet.IP), ip2int(ipNet.Mask))
}
for _, ipNet := range nc.Details.Subnets {
rd.Subnets = append(rd.Subnets, ip2int(ipNet.IP), ip2int(ipNet.Mask))
}
copy(rd.PublicKey, nc.Details.PublicKey[:])
// I know, this is terrible
rd.Issuer, _ = hex.DecodeString(nc.Details.Issuer)
return rd
}
// Marshal will marshal a nebula cert into a protobuf byte array
func (nc *NebulaCertificate) Marshal() ([]byte, error) {
rc := RawNebulaCertificate{
Details: nc.getRawDetails(),
Signature: nc.Signature,
}
return proto.Marshal(&rc)
}
// MarshalToPEM will marshal a nebula cert into a protobuf byte array and pem encode the result
func (nc *NebulaCertificate) MarshalToPEM() ([]byte, error) {
b, err := nc.Marshal()
if err != nil {
return nil, err
}
return pem.EncodeToMemory(&pem.Block{Type: CertBanner, Bytes: b}), nil
}
// Sha256Sum calculates a sha-256 sum of the marshaled certificate
func (nc *NebulaCertificate) Sha256Sum() (string, error) {
b, err := nc.Marshal()
if err != nil {
return "", err
}
sum := sha256.Sum256(b)
return hex.EncodeToString(sum[:]), nil
}
func (nc *NebulaCertificate) MarshalJSON() ([]byte, error) {
toString := func(ips []*net.IPNet) []string {
s := []string{}
for _, ip := range ips {
s = append(s, ip.String())
}
return s
}
fp, _ := nc.Sha256Sum()
jc := m{
"details": m{
"name": nc.Details.Name,
"ips": toString(nc.Details.Ips),
"subnets": toString(nc.Details.Subnets),
"groups": nc.Details.Groups,
"notBefore": nc.Details.NotBefore,
"notAfter": nc.Details.NotAfter,
"publicKey": fmt.Sprintf("%x", nc.Details.PublicKey),
"isCa": nc.Details.IsCA,
"issuer": nc.Details.Issuer,
},
"fingerprint": fp,
"signature": fmt.Sprintf("%x", nc.Signature),
}
return json.Marshal(jc)
}
//func (nc *NebulaCertificate) Copy() *NebulaCertificate {
// r, err := nc.Marshal()
// if err != nil {
// //TODO
// return nil
// }
//
// c, err := UnmarshalNebulaCertificate(r)
// return c
//}
func (nc *NebulaCertificate) Copy() *NebulaCertificate {
c := &NebulaCertificate{
Details: NebulaCertificateDetails{
Name: nc.Details.Name,
Groups: make([]string, len(nc.Details.Groups)),
Ips: make([]*net.IPNet, len(nc.Details.Ips)),
Subnets: make([]*net.IPNet, len(nc.Details.Subnets)),
NotBefore: nc.Details.NotBefore,
NotAfter: nc.Details.NotAfter,
PublicKey: make([]byte, len(nc.Details.PublicKey)),
IsCA: nc.Details.IsCA,
Issuer: nc.Details.Issuer,
InvertedGroups: make(map[string]struct{}, len(nc.Details.InvertedGroups)),
},
Signature: make([]byte, len(nc.Signature)),
}
copy(c.Signature, nc.Signature)
copy(c.Details.Groups, nc.Details.Groups)
copy(c.Details.PublicKey, nc.Details.PublicKey)
for i, p := range nc.Details.Ips {
c.Details.Ips[i] = &net.IPNet{
IP: make(net.IP, len(p.IP)),
Mask: make(net.IPMask, len(p.Mask)),
}
copy(c.Details.Ips[i].IP, p.IP)
copy(c.Details.Ips[i].Mask, p.Mask)
}
for i, p := range nc.Details.Subnets {
c.Details.Subnets[i] = &net.IPNet{
IP: make(net.IP, len(p.IP)),
Mask: make(net.IPMask, len(p.Mask)),
}
copy(c.Details.Subnets[i].IP, p.IP)
copy(c.Details.Subnets[i].Mask, p.Mask)
}
for g := range nc.Details.InvertedGroups {
c.Details.InvertedGroups[g] = struct{}{}
}
return c
}
func netMatch(certIp *net.IPNet, rootIps []*net.IPNet) bool {
for _, net := range rootIps {
if net.Contains(certIp.IP) && maskContains(net.Mask, certIp.Mask) {
return true
}
}
return false
}
func maskContains(caMask, certMask net.IPMask) bool {
caM := maskTo4(caMask)
cM := maskTo4(certMask)
// Make sure forcing to ipv4 didn't nuke us
if caM == nil || cM == nil {
return false
}
// Make sure the cert mask is at least as specific as the ca mask
for i := 0; i < len(caM); i++ {
if caM[i] > cM[i] {
return false
}
}
return true
}
func maskTo4(ip net.IPMask) net.IPMask {
if len(ip) == net.IPv4len {
return ip
}
if len(ip) == net.IPv6len && isZeros(ip[0:10]) && ip[10] == 0xff && ip[11] == 0xff {
return ip[12:16]
}
return nil
}
func isZeros(b []byte) bool {
for i := 0; i < len(b); i++ {
if b[i] != 0 {
return false
}
}
return true
}
func ip2int(ip []byte) uint32 {
if len(ip) == 16 {
return binary.BigEndian.Uint32(ip[12:16])
}
return binary.BigEndian.Uint32(ip)
}
func int2ip(nn uint32) net.IP {
ip := make(net.IP, net.IPv4len)
binary.BigEndian.PutUint32(ip, nn)
return ip
}
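// Example (illustrative, not part of the original file): getRawDetails stores each
// network as a pair of big endian uint32 values, address first and mask second,
// which ip2int/int2ip round-trip losslessly for IPv4.
func exampleNetworkPairEncoding() {
    _, n, _ := net.ParseCIDR("10.1.1.0/24")
    pair := []uint32{ip2int(n.IP), ip2int(n.Mask)}
    fmt.Println(pair)                             // [167837952 4294967040]
    fmt.Println(int2ip(pair[0]), int2ip(pair[1])) // 10.1.1.0 255.255.255.0
}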

298
cert/cert.pb.go Normal file
View File

@@ -0,0 +1,298 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.26.0
// protoc v3.14.0
// source: cert.proto
package cert
import (
protoreflect "google.golang.org/protobuf/reflect/protoreflect"
protoimpl "google.golang.org/protobuf/runtime/protoimpl"
reflect "reflect"
sync "sync"
)
const (
// Verify that this generated code is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
// Verify that runtime/protoimpl is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
)
type RawNebulaCertificate struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Details *RawNebulaCertificateDetails `protobuf:"bytes,1,opt,name=Details,proto3" json:"Details,omitempty"`
Signature []byte `protobuf:"bytes,2,opt,name=Signature,proto3" json:"Signature,omitempty"`
}
func (x *RawNebulaCertificate) Reset() {
*x = RawNebulaCertificate{}
if protoimpl.UnsafeEnabled {
mi := &file_cert_proto_msgTypes[0]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *RawNebulaCertificate) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*RawNebulaCertificate) ProtoMessage() {}
func (x *RawNebulaCertificate) ProtoReflect() protoreflect.Message {
mi := &file_cert_proto_msgTypes[0]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use RawNebulaCertificate.ProtoReflect.Descriptor instead.
func (*RawNebulaCertificate) Descriptor() ([]byte, []int) {
return file_cert_proto_rawDescGZIP(), []int{0}
}
func (x *RawNebulaCertificate) GetDetails() *RawNebulaCertificateDetails {
if x != nil {
return x.Details
}
return nil
}
func (x *RawNebulaCertificate) GetSignature() []byte {
if x != nil {
return x.Signature
}
return nil
}
type RawNebulaCertificateDetails struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Name string `protobuf:"bytes,1,opt,name=Name,proto3" json:"Name,omitempty"`
// Ips and Subnets are in big endian 32 bit pairs, 1st the ip, 2nd the mask
Ips []uint32 `protobuf:"varint,2,rep,packed,name=Ips,proto3" json:"Ips,omitempty"`
Subnets []uint32 `protobuf:"varint,3,rep,packed,name=Subnets,proto3" json:"Subnets,omitempty"`
Groups []string `protobuf:"bytes,4,rep,name=Groups,proto3" json:"Groups,omitempty"`
NotBefore int64 `protobuf:"varint,5,opt,name=NotBefore,proto3" json:"NotBefore,omitempty"`
NotAfter int64 `protobuf:"varint,6,opt,name=NotAfter,proto3" json:"NotAfter,omitempty"`
PublicKey []byte `protobuf:"bytes,7,opt,name=PublicKey,proto3" json:"PublicKey,omitempty"`
IsCA bool `protobuf:"varint,8,opt,name=IsCA,proto3" json:"IsCA,omitempty"`
// sha-256 of the issuer certificate, if this field is blank the cert is self-signed
Issuer []byte `protobuf:"bytes,9,opt,name=Issuer,proto3" json:"Issuer,omitempty"`
}
func (x *RawNebulaCertificateDetails) Reset() {
*x = RawNebulaCertificateDetails{}
if protoimpl.UnsafeEnabled {
mi := &file_cert_proto_msgTypes[1]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *RawNebulaCertificateDetails) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*RawNebulaCertificateDetails) ProtoMessage() {}
func (x *RawNebulaCertificateDetails) ProtoReflect() protoreflect.Message {
mi := &file_cert_proto_msgTypes[1]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use RawNebulaCertificateDetails.ProtoReflect.Descriptor instead.
func (*RawNebulaCertificateDetails) Descriptor() ([]byte, []int) {
return file_cert_proto_rawDescGZIP(), []int{1}
}
func (x *RawNebulaCertificateDetails) GetName() string {
if x != nil {
return x.Name
}
return ""
}
func (x *RawNebulaCertificateDetails) GetIps() []uint32 {
if x != nil {
return x.Ips
}
return nil
}
func (x *RawNebulaCertificateDetails) GetSubnets() []uint32 {
if x != nil {
return x.Subnets
}
return nil
}
func (x *RawNebulaCertificateDetails) GetGroups() []string {
if x != nil {
return x.Groups
}
return nil
}
func (x *RawNebulaCertificateDetails) GetNotBefore() int64 {
if x != nil {
return x.NotBefore
}
return 0
}
func (x *RawNebulaCertificateDetails) GetNotAfter() int64 {
if x != nil {
return x.NotAfter
}
return 0
}
func (x *RawNebulaCertificateDetails) GetPublicKey() []byte {
if x != nil {
return x.PublicKey
}
return nil
}
func (x *RawNebulaCertificateDetails) GetIsCA() bool {
if x != nil {
return x.IsCA
}
return false
}
func (x *RawNebulaCertificateDetails) GetIssuer() []byte {
if x != nil {
return x.Issuer
}
return nil
}
var File_cert_proto protoreflect.FileDescriptor
var file_cert_proto_rawDesc = []byte{
0x0a, 0x0a, 0x63, 0x65, 0x72, 0x74, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x04, 0x63, 0x65,
0x72, 0x74, 0x22, 0x71, 0x0a, 0x14, 0x52, 0x61, 0x77, 0x4e, 0x65, 0x62, 0x75, 0x6c, 0x61, 0x43,
0x65, 0x72, 0x74, 0x69, 0x66, 0x69, 0x63, 0x61, 0x74, 0x65, 0x12, 0x3b, 0x0a, 0x07, 0x44, 0x65,
0x74, 0x61, 0x69, 0x6c, 0x73, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x21, 0x2e, 0x63, 0x65,
0x72, 0x74, 0x2e, 0x52, 0x61, 0x77, 0x4e, 0x65, 0x62, 0x75, 0x6c, 0x61, 0x43, 0x65, 0x72, 0x74,
0x69, 0x66, 0x69, 0x63, 0x61, 0x74, 0x65, 0x44, 0x65, 0x74, 0x61, 0x69, 0x6c, 0x73, 0x52, 0x07,
0x44, 0x65, 0x74, 0x61, 0x69, 0x6c, 0x73, 0x12, 0x1c, 0x0a, 0x09, 0x53, 0x69, 0x67, 0x6e, 0x61,
0x74, 0x75, 0x72, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x09, 0x53, 0x69, 0x67, 0x6e,
0x61, 0x74, 0x75, 0x72, 0x65, 0x22, 0xf9, 0x01, 0x0a, 0x1b, 0x52, 0x61, 0x77, 0x4e, 0x65, 0x62,
0x75, 0x6c, 0x61, 0x43, 0x65, 0x72, 0x74, 0x69, 0x66, 0x69, 0x63, 0x61, 0x74, 0x65, 0x44, 0x65,
0x74, 0x61, 0x69, 0x6c, 0x73, 0x12, 0x12, 0x0a, 0x04, 0x4e, 0x61, 0x6d, 0x65, 0x18, 0x01, 0x20,
0x01, 0x28, 0x09, 0x52, 0x04, 0x4e, 0x61, 0x6d, 0x65, 0x12, 0x10, 0x0a, 0x03, 0x49, 0x70, 0x73,
0x18, 0x02, 0x20, 0x03, 0x28, 0x0d, 0x52, 0x03, 0x49, 0x70, 0x73, 0x12, 0x18, 0x0a, 0x07, 0x53,
0x75, 0x62, 0x6e, 0x65, 0x74, 0x73, 0x18, 0x03, 0x20, 0x03, 0x28, 0x0d, 0x52, 0x07, 0x53, 0x75,
0x62, 0x6e, 0x65, 0x74, 0x73, 0x12, 0x16, 0x0a, 0x06, 0x47, 0x72, 0x6f, 0x75, 0x70, 0x73, 0x18,
0x04, 0x20, 0x03, 0x28, 0x09, 0x52, 0x06, 0x47, 0x72, 0x6f, 0x75, 0x70, 0x73, 0x12, 0x1c, 0x0a,
0x09, 0x4e, 0x6f, 0x74, 0x42, 0x65, 0x66, 0x6f, 0x72, 0x65, 0x18, 0x05, 0x20, 0x01, 0x28, 0x03,
0x52, 0x09, 0x4e, 0x6f, 0x74, 0x42, 0x65, 0x66, 0x6f, 0x72, 0x65, 0x12, 0x1a, 0x0a, 0x08, 0x4e,
0x6f, 0x74, 0x41, 0x66, 0x74, 0x65, 0x72, 0x18, 0x06, 0x20, 0x01, 0x28, 0x03, 0x52, 0x08, 0x4e,
0x6f, 0x74, 0x41, 0x66, 0x74, 0x65, 0x72, 0x12, 0x1c, 0x0a, 0x09, 0x50, 0x75, 0x62, 0x6c, 0x69,
0x63, 0x4b, 0x65, 0x79, 0x18, 0x07, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x09, 0x50, 0x75, 0x62, 0x6c,
0x69, 0x63, 0x4b, 0x65, 0x79, 0x12, 0x12, 0x0a, 0x04, 0x49, 0x73, 0x43, 0x41, 0x18, 0x08, 0x20,
0x01, 0x28, 0x08, 0x52, 0x04, 0x49, 0x73, 0x43, 0x41, 0x12, 0x16, 0x0a, 0x06, 0x49, 0x73, 0x73,
0x75, 0x65, 0x72, 0x18, 0x09, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x06, 0x49, 0x73, 0x73, 0x75, 0x65,
0x72, 0x42, 0x20, 0x5a, 0x1e, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f,
0x73, 0x6c, 0x61, 0x63, 0x6b, 0x68, 0x71, 0x2f, 0x6e, 0x65, 0x62, 0x75, 0x6c, 0x61, 0x2f, 0x63,
0x65, 0x72, 0x74, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
}
var (
file_cert_proto_rawDescOnce sync.Once
file_cert_proto_rawDescData = file_cert_proto_rawDesc
)
func file_cert_proto_rawDescGZIP() []byte {
file_cert_proto_rawDescOnce.Do(func() {
file_cert_proto_rawDescData = protoimpl.X.CompressGZIP(file_cert_proto_rawDescData)
})
return file_cert_proto_rawDescData
}
var file_cert_proto_msgTypes = make([]protoimpl.MessageInfo, 2)
var file_cert_proto_goTypes = []interface{}{
(*RawNebulaCertificate)(nil), // 0: cert.RawNebulaCertificate
(*RawNebulaCertificateDetails)(nil), // 1: cert.RawNebulaCertificateDetails
}
var file_cert_proto_depIdxs = []int32{
1, // 0: cert.RawNebulaCertificate.Details:type_name -> cert.RawNebulaCertificateDetails
1, // [1:1] is the sub-list for method output_type
1, // [1:1] is the sub-list for method input_type
1, // [1:1] is the sub-list for extension type_name
1, // [1:1] is the sub-list for extension extendee
0, // [0:1] is the sub-list for field type_name
}
func init() { file_cert_proto_init() }
func file_cert_proto_init() {
if File_cert_proto != nil {
return
}
if !protoimpl.UnsafeEnabled {
file_cert_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*RawNebulaCertificate); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_cert_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*RawNebulaCertificateDetails); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
}
type x struct{}
out := protoimpl.TypeBuilder{
File: protoimpl.DescBuilder{
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: file_cert_proto_rawDesc,
NumEnums: 0,
NumMessages: 2,
NumExtensions: 0,
NumServices: 0,
},
GoTypes: file_cert_proto_goTypes,
DependencyIndexes: file_cert_proto_depIdxs,
MessageInfos: file_cert_proto_msgTypes,
}.Build()
File_cert_proto = out.File
file_cert_proto_rawDesc = nil
file_cert_proto_goTypes = nil
file_cert_proto_depIdxs = nil
}

View File

@@ -5,11 +5,6 @@ option go_package = "github.com/slackhq/nebula/cert";
//import "google/protobuf/timestamp.proto";
enum Curve {
CURVE25519 = 0;
P256 = 1;
}
message RawNebulaCertificate {
RawNebulaCertificateDetails Details = 1;
bytes Signature = 2;
@@ -31,24 +26,4 @@ message RawNebulaCertificateDetails {
// sha-256 of the issuer certificate, if this field is blank the cert is self-signed
bytes Issuer = 9;
Curve curve = 100;
}
message RawNebulaEncryptedData {
RawNebulaEncryptionMetadata EncryptionMetadata = 1;
bytes Ciphertext = 2;
}
message RawNebulaEncryptionMetadata {
string EncryptionAlgorithm = 1;
RawNebulaArgon2Parameters Argon2Parameters = 2;
}
message RawNebulaArgon2Parameters {
int32 version = 1; // rune in Go
uint32 memory = 2;
uint32 parallelism = 4; // uint8 in Go
uint32 iterations = 3;
bytes salt = 5;
}

874
cert/cert_test.go Normal file
View File

@@ -0,0 +1,874 @@
package cert
import (
"crypto/rand"
"fmt"
"io"
"net"
"testing"
"time"
"github.com/golang/protobuf/proto"
"github.com/slackhq/nebula/util"
"github.com/stretchr/testify/assert"
"golang.org/x/crypto/curve25519"
"golang.org/x/crypto/ed25519"
)
func TestMarshalingNebulaCertificate(t *testing.T) {
before := time.Now().Add(time.Second * -60).Round(time.Second)
after := time.Now().Add(time.Second * 60).Round(time.Second)
pubKey := []byte("1234567890abcedfghij1234567890ab")
nc := NebulaCertificate{
Details: NebulaCertificateDetails{
Name: "testing",
Ips: []*net.IPNet{
{IP: net.ParseIP("10.1.1.1"), Mask: net.IPMask(net.ParseIP("255.255.255.0"))},
{IP: net.ParseIP("10.1.1.2"), Mask: net.IPMask(net.ParseIP("255.255.0.0"))},
{IP: net.ParseIP("10.1.1.3"), Mask: net.IPMask(net.ParseIP("255.0.255.0"))},
},
Subnets: []*net.IPNet{
{IP: net.ParseIP("9.1.1.1"), Mask: net.IPMask(net.ParseIP("255.0.255.0"))},
{IP: net.ParseIP("9.1.1.2"), Mask: net.IPMask(net.ParseIP("255.255.255.0"))},
{IP: net.ParseIP("9.1.1.3"), Mask: net.IPMask(net.ParseIP("255.255.0.0"))},
},
Groups: []string{"test-group1", "test-group2", "test-group3"},
NotBefore: before,
NotAfter: after,
PublicKey: pubKey,
IsCA: false,
Issuer: "1234567890abcedfghij1234567890ab",
},
Signature: []byte("1234567890abcedfghij1234567890ab"),
}
b, err := nc.Marshal()
assert.Nil(t, err)
//t.Log("Cert size:", len(b))
nc2, err := UnmarshalNebulaCertificate(b)
assert.Nil(t, err)
assert.Equal(t, nc.Signature, nc2.Signature)
assert.Equal(t, nc.Details.Name, nc2.Details.Name)
assert.Equal(t, nc.Details.NotBefore, nc2.Details.NotBefore)
assert.Equal(t, nc.Details.NotAfter, nc2.Details.NotAfter)
assert.Equal(t, nc.Details.PublicKey, nc2.Details.PublicKey)
assert.Equal(t, nc.Details.IsCA, nc2.Details.IsCA)
// IP byte arrays can be 4 or 16 in length so we have to go this route
assert.Equal(t, len(nc.Details.Ips), len(nc2.Details.Ips))
for i, wIp := range nc.Details.Ips {
assert.Equal(t, wIp.String(), nc2.Details.Ips[i].String())
}
assert.Equal(t, len(nc.Details.Subnets), len(nc2.Details.Subnets))
for i, wIp := range nc.Details.Subnets {
assert.Equal(t, wIp.String(), nc2.Details.Subnets[i].String())
}
assert.EqualValues(t, nc.Details.Groups, nc2.Details.Groups)
}
func TestNebulaCertificate_Sign(t *testing.T) {
before := time.Now().Add(time.Second * -60).Round(time.Second)
after := time.Now().Add(time.Second * 60).Round(time.Second)
pubKey := []byte("1234567890abcedfghij1234567890ab")
nc := NebulaCertificate{
Details: NebulaCertificateDetails{
Name: "testing",
Ips: []*net.IPNet{
{IP: net.ParseIP("10.1.1.1"), Mask: net.IPMask(net.ParseIP("255.255.255.0"))},
{IP: net.ParseIP("10.1.1.2"), Mask: net.IPMask(net.ParseIP("255.255.0.0"))},
{IP: net.ParseIP("10.1.1.3"), Mask: net.IPMask(net.ParseIP("255.0.255.0"))},
},
Subnets: []*net.IPNet{
{IP: net.ParseIP("9.1.1.1"), Mask: net.IPMask(net.ParseIP("255.0.255.0"))},
{IP: net.ParseIP("9.1.1.2"), Mask: net.IPMask(net.ParseIP("255.255.255.0"))},
{IP: net.ParseIP("9.1.1.3"), Mask: net.IPMask(net.ParseIP("255.255.0.0"))},
},
Groups: []string{"test-group1", "test-group2", "test-group3"},
NotBefore: before,
NotAfter: after,
PublicKey: pubKey,
IsCA: false,
Issuer: "1234567890abcedfghij1234567890ab",
},
}
pub, priv, err := ed25519.GenerateKey(rand.Reader)
assert.Nil(t, err)
assert.False(t, nc.CheckSignature(pub))
assert.Nil(t, nc.Sign(priv))
assert.True(t, nc.CheckSignature(pub))
_, err = nc.Marshal()
assert.Nil(t, err)
//t.Log("Cert size:", len(b))
}
func TestNebulaCertificate_Expired(t *testing.T) {
nc := NebulaCertificate{
Details: NebulaCertificateDetails{
NotBefore: time.Now().Add(time.Second * -60).Round(time.Second),
NotAfter: time.Now().Add(time.Second * 60).Round(time.Second),
},
}
assert.True(t, nc.Expired(time.Now().Add(time.Hour)))
assert.True(t, nc.Expired(time.Now().Add(-time.Hour)))
assert.False(t, nc.Expired(time.Now()))
}
func TestNebulaCertificate_MarshalJSON(t *testing.T) {
time.Local = time.UTC
pubKey := []byte("1234567890abcedfghij1234567890ab")
nc := NebulaCertificate{
Details: NebulaCertificateDetails{
Name: "testing",
Ips: []*net.IPNet{
{IP: net.ParseIP("10.1.1.1"), Mask: net.IPMask(net.ParseIP("255.255.255.0"))},
{IP: net.ParseIP("10.1.1.2"), Mask: net.IPMask(net.ParseIP("255.255.0.0"))},
{IP: net.ParseIP("10.1.1.3"), Mask: net.IPMask(net.ParseIP("255.0.255.0"))},
},
Subnets: []*net.IPNet{
{IP: net.ParseIP("9.1.1.1"), Mask: net.IPMask(net.ParseIP("255.0.255.0"))},
{IP: net.ParseIP("9.1.1.2"), Mask: net.IPMask(net.ParseIP("255.255.255.0"))},
{IP: net.ParseIP("9.1.1.3"), Mask: net.IPMask(net.ParseIP("255.255.0.0"))},
},
Groups: []string{"test-group1", "test-group2", "test-group3"},
NotBefore: time.Date(1, 0, 0, 1, 0, 0, 0, time.UTC),
NotAfter: time.Date(1, 0, 0, 2, 0, 0, 0, time.UTC),
PublicKey: pubKey,
IsCA: false,
Issuer: "1234567890abcedfghij1234567890ab",
},
Signature: []byte("1234567890abcedfghij1234567890ab"),
}
b, err := nc.MarshalJSON()
assert.Nil(t, err)
assert.Equal(
t,
"{\"details\":{\"groups\":[\"test-group1\",\"test-group2\",\"test-group3\"],\"ips\":[\"10.1.1.1/24\",\"10.1.1.2/16\",\"10.1.1.3/ff00ff00\"],\"isCa\":false,\"issuer\":\"1234567890abcedfghij1234567890ab\",\"name\":\"testing\",\"notAfter\":\"0000-11-30T02:00:00Z\",\"notBefore\":\"0000-11-30T01:00:00Z\",\"publicKey\":\"313233343536373839306162636564666768696a313233343536373839306162\",\"subnets\":[\"9.1.1.1/ff00ff00\",\"9.1.1.2/24\",\"9.1.1.3/16\"]},\"fingerprint\":\"26cb1c30ad7872c804c166b5150fa372f437aa3856b04edb4334b4470ec728e4\",\"signature\":\"313233343536373839306162636564666768696a313233343536373839306162\"}",
string(b),
)
}
func TestNebulaCertificate_Verify(t *testing.T) {
ca, _, caKey, err := newTestCaCert(time.Now(), time.Now().Add(10*time.Minute), []*net.IPNet{}, []*net.IPNet{}, []string{})
assert.Nil(t, err)
c, _, _, err := newTestCert(ca, caKey, time.Now(), time.Now().Add(5*time.Minute), []*net.IPNet{}, []*net.IPNet{}, []string{})
assert.Nil(t, err)
h, err := ca.Sha256Sum()
assert.Nil(t, err)
caPool := NewCAPool()
caPool.CAs[h] = ca
f, err := c.Sha256Sum()
assert.Nil(t, err)
caPool.BlocklistFingerprint(f)
v, err := c.Verify(time.Now(), caPool)
assert.False(t, v)
assert.EqualError(t, err, "certificate has been blocked")
caPool.ResetCertBlocklist()
v, err = c.Verify(time.Now(), caPool)
assert.True(t, v)
assert.Nil(t, err)
v, err = c.Verify(time.Now().Add(time.Hour*1000), caPool)
assert.False(t, v)
assert.EqualError(t, err, "root certificate is expired")
c, _, _, err = newTestCert(ca, caKey, time.Time{}, time.Time{}, []*net.IPNet{}, []*net.IPNet{}, []string{})
assert.Nil(t, err)
v, err = c.Verify(time.Now().Add(time.Minute*6), caPool)
assert.False(t, v)
assert.EqualError(t, err, "certificate is expired")
// Test group assertion
ca, _, caKey, err = newTestCaCert(time.Now(), time.Now().Add(10*time.Minute), []*net.IPNet{}, []*net.IPNet{}, []string{"test1", "test2"})
assert.Nil(t, err)
caPem, err := ca.MarshalToPEM()
assert.Nil(t, err)
caPool = NewCAPool()
caPool.AddCACertificate(caPem)
c, _, _, err = newTestCert(ca, caKey, time.Now(), time.Now().Add(5*time.Minute), []*net.IPNet{}, []*net.IPNet{}, []string{"test1", "bad"})
assert.Nil(t, err)
v, err = c.Verify(time.Now(), caPool)
assert.False(t, v)
assert.EqualError(t, err, "certificate contained a group not present on the signing ca: bad")
c, _, _, err = newTestCert(ca, caKey, time.Now(), time.Now().Add(5*time.Minute), []*net.IPNet{}, []*net.IPNet{}, []string{"test1"})
assert.Nil(t, err)
v, err = c.Verify(time.Now(), caPool)
assert.True(t, v)
assert.Nil(t, err)
}
func TestNebulaCertificate_Verify_IPs(t *testing.T) {
_, caIp1, _ := net.ParseCIDR("10.0.0.0/16")
_, caIp2, _ := net.ParseCIDR("192.168.0.0/24")
ca, _, caKey, err := newTestCaCert(time.Now(), time.Now().Add(10*time.Minute), []*net.IPNet{caIp1, caIp2}, []*net.IPNet{}, []string{"test"})
assert.Nil(t, err)
caPem, err := ca.MarshalToPEM()
assert.Nil(t, err)
caPool := NewCAPool()
caPool.AddCACertificate(caPem)
// ip is outside the network
cIp1 := &net.IPNet{IP: net.ParseIP("10.1.0.0"), Mask: []byte{255, 255, 255, 0}}
cIp2 := &net.IPNet{IP: net.ParseIP("192.168.0.1"), Mask: []byte{255, 255, 0, 0}}
c, _, _, err := newTestCert(ca, caKey, time.Now(), time.Now().Add(5*time.Minute), []*net.IPNet{cIp1, cIp2}, []*net.IPNet{}, []string{"test"})
assert.Nil(t, err)
v, err := c.Verify(time.Now(), caPool)
assert.False(t, v)
assert.EqualError(t, err, "certificate contained an ip assignment outside the limitations of the signing ca: 10.1.0.0/24")
// ip is outside the network reversed order of above
cIp1 = &net.IPNet{IP: net.ParseIP("192.168.0.1"), Mask: []byte{255, 255, 255, 0}}
cIp2 = &net.IPNet{IP: net.ParseIP("10.1.0.0"), Mask: []byte{255, 255, 255, 0}}
c, _, _, err = newTestCert(ca, caKey, time.Now(), time.Now().Add(5*time.Minute), []*net.IPNet{cIp1, cIp2}, []*net.IPNet{}, []string{"test"})
assert.Nil(t, err)
v, err = c.Verify(time.Now(), caPool)
assert.False(t, v)
assert.EqualError(t, err, "certificate contained an ip assignment outside the limitations of the signing ca: 10.1.0.0/24")
// ip is within the network but mask is outside
cIp1 = &net.IPNet{IP: net.ParseIP("10.0.1.0"), Mask: []byte{255, 254, 0, 0}}
cIp2 = &net.IPNet{IP: net.ParseIP("192.168.0.1"), Mask: []byte{255, 255, 255, 0}}
c, _, _, err = newTestCert(ca, caKey, time.Now(), time.Now().Add(5*time.Minute), []*net.IPNet{cIp1, cIp2}, []*net.IPNet{}, []string{"test"})
assert.Nil(t, err)
v, err = c.Verify(time.Now(), caPool)
assert.False(t, v)
assert.EqualError(t, err, "certificate contained an ip assignment outside the limitations of the signing ca: 10.0.1.0/15")
// ip is within the network but mask is outside reversed order of above
cIp1 = &net.IPNet{IP: net.ParseIP("192.168.0.1"), Mask: []byte{255, 255, 255, 0}}
cIp2 = &net.IPNet{IP: net.ParseIP("10.0.1.0"), Mask: []byte{255, 254, 0, 0}}
c, _, _, err = newTestCert(ca, caKey, time.Now(), time.Now().Add(5*time.Minute), []*net.IPNet{cIp1, cIp2}, []*net.IPNet{}, []string{"test"})
assert.Nil(t, err)
v, err = c.Verify(time.Now(), caPool)
assert.False(t, v)
assert.EqualError(t, err, "certificate contained an ip assignment outside the limitations of the signing ca: 10.0.1.0/15")
// ip and mask are within the network
cIp1 = &net.IPNet{IP: net.ParseIP("10.0.1.0"), Mask: []byte{255, 255, 0, 0}}
cIp2 = &net.IPNet{IP: net.ParseIP("192.168.0.1"), Mask: []byte{255, 255, 255, 128}}
c, _, _, err = newTestCert(ca, caKey, time.Now(), time.Now().Add(5*time.Minute), []*net.IPNet{cIp1, cIp2}, []*net.IPNet{}, []string{"test"})
assert.Nil(t, err)
v, err = c.Verify(time.Now(), caPool)
assert.True(t, v)
assert.Nil(t, err)
// Exact matches
c, _, _, err = newTestCert(ca, caKey, time.Now(), time.Now().Add(5*time.Minute), []*net.IPNet{caIp1, caIp2}, []*net.IPNet{}, []string{"test"})
assert.Nil(t, err)
v, err = c.Verify(time.Now(), caPool)
assert.True(t, v)
assert.Nil(t, err)
// Exact matches reversed
c, _, _, err = newTestCert(ca, caKey, time.Now(), time.Now().Add(5*time.Minute), []*net.IPNet{caIp2, caIp1}, []*net.IPNet{}, []string{"test"})
assert.Nil(t, err)
v, err = c.Verify(time.Now(), caPool)
assert.True(t, v)
assert.Nil(t, err)
// Exact matches reversed with just 1
c, _, _, err = newTestCert(ca, caKey, time.Now(), time.Now().Add(5*time.Minute), []*net.IPNet{caIp1}, []*net.IPNet{}, []string{"test"})
assert.Nil(t, err)
v, err = c.Verify(time.Now(), caPool)
assert.True(t, v)
assert.Nil(t, err)
}
func TestNebulaCertificate_Verify_Subnets(t *testing.T) {
_, caIp1, _ := net.ParseCIDR("10.0.0.0/16")
_, caIp2, _ := net.ParseCIDR("192.168.0.0/24")
ca, _, caKey, err := newTestCaCert(time.Now(), time.Now().Add(10*time.Minute), []*net.IPNet{}, []*net.IPNet{caIp1, caIp2}, []string{"test"})
assert.Nil(t, err)
caPem, err := ca.MarshalToPEM()
assert.Nil(t, err)
caPool := NewCAPool()
caPool.AddCACertificate(caPem)
// ip is outside the network
cIp1 := &net.IPNet{IP: net.ParseIP("10.1.0.0"), Mask: []byte{255, 255, 255, 0}}
cIp2 := &net.IPNet{IP: net.ParseIP("192.168.0.1"), Mask: []byte{255, 255, 0, 0}}
c, _, _, err := newTestCert(ca, caKey, time.Now(), time.Now().Add(5*time.Minute), []*net.IPNet{}, []*net.IPNet{cIp1, cIp2}, []string{"test"})
assert.Nil(t, err)
v, err := c.Verify(time.Now(), caPool)
assert.False(t, v)
assert.EqualError(t, err, "certificate contained a subnet assignment outside the limitations of the signing ca: 10.1.0.0/24")
// ip is outside the network reversed order of above
cIp1 = &net.IPNet{IP: net.ParseIP("192.168.0.1"), Mask: []byte{255, 255, 255, 0}}
cIp2 = &net.IPNet{IP: net.ParseIP("10.1.0.0"), Mask: []byte{255, 255, 255, 0}}
c, _, _, err = newTestCert(ca, caKey, time.Now(), time.Now().Add(5*time.Minute), []*net.IPNet{}, []*net.IPNet{cIp1, cIp2}, []string{"test"})
assert.Nil(t, err)
v, err = c.Verify(time.Now(), caPool)
assert.False(t, v)
assert.EqualError(t, err, "certificate contained a subnet assignment outside the limitations of the signing ca: 10.1.0.0/24")
// ip is within the network but mask is outside
cIp1 = &net.IPNet{IP: net.ParseIP("10.0.1.0"), Mask: []byte{255, 254, 0, 0}}
cIp2 = &net.IPNet{IP: net.ParseIP("192.168.0.1"), Mask: []byte{255, 255, 255, 0}}
c, _, _, err = newTestCert(ca, caKey, time.Now(), time.Now().Add(5*time.Minute), []*net.IPNet{}, []*net.IPNet{cIp1, cIp2}, []string{"test"})
assert.Nil(t, err)
v, err = c.Verify(time.Now(), caPool)
assert.False(t, v)
assert.EqualError(t, err, "certificate contained a subnet assignment outside the limitations of the signing ca: 10.0.1.0/15")
// ip is within the network but mask is outside reversed order of above
cIp1 = &net.IPNet{IP: net.ParseIP("192.168.0.1"), Mask: []byte{255, 255, 255, 0}}
cIp2 = &net.IPNet{IP: net.ParseIP("10.0.1.0"), Mask: []byte{255, 254, 0, 0}}
c, _, _, err = newTestCert(ca, caKey, time.Now(), time.Now().Add(5*time.Minute), []*net.IPNet{}, []*net.IPNet{cIp1, cIp2}, []string{"test"})
assert.Nil(t, err)
v, err = c.Verify(time.Now(), caPool)
assert.False(t, v)
assert.EqualError(t, err, "certificate contained a subnet assignment outside the limitations of the signing ca: 10.0.1.0/15")
// ip and mask are within the network
cIp1 = &net.IPNet{IP: net.ParseIP("10.0.1.0"), Mask: []byte{255, 255, 0, 0}}
cIp2 = &net.IPNet{IP: net.ParseIP("192.168.0.1"), Mask: []byte{255, 255, 255, 128}}
c, _, _, err = newTestCert(ca, caKey, time.Now(), time.Now().Add(5*time.Minute), []*net.IPNet{}, []*net.IPNet{cIp1, cIp2}, []string{"test"})
assert.Nil(t, err)
v, err = c.Verify(time.Now(), caPool)
assert.True(t, v)
assert.Nil(t, err)
// Exact matches
c, _, _, err = newTestCert(ca, caKey, time.Now(), time.Now().Add(5*time.Minute), []*net.IPNet{}, []*net.IPNet{caIp1, caIp2}, []string{"test"})
assert.Nil(t, err)
v, err = c.Verify(time.Now(), caPool)
assert.True(t, v)
assert.Nil(t, err)
// Exact matches reversed
c, _, _, err = newTestCert(ca, caKey, time.Now(), time.Now().Add(5*time.Minute), []*net.IPNet{}, []*net.IPNet{caIp2, caIp1}, []string{"test"})
assert.Nil(t, err)
v, err = c.Verify(time.Now(), caPool)
assert.True(t, v)
assert.Nil(t, err)
// Exact matches reversed with just 1
c, _, _, err = newTestCert(ca, caKey, time.Now(), time.Now().Add(5*time.Minute), []*net.IPNet{}, []*net.IPNet{caIp1}, []string{"test"})
assert.Nil(t, err)
v, err = c.Verify(time.Now(), caPool)
assert.True(t, v)
assert.Nil(t, err)
}
func TestNebulaCertificate_VerifyPrivateKey(t *testing.T) {
ca, _, caKey, err := newTestCaCert(time.Time{}, time.Time{}, []*net.IPNet{}, []*net.IPNet{}, []string{})
assert.Nil(t, err)
err = ca.VerifyPrivateKey(caKey)
assert.Nil(t, err)
_, _, caKey2, err := newTestCaCert(time.Time{}, time.Time{}, []*net.IPNet{}, []*net.IPNet{}, []string{})
assert.Nil(t, err)
err = ca.VerifyPrivateKey(caKey2)
assert.NotNil(t, err)
c, _, priv, err := newTestCert(ca, caKey, time.Time{}, time.Time{}, []*net.IPNet{}, []*net.IPNet{}, []string{})
err = c.VerifyPrivateKey(priv)
assert.Nil(t, err)
_, priv2 := x25519Keypair()
err = c.VerifyPrivateKey(priv2)
assert.NotNil(t, err)
}
func TestNewCAPoolFromBytes(t *testing.T) {
noNewLines := `
# Current provisional, Remove once everything moves over to the real root.
-----BEGIN NEBULA CERTIFICATE-----
CkAKDm5lYnVsYSByb290IGNhKJfap9AFMJfg1+YGOiCUQGByMuNRhIlQBOyzXWbL
vcKBwDhov900phEfJ5DN3kABEkDCq5R8qBiu8sl54yVfgRcQXEDt3cHr8UTSLszv
bzBEr00kERQxxTzTsH8cpYEgRoipvmExvg8WP8NdAJEYJosB
-----END NEBULA CERTIFICATE-----
# root-ca01
-----BEGIN NEBULA CERTIFICATE-----
CkMKEW5lYnVsYSByb290IGNhIDAxKJL2u9EFMJL86+cGOiDPXMH4oU6HZTk/CqTG
BVG+oJpAoqokUBbI4U0N8CSfpUABEkB/Pm5A2xyH/nc8mg/wvGUWG3pZ7nHzaDMf
8/phAUt+FLzqTECzQKisYswKvE3pl9mbEYKbOdIHrxdIp95mo4sF
-----END NEBULA CERTIFICATE-----
`
withNewLines := `
# Current provisional, Remove once everything moves over to the real root.
-----BEGIN NEBULA CERTIFICATE-----
CkAKDm5lYnVsYSByb290IGNhKJfap9AFMJfg1+YGOiCUQGByMuNRhIlQBOyzXWbL
vcKBwDhov900phEfJ5DN3kABEkDCq5R8qBiu8sl54yVfgRcQXEDt3cHr8UTSLszv
bzBEr00kERQxxTzTsH8cpYEgRoipvmExvg8WP8NdAJEYJosB
-----END NEBULA CERTIFICATE-----
# root-ca01
-----BEGIN NEBULA CERTIFICATE-----
CkMKEW5lYnVsYSByb290IGNhIDAxKJL2u9EFMJL86+cGOiDPXMH4oU6HZTk/CqTG
BVG+oJpAoqokUBbI4U0N8CSfpUABEkB/Pm5A2xyH/nc8mg/wvGUWG3pZ7nHzaDMf
8/phAUt+FLzqTECzQKisYswKvE3pl9mbEYKbOdIHrxdIp95mo4sF
-----END NEBULA CERTIFICATE-----
`
rootCA := NebulaCertificate{
Details: NebulaCertificateDetails{
Name: "nebula root ca",
},
}
rootCA01 := NebulaCertificate{
Details: NebulaCertificateDetails{
Name: "nebula root ca 01",
},
}
p, err := NewCAPoolFromBytes([]byte(noNewLines))
assert.Nil(t, err)
assert.Equal(t, p.CAs[string("c9bfaf7ce8e84b2eeda2e27b469f4b9617bde192efd214b68891ecda6ed49522")].Details.Name, rootCA.Details.Name)
assert.Equal(t, p.CAs[string("5c9c3f23e7ee7fe97637cbd3a0a5b854154d1d9aaaf7b566a51f4a88f76b64cd")].Details.Name, rootCA01.Details.Name)
pp, err := NewCAPoolFromBytes([]byte(withNewLines))
assert.Nil(t, err)
assert.Equal(t, pp.CAs[string("c9bfaf7ce8e84b2eeda2e27b469f4b9617bde192efd214b68891ecda6ed49522")].Details.Name, rootCA.Details.Name)
assert.Equal(t, pp.CAs[string("5c9c3f23e7ee7fe97637cbd3a0a5b854154d1d9aaaf7b566a51f4a88f76b64cd")].Details.Name, rootCA01.Details.Name)
}
func appendByteSlices(b ...[]byte) []byte {
retSlice := []byte{}
for _, v := range b {
retSlice = append(retSlice, v...)
}
return retSlice
}
func TestUnmrshalCertPEM(t *testing.T) {
goodCert := []byte(`
# A good cert
-----BEGIN NEBULA CERTIFICATE-----
CkAKDm5lYnVsYSByb290IGNhKJfap9AFMJfg1+YGOiCUQGByMuNRhIlQBOyzXWbL
vcKBwDhov900phEfJ5DN3kABEkDCq5R8qBiu8sl54yVfgRcQXEDt3cHr8UTSLszv
bzBEr00kERQxxTzTsH8cpYEgRoipvmExvg8WP8NdAJEYJosB
-----END NEBULA CERTIFICATE-----
`)
badBanner := []byte(`# A bad banner
-----BEGIN NOT A NEBULA CERTIFICATE-----
CkAKDm5lYnVsYSByb290IGNhKJfap9AFMJfg1+YGOiCUQGByMuNRhIlQBOyzXWbL
vcKBwDhov900phEfJ5DN3kABEkDCq5R8qBiu8sl54yVfgRcQXEDt3cHr8UTSLszv
bzBEr00kERQxxTzTsH8cpYEgRoipvmExvg8WP8NdAJEYJosB
-----END NOT A NEBULA CERTIFICATE-----
`)
invalidPem := []byte(`# Not a valid PEM format
-BEGIN NEBULA CERTIFICATE-----
CkAKDm5lYnVsYSByb290IGNhKJfap9AFMJfg1+YGOiCUQGByMuNRhIlQBOyzXWbL
vcKBwDhov900phEfJ5DN3kABEkDCq5R8qBiu8sl54yVfgRcQXEDt3cHr8UTSLszv
bzBEr00kERQxxTzTsH8cpYEgRoipvmExvg8WP8NdAJEYJosB
-END NEBULA CERTIFICATE----`)
certBundle := appendByteSlices(goodCert, badBanner, invalidPem)
// Success test case
cert, rest, err := UnmarshalNebulaCertificateFromPEM(certBundle)
assert.NotNil(t, cert)
assert.Equal(t, rest, append(badBanner, invalidPem...))
assert.Nil(t, err)
// Fail due to invalid banner.
cert, rest, err = UnmarshalNebulaCertificateFromPEM(rest)
assert.Nil(t, cert)
assert.Equal(t, rest, invalidPem)
assert.EqualError(t, err, "bytes did not contain a proper nebula certificate banner")
// Fail due to invalid PEM format, because
// it's missing the requisite pre-encapsulation boundary.
cert, rest, err = UnmarshalNebulaCertificateFromPEM(rest)
assert.Nil(t, cert)
assert.Equal(t, rest, invalidPem)
assert.EqualError(t, err, "input did not contain a valid PEM encoded block")
}
func TestUnmarshalEd25519PrivateKey(t *testing.T) {
privKey := []byte(`# A good key
-----BEGIN NEBULA ED25519 PRIVATE KEY-----
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==
-----END NEBULA ED25519 PRIVATE KEY-----
`)
shortKey := []byte(`# A short key
-----BEGIN NEBULA ED25519 PRIVATE KEY-----
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
-----END NEBULA ED25519 PRIVATE KEY-----
`)
invalidBanner := []byte(`# Invalid banner
-----BEGIN NOT A NEBULA PRIVATE KEY-----
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==
-----END NOT A NEBULA PRIVATE KEY-----
`)
invalidPem := []byte(`# Not a valid PEM format
-BEGIN NEBULA ED25519 PRIVATE KEY-----
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==
-END NEBULA ED25519 PRIVATE KEY-----`)
keyBundle := appendByteSlices(privKey, shortKey, invalidBanner, invalidPem)
// Success test case
k, rest, err := UnmarshalEd25519PrivateKey(keyBundle)
assert.Len(t, k, 64)
assert.Equal(t, rest, appendByteSlices(shortKey, invalidBanner, invalidPem))
assert.Nil(t, err)
// Fail due to short key
k, rest, err = UnmarshalEd25519PrivateKey(rest)
assert.Nil(t, k)
assert.Equal(t, rest, appendByteSlices(invalidBanner, invalidPem))
assert.EqualError(t, err, "key was not 64 bytes, is invalid ed25519 private key")
// Fail due to invalid banner
k, rest, err = UnmarshalEd25519PrivateKey(rest)
assert.Nil(t, k)
assert.Equal(t, rest, invalidPem)
assert.EqualError(t, err, "bytes did not contain a proper nebula Ed25519 private key banner")
// Fail due to invalid PEM format, because
// it's missing the requisite pre-encapsulation boundary.
k, rest, err = UnmarshalEd25519PrivateKey(rest)
assert.Nil(t, k)
assert.Equal(t, rest, invalidPem)
assert.EqualError(t, err, "input did not contain a valid PEM encoded block")
}
func TestUnmarshalX25519PrivateKey(t *testing.T) {
privKey := []byte(`# A good key
-----BEGIN NEBULA X25519 PRIVATE KEY-----
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
-----END NEBULA X25519 PRIVATE KEY-----
`)
shortKey := []byte(`# A short key
-----BEGIN NEBULA X25519 PRIVATE KEY-----
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==
-----END NEBULA X25519 PRIVATE KEY-----
`)
invalidBanner := []byte(`# Invalid banner
-----BEGIN NOT A NEBULA PRIVATE KEY-----
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
-----END NOT A NEBULA PRIVATE KEY-----
`)
invalidPem := []byte(`# Not a valid PEM format
-BEGIN NEBULA X25519 PRIVATE KEY-----
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
-END NEBULA X25519 PRIVATE KEY-----`)
keyBundle := appendByteSlices(privKey, shortKey, invalidBanner, invalidPem)
// Success test case
k, rest, err := UnmarshalX25519PrivateKey(keyBundle)
assert.Len(t, k, 32)
assert.Equal(t, rest, appendByteSlices(shortKey, invalidBanner, invalidPem))
assert.Nil(t, err)
// Fail due to short key
k, rest, err = UnmarshalX25519PrivateKey(rest)
assert.Nil(t, k)
assert.Equal(t, rest, appendByteSlices(invalidBanner, invalidPem))
assert.EqualError(t, err, "key was not 32 bytes, is invalid X25519 private key")
// Fail due to invalid banner
k, rest, err = UnmarshalX25519PrivateKey(rest)
assert.Nil(t, k)
assert.Equal(t, rest, invalidPem)
assert.EqualError(t, err, "bytes did not contain a proper nebula X25519 private key banner")
// Fail due to invalid PEM format, because
// it's missing the requisite pre-encapsulation boundary.
k, rest, err = UnmarshalX25519PrivateKey(rest)
assert.Nil(t, k)
assert.Equal(t, rest, invalidPem)
assert.EqualError(t, err, "input did not contain a valid PEM encoded block")
}
func TestUnmarshalEd25519PublicKey(t *testing.T) {
pubKey := []byte(`# A good key
-----BEGIN NEBULA ED25519 PUBLIC KEY-----
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
-----END NEBULA ED25519 PUBLIC KEY-----
`)
shortKey := []byte(`# A short key
-----BEGIN NEBULA ED25519 PUBLIC KEY-----
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==
-----END NEBULA ED25519 PUBLIC KEY-----
`)
invalidBanner := []byte(`# Invalid banner
-----BEGIN NOT A NEBULA PUBLIC KEY-----
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
-----END NOT A NEBULA PUBLIC KEY-----
`)
invalidPem := []byte(`# Not a valid PEM format
-BEGIN NEBULA ED25519 PUBLIC KEY-----
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
-END NEBULA ED25519 PUBLIC KEY-----`)
keyBundle := appendByteSlices(pubKey, shortKey, invalidBanner, invalidPem)
// Success test case
k, rest, err := UnmarshalEd25519PublicKey(keyBundle)
assert.Equal(t, len(k), 32)
assert.Nil(t, err)
assert.Equal(t, rest, appendByteSlices(shortKey, invalidBanner, invalidPem))
// Fail due to short key
k, rest, err = UnmarshalEd25519PublicKey(rest)
assert.Nil(t, k)
assert.Equal(t, rest, appendByteSlices(invalidBanner, invalidPem))
assert.EqualError(t, err, "key was not 32 bytes, is invalid ed25519 public key")
// Fail due to invalid banner
k, rest, err = UnmarshalEd25519PublicKey(rest)
assert.Nil(t, k)
assert.EqualError(t, err, "bytes did not contain a proper nebula Ed25519 public key banner")
assert.Equal(t, rest, invalidPem)
// Fail due to invalid PEM format, because
// it's missing the requisite pre-encapsulation boundary.
k, rest, err = UnmarshalEd25519PublicKey(rest)
assert.Nil(t, k)
assert.Equal(t, rest, invalidPem)
assert.EqualError(t, err, "input did not contain a valid PEM encoded block")
}
func TestUnmarshalX25519PublicKey(t *testing.T) {
pubKey := []byte(`# A good key
-----BEGIN NEBULA X25519 PUBLIC KEY-----
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
-----END NEBULA X25519 PUBLIC KEY-----
`)
shortKey := []byte(`# A short key
-----BEGIN NEBULA X25519 PUBLIC KEY-----
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==
-----END NEBULA X25519 PUBLIC KEY-----
`)
invalidBanner := []byte(`# Invalid banner
-----BEGIN NOT A NEBULA PUBLIC KEY-----
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
-----END NOT A NEBULA PUBLIC KEY-----
`)
invalidPem := []byte(`# Not a valid PEM format
-BEGIN NEBULA X25519 PUBLIC KEY-----
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
-END NEBULA X25519 PUBLIC KEY-----`)
keyBundle := appendByteSlices(pubKey, shortKey, invalidBanner, invalidPem)
// Success test case
k, rest, err := UnmarshalX25519PublicKey(keyBundle)
assert.Equal(t, len(k), 32)
assert.Nil(t, err)
assert.Equal(t, rest, appendByteSlices(shortKey, invalidBanner, invalidPem))
// Fail due to short key
k, rest, err = UnmarshalX25519PublicKey(rest)
assert.Nil(t, k)
assert.Equal(t, rest, appendByteSlices(invalidBanner, invalidPem))
assert.EqualError(t, err, "key was not 32 bytes, is invalid X25519 public key")
// Fail due to invalid banner
k, rest, err = UnmarshalX25519PublicKey(rest)
assert.Nil(t, k)
assert.EqualError(t, err, "bytes did not contain a proper nebula X25519 public key banner")
assert.Equal(t, rest, invalidPem)
// Fail due to invalid PEM format, because
// it's missing the requisite pre-encapsulation boundary.
k, rest, err = UnmarshalX25519PublicKey(rest)
assert.Nil(t, k)
assert.Equal(t, rest, invalidPem)
assert.EqualError(t, err, "input did not contain a valid PEM encoded block")
}
// Ensure that upgrading the protobuf library does not change how certificates
// are marshalled, since this would break signature verification
func TestMarshalingNebulaCertificateConsistency(t *testing.T) {
before := time.Date(2009, time.November, 10, 23, 0, 0, 0, time.UTC)
after := time.Date(2017, time.January, 18, 28, 40, 0, 0, time.UTC)
pubKey := []byte("1234567890abcedfghij1234567890ab")
nc := NebulaCertificate{
Details: NebulaCertificateDetails{
Name: "testing",
Ips: []*net.IPNet{
{IP: net.ParseIP("10.1.1.1"), Mask: net.IPMask(net.ParseIP("255.255.255.0"))},
{IP: net.ParseIP("10.1.1.2"), Mask: net.IPMask(net.ParseIP("255.255.0.0"))},
{IP: net.ParseIP("10.1.1.3"), Mask: net.IPMask(net.ParseIP("255.0.255.0"))},
},
Subnets: []*net.IPNet{
{IP: net.ParseIP("9.1.1.1"), Mask: net.IPMask(net.ParseIP("255.0.255.0"))},
{IP: net.ParseIP("9.1.1.2"), Mask: net.IPMask(net.ParseIP("255.255.255.0"))},
{IP: net.ParseIP("9.1.1.3"), Mask: net.IPMask(net.ParseIP("255.255.0.0"))},
},
Groups: []string{"test-group1", "test-group2", "test-group3"},
NotBefore: before,
NotAfter: after,
PublicKey: pubKey,
IsCA: false,
Issuer: "1234567890abcedfghij1234567890ab",
},
Signature: []byte("1234567890abcedfghij1234567890ab"),
}
b, err := nc.Marshal()
assert.Nil(t, err)
//t.Log("Cert size:", len(b))
assert.Equal(t, "0aa2010a0774657374696e67121b8182845080feffff0f828284508080fcff0f8382845080fe83f80f1a1b8182844880fe83f80f8282844880feffff0f838284488080fcff0f220b746573742d67726f757031220b746573742d67726f757032220b746573742d67726f75703328f0e0e7d70430a08681c4053a20313233343536373839306162636564666768696a3132333435363738393061624a081234567890abcedf1220313233343536373839306162636564666768696a313233343536373839306162", fmt.Sprintf("%x", b))
b, err = proto.Marshal(nc.getRawDetails())
assert.Nil(t, err)
//t.Log("Raw cert size:", len(b))
assert.Equal(t, "0a0774657374696e67121b8182845080feffff0f828284508080fcff0f8382845080fe83f80f1a1b8182844880fe83f80f8282844880feffff0f838284488080fcff0f220b746573742d67726f757031220b746573742d67726f757032220b746573742d67726f75703328f0e0e7d70430a08681c4053a20313233343536373839306162636564666768696a3132333435363738393061624a081234567890abcedf", fmt.Sprintf("%x", b))
}
func TestNebulaCertificate_Copy(t *testing.T) {
ca, _, caKey, err := newTestCaCert(time.Now(), time.Now().Add(10*time.Minute), []*net.IPNet{}, []*net.IPNet{}, []string{})
assert.Nil(t, err)
c, _, _, err := newTestCert(ca, caKey, time.Now(), time.Now().Add(5*time.Minute), []*net.IPNet{}, []*net.IPNet{}, []string{})
assert.Nil(t, err)
cc := c.Copy()
util.AssertDeepCopyEqual(t, c, cc)
}
func TestUnmarshalNebulaCertificate(t *testing.T) {
// Test that we don't panic with an invalid certificate (#332)
data := []byte("\x98\x00\x00")
_, err := UnmarshalNebulaCertificate(data)
assert.EqualError(t, err, "encoded Details was nil")
}
func newTestCaCert(before, after time.Time, ips, subnets []*net.IPNet, groups []string) (*NebulaCertificate, []byte, []byte, error) {
pub, priv, err := ed25519.GenerateKey(rand.Reader)
if before.IsZero() {
before = time.Now().Add(time.Second * -60).Round(time.Second)
}
if after.IsZero() {
after = time.Now().Add(time.Second * 60).Round(time.Second)
}
nc := &NebulaCertificate{
Details: NebulaCertificateDetails{
Name: "test ca",
NotBefore: time.Unix(before.Unix(), 0),
NotAfter: time.Unix(after.Unix(), 0),
PublicKey: pub,
IsCA: true,
InvertedGroups: make(map[string]struct{}),
},
}
if len(ips) > 0 {
nc.Details.Ips = ips
}
if len(subnets) > 0 {
nc.Details.Subnets = subnets
}
if len(groups) > 0 {
nc.Details.Groups = groups
}
err = nc.Sign(priv)
if err != nil {
return nil, nil, nil, err
}
return nc, pub, priv, nil
}
func newTestCert(ca *NebulaCertificate, key []byte, before, after time.Time, ips, subnets []*net.IPNet, groups []string) (*NebulaCertificate, []byte, []byte, error) {
issuer, err := ca.Sha256Sum()
if err != nil {
return nil, nil, nil, err
}
if before.IsZero() {
before = time.Now().Add(time.Second * -60).Round(time.Second)
}
if after.IsZero() {
after = time.Now().Add(time.Second * 60).Round(time.Second)
}
if len(groups) == 0 {
groups = []string{"test-group1", "test-group2", "test-group3"}
}
if len(ips) == 0 {
ips = []*net.IPNet{
{IP: net.ParseIP("10.1.1.1").To4(), Mask: net.IPMask(net.ParseIP("255.255.255.0").To4())},
{IP: net.ParseIP("10.1.1.2").To4(), Mask: net.IPMask(net.ParseIP("255.255.0.0").To4())},
{IP: net.ParseIP("10.1.1.3").To4(), Mask: net.IPMask(net.ParseIP("255.0.255.0").To4())},
}
}
if len(subnets) == 0 {
subnets = []*net.IPNet{
{IP: net.ParseIP("9.1.1.1").To4(), Mask: net.IPMask(net.ParseIP("255.0.255.0").To4())},
{IP: net.ParseIP("9.1.1.2").To4(), Mask: net.IPMask(net.ParseIP("255.255.255.0").To4())},
{IP: net.ParseIP("9.1.1.3").To4(), Mask: net.IPMask(net.ParseIP("255.255.0.0").To4())},
}
}
pub, rawPriv := x25519Keypair()
nc := &NebulaCertificate{
Details: NebulaCertificateDetails{
Name: "testing",
Ips: ips,
Subnets: subnets,
Groups: groups,
NotBefore: time.Unix(before.Unix(), 0),
NotAfter: time.Unix(after.Unix(), 0),
PublicKey: pub,
IsCA: false,
Issuer: issuer,
InvertedGroups: make(map[string]struct{}),
},
}
err = nc.Sign(key)
if err != nil {
return nil, nil, nil, err
}
return nc, pub, rawPriv, nil
}
func x25519Keypair() ([]byte, []byte) {
privkey := make([]byte, 32)
if _, err := io.ReadFull(rand.Reader, privkey); err != nil {
panic(err)
}
pubkey, err := curve25519.X25519(privkey, curve25519.Basepoint)
if err != nil {
panic(err)
}
return pubkey, privkey
}
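// Example (illustrative only, not part of the original test file): the helpers above
// compose into a full trust chain for a test, producing a host certificate and a CA
// pool that trusts its signer.
func exampleBuildTestChain() (*NebulaCertificate, *NebulaCAPool, error) {
    ca, _, caKey, err := newTestCaCert(time.Time{}, time.Time{}, nil, nil, nil)
    if err != nil {
        return nil, nil, err
    }
    host, _, _, err := newTestCert(ca, caKey, time.Time{}, time.Time{}, nil, nil, nil)
    if err != nil {
        return nil, nil, err
    }
    caPEM, err := ca.MarshalToPEM()
    if err != nil {
        return nil, nil, err
    }
    pool, err := NewCAPoolFromBytes(caPEM)
    if err != nil {
        return nil, nil, err
    }
    return host, pool, nil
}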

View File

@@ -1,495 +0,0 @@
package cert
import (
"bytes"
"crypto/ecdh"
"crypto/ecdsa"
"crypto/ed25519"
"crypto/elliptic"
"crypto/sha256"
"encoding/binary"
"encoding/hex"
"encoding/json"
"encoding/pem"
"fmt"
"net"
"net/netip"
"time"
"golang.org/x/crypto/curve25519"
"google.golang.org/protobuf/proto"
)
const publicKeyLen = 32
type certificateV1 struct {
details detailsV1
signature []byte
}
type detailsV1 struct {
name string
networks []netip.Prefix
unsafeNetworks []netip.Prefix
groups []string
notBefore time.Time
notAfter time.Time
publicKey []byte
isCA bool
issuer string
curve Curve
}
type m = map[string]any
func (c *certificateV1) Version() Version {
return Version1
}
func (c *certificateV1) Curve() Curve {
return c.details.curve
}
func (c *certificateV1) Groups() []string {
return c.details.groups
}
func (c *certificateV1) IsCA() bool {
return c.details.isCA
}
func (c *certificateV1) Issuer() string {
return c.details.issuer
}
func (c *certificateV1) Name() string {
return c.details.name
}
func (c *certificateV1) Networks() []netip.Prefix {
return c.details.networks
}
func (c *certificateV1) NotAfter() time.Time {
return c.details.notAfter
}
func (c *certificateV1) NotBefore() time.Time {
return c.details.notBefore
}
func (c *certificateV1) PublicKey() []byte {
return c.details.publicKey
}
func (c *certificateV1) MarshalPublicKeyPEM() []byte {
return marshalCertPublicKeyToPEM(c)
}
func (c *certificateV1) Signature() []byte {
return c.signature
}
func (c *certificateV1) UnsafeNetworks() []netip.Prefix {
return c.details.unsafeNetworks
}
func (c *certificateV1) Fingerprint() (string, error) {
b, err := c.Marshal()
if err != nil {
return "", err
}
sum := sha256.Sum256(b)
return hex.EncodeToString(sum[:]), nil
}
func (c *certificateV1) CheckSignature(key []byte) bool {
b, err := proto.Marshal(c.getRawDetails())
if err != nil {
return false
}
switch c.details.curve {
case Curve_CURVE25519:
return ed25519.Verify(key, b, c.signature)
case Curve_P256:
pubKey, err := ecdsa.ParseUncompressedPublicKey(elliptic.P256(), key)
if err != nil {
return false
}
hashed := sha256.Sum256(b)
return ecdsa.VerifyASN1(pubKey, hashed[:], c.signature)
default:
return false
}
}
func (c *certificateV1) Expired(t time.Time) bool {
return c.details.notBefore.After(t) || c.details.notAfter.Before(t)
}
func (c *certificateV1) VerifyPrivateKey(curve Curve, key []byte) error {
if curve != c.details.curve {
return fmt.Errorf("curve in cert and private key supplied don't match")
}
if c.details.isCA {
switch curve {
case Curve_CURVE25519:
// the call to PublicKey below will panic slice bounds out of range otherwise
if len(key) != ed25519.PrivateKeySize {
return fmt.Errorf("key was not 64 bytes, is invalid ed25519 private key")
}
if !ed25519.PublicKey(c.details.publicKey).Equal(ed25519.PrivateKey(key).Public()) {
return fmt.Errorf("public key in cert and private key supplied don't match")
}
case Curve_P256:
privkey, err := ecdh.P256().NewPrivateKey(key)
if err != nil {
return fmt.Errorf("cannot parse private key as P256: %w", err)
}
pub := privkey.PublicKey().Bytes()
if !bytes.Equal(pub, c.details.publicKey) {
return fmt.Errorf("public key in cert and private key supplied don't match")
}
default:
return fmt.Errorf("invalid curve: %s", curve)
}
return nil
}
var pub []byte
switch curve {
case Curve_CURVE25519:
var err error
pub, err = curve25519.X25519(key, curve25519.Basepoint)
if err != nil {
return err
}
case Curve_P256:
privkey, err := ecdh.P256().NewPrivateKey(key)
if err != nil {
return err
}
pub = privkey.PublicKey().Bytes()
default:
return fmt.Errorf("invalid curve: %s", curve)
}
if !bytes.Equal(pub, c.details.publicKey) {
return fmt.Errorf("public key in cert and private key supplied don't match")
}
return nil
}
// getRawDetails marshals the raw details into protobuf ready struct
func (c *certificateV1) getRawDetails() *RawNebulaCertificateDetails {
rd := &RawNebulaCertificateDetails{
Name: c.details.name,
Groups: c.details.groups,
NotBefore: c.details.notBefore.Unix(),
NotAfter: c.details.notAfter.Unix(),
PublicKey: make([]byte, len(c.details.publicKey)),
IsCA: c.details.isCA,
Curve: c.details.curve,
}
for _, ipNet := range c.details.networks {
mask := net.CIDRMask(ipNet.Bits(), ipNet.Addr().BitLen())
rd.Ips = append(rd.Ips, addr2int(ipNet.Addr()), ip2int(mask))
}
for _, ipNet := range c.details.unsafeNetworks {
mask := net.CIDRMask(ipNet.Bits(), ipNet.Addr().BitLen())
rd.Subnets = append(rd.Subnets, addr2int(ipNet.Addr()), ip2int(mask))
}
copy(rd.PublicKey, c.details.publicKey[:])
// I know, this is terrible
rd.Issuer, _ = hex.DecodeString(c.details.issuer)
return rd
}
func (c *certificateV1) String() string {
b, err := json.MarshalIndent(c.marshalJSON(), "", "\t")
if err != nil {
return fmt.Sprintf("<error marshalling certificate: %v>", err)
}
return string(b)
}
func (c *certificateV1) MarshalForHandshakes() ([]byte, error) {
pubKey := c.details.publicKey
c.details.publicKey = nil
rawCertNoKey, err := c.Marshal()
if err != nil {
return nil, err
}
c.details.publicKey = pubKey
return rawCertNoKey, nil
}
func (c *certificateV1) Marshal() ([]byte, error) {
rc := RawNebulaCertificate{
Details: c.getRawDetails(),
Signature: c.signature,
}
return proto.Marshal(&rc)
}
func (c *certificateV1) MarshalPEM() ([]byte, error) {
b, err := c.Marshal()
if err != nil {
return nil, err
}
return pem.EncodeToMemory(&pem.Block{Type: CertificateBanner, Bytes: b}), nil
}
func (c *certificateV1) MarshalJSON() ([]byte, error) {
return json.Marshal(c.marshalJSON())
}
func (c *certificateV1) marshalJSON() m {
fp, _ := c.Fingerprint()
return m{
"version": Version1,
"details": m{
"name": c.details.name,
"networks": c.details.networks,
"unsafeNetworks": c.details.unsafeNetworks,
"groups": c.details.groups,
"notBefore": c.details.notBefore,
"notAfter": c.details.notAfter,
"publicKey": fmt.Sprintf("%x", c.details.publicKey),
"isCa": c.details.isCA,
"issuer": c.details.issuer,
"curve": c.details.curve.String(),
},
"fingerprint": fp,
"signature": fmt.Sprintf("%x", c.Signature()),
}
}
func (c *certificateV1) Copy() Certificate {
nc := &certificateV1{
details: detailsV1{
name: c.details.name,
notBefore: c.details.notBefore,
notAfter: c.details.notAfter,
publicKey: make([]byte, len(c.details.publicKey)),
isCA: c.details.isCA,
issuer: c.details.issuer,
curve: c.details.curve,
},
signature: make([]byte, len(c.signature)),
}
if c.details.groups != nil {
nc.details.groups = make([]string, len(c.details.groups))
copy(nc.details.groups, c.details.groups)
}
if c.details.networks != nil {
nc.details.networks = make([]netip.Prefix, len(c.details.networks))
copy(nc.details.networks, c.details.networks)
}
if c.details.unsafeNetworks != nil {
nc.details.unsafeNetworks = make([]netip.Prefix, len(c.details.unsafeNetworks))
copy(nc.details.unsafeNetworks, c.details.unsafeNetworks)
}
copy(nc.signature, c.signature)
copy(nc.details.publicKey, c.details.publicKey)
return nc
}
func (c *certificateV1) fromTBSCertificate(t *TBSCertificate) error {
c.details = detailsV1{
name: t.Name,
networks: t.Networks,
unsafeNetworks: t.UnsafeNetworks,
groups: t.Groups,
notBefore: t.NotBefore,
notAfter: t.NotAfter,
publicKey: t.PublicKey,
isCA: t.IsCA,
curve: t.Curve,
issuer: t.issuer,
}
return c.validate()
}
func (c *certificateV1) validate() error {
// Empty names are allowed
if len(c.details.publicKey) == 0 {
return ErrInvalidPublicKey
}
// Original v1 rules allowed multiple networks to be present but ignored all but the first one.
// Continue to allow this behavior
if !c.details.isCA && len(c.details.networks) == 0 {
return NewErrInvalidCertificateProperties("non-CA certificates must contain exactly one network")
}
for _, network := range c.details.networks {
if !network.IsValid() || !network.Addr().IsValid() {
return NewErrInvalidCertificateProperties("invalid network: %s", network)
}
if network.Addr().Is6() {
return NewErrInvalidCertificateProperties("certificate may not contain IPv6 networks: %v", network)
}
if network.Addr().IsUnspecified() {
return NewErrInvalidCertificateProperties("non-CA certificates must not use the zero address as a network: %s", network)
}
if network.Addr().Zone() != "" {
return NewErrInvalidCertificateProperties("networks may not contain zones: %s", network)
}
}
for _, network := range c.details.unsafeNetworks {
if !network.IsValid() || !network.Addr().IsValid() {
return NewErrInvalidCertificateProperties("invalid unsafe network: %s", network)
}
if network.Addr().Is6() {
return NewErrInvalidCertificateProperties("certificate may not contain IPv6 unsafe networks: %v", network)
}
if network.Addr().Zone() != "" {
return NewErrInvalidCertificateProperties("unsafe networks may not contain zones: %s", network)
}
}
// v1 doesn't bother with sort order or uniqueness of networks or unsafe networks.
// We can't modify the unmarshalled data because verification requires re-marshalling and a re-ordered
// unsafe networks would result in a different signature.
return nil
}
func (c *certificateV1) marshalForSigning() ([]byte, error) {
b, err := proto.Marshal(c.getRawDetails())
if err != nil {
return nil, err
}
return b, nil
}
func (c *certificateV1) setSignature(b []byte) error {
if len(b) == 0 {
return ErrEmptySignature
}
c.signature = b
return nil
}
// unmarshalCertificateV1 will unmarshal a protobuf byte representation of a nebula cert
// if the publicKey is provided here then it is not required to be present in `b`
func unmarshalCertificateV1(b []byte, publicKey []byte) (*certificateV1, error) {
if len(b) == 0 {
return nil, fmt.Errorf("nil byte array")
}
var rc RawNebulaCertificate
err := proto.Unmarshal(b, &rc)
if err != nil {
return nil, err
}
if rc.Details == nil {
return nil, fmt.Errorf("encoded Details was nil")
}
if len(rc.Details.Ips)%2 != 0 {
return nil, fmt.Errorf("encoded IPs should be in pairs, an odd number was found")
}
if len(rc.Details.Subnets)%2 != 0 {
return nil, fmt.Errorf("encoded Subnets should be in pairs, an odd number was found")
}
nc := certificateV1{
details: detailsV1{
name: rc.Details.Name,
groups: make([]string, len(rc.Details.Groups)),
networks: make([]netip.Prefix, len(rc.Details.Ips)/2),
unsafeNetworks: make([]netip.Prefix, len(rc.Details.Subnets)/2),
notBefore: time.Unix(rc.Details.NotBefore, 0),
notAfter: time.Unix(rc.Details.NotAfter, 0),
publicKey: make([]byte, len(rc.Details.PublicKey)),
isCA: rc.Details.IsCA,
curve: rc.Details.Curve,
},
signature: make([]byte, len(rc.Signature)),
}
copy(nc.signature, rc.Signature)
copy(nc.details.groups, rc.Details.Groups)
nc.details.issuer = hex.EncodeToString(rc.Details.Issuer)
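// Use a public key supplied out of band (e.g. during a handshake) when one is given;
// any key embedded in the certificate details is copied over it below.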
if len(publicKey) > 0 {
nc.details.publicKey = publicKey
}
copy(nc.details.publicKey, rc.Details.PublicKey)
var ip netip.Addr
for i, rawIp := range rc.Details.Ips {
if i%2 == 0 {
ip = int2addr(rawIp)
} else {
ones, _ := net.IPMask(int2ip(rawIp)).Size()
nc.details.networks[i/2] = netip.PrefixFrom(ip, ones)
}
}
for i, rawIp := range rc.Details.Subnets {
if i%2 == 0 {
ip = int2addr(rawIp)
} else {
ones, _ := net.IPMask(int2ip(rawIp)).Size()
nc.details.unsafeNetworks[i/2] = netip.PrefixFrom(ip, ones)
}
}
err = nc.validate()
if err != nil {
return nil, err
}
return &nc, nil
}
func ip2int(ip []byte) uint32 {
if len(ip) == 16 {
return binary.BigEndian.Uint32(ip[12:16])
}
return binary.BigEndian.Uint32(ip)
}
func int2ip(nn uint32) net.IP {
ip := make(net.IP, net.IPv4len)
binary.BigEndian.PutUint32(ip, nn)
return ip
}
func addr2int(addr netip.Addr) uint32 {
b := addr.Unmap().As4()
return binary.BigEndian.Uint32(b[:])
}
func int2addr(nn uint32) netip.Addr {
ip := [4]byte{}
binary.BigEndian.PutUint32(ip[:], nn)
return netip.AddrFrom4(ip).Unmap()
}
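// Illustrative sketch (not part of the original file): a v1 certificate stores each network
// as the big-endian (address, mask) uint32 pair carried by the Ips/Subnets protobuf fields.
// Using the helpers above:
//
//	p := netip.MustParsePrefix("10.1.1.0/24")
//	addrInt := addr2int(p.Addr())                 // 0x0a010100
//	maskInt := ip2int(net.CIDRMask(p.Bits(), 32)) // 0xffffff00
//	// the pair [addrInt, maskInt] is what unmarshalCertificateV1 reads back via
//	// int2addr and net.IPMask(...).Size() to rebuild the netip.Prefix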

View File

@@ -1,620 +0,0 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.34.2
// protoc v3.21.5
// source: cert_v1.proto
package cert
import (
protoreflect "google.golang.org/protobuf/reflect/protoreflect"
protoimpl "google.golang.org/protobuf/runtime/protoimpl"
reflect "reflect"
sync "sync"
)
const (
// Verify that this generated code is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
// Verify that runtime/protoimpl is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
)
type Curve int32
const (
Curve_CURVE25519 Curve = 0
Curve_P256 Curve = 1
)
// Enum value maps for Curve.
var (
Curve_name = map[int32]string{
0: "CURVE25519",
1: "P256",
}
Curve_value = map[string]int32{
"CURVE25519": 0,
"P256": 1,
}
)
func (x Curve) Enum() *Curve {
p := new(Curve)
*p = x
return p
}
func (x Curve) String() string {
return protoimpl.X.EnumStringOf(x.Descriptor(), protoreflect.EnumNumber(x))
}
func (Curve) Descriptor() protoreflect.EnumDescriptor {
return file_cert_v1_proto_enumTypes[0].Descriptor()
}
func (Curve) Type() protoreflect.EnumType {
return &file_cert_v1_proto_enumTypes[0]
}
func (x Curve) Number() protoreflect.EnumNumber {
return protoreflect.EnumNumber(x)
}
// Deprecated: Use Curve.Descriptor instead.
func (Curve) EnumDescriptor() ([]byte, []int) {
return file_cert_v1_proto_rawDescGZIP(), []int{0}
}
type RawNebulaCertificate struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Details *RawNebulaCertificateDetails `protobuf:"bytes,1,opt,name=Details,proto3" json:"Details,omitempty"`
Signature []byte `protobuf:"bytes,2,opt,name=Signature,proto3" json:"Signature,omitempty"`
}
func (x *RawNebulaCertificate) Reset() {
*x = RawNebulaCertificate{}
if protoimpl.UnsafeEnabled {
mi := &file_cert_v1_proto_msgTypes[0]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *RawNebulaCertificate) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*RawNebulaCertificate) ProtoMessage() {}
func (x *RawNebulaCertificate) ProtoReflect() protoreflect.Message {
mi := &file_cert_v1_proto_msgTypes[0]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use RawNebulaCertificate.ProtoReflect.Descriptor instead.
func (*RawNebulaCertificate) Descriptor() ([]byte, []int) {
return file_cert_v1_proto_rawDescGZIP(), []int{0}
}
func (x *RawNebulaCertificate) GetDetails() *RawNebulaCertificateDetails {
if x != nil {
return x.Details
}
return nil
}
func (x *RawNebulaCertificate) GetSignature() []byte {
if x != nil {
return x.Signature
}
return nil
}
type RawNebulaCertificateDetails struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Name string `protobuf:"bytes,1,opt,name=Name,proto3" json:"Name,omitempty"`
// Ips and Subnets are in big endian 32 bit pairs, 1st the ip, 2nd the mask
Ips []uint32 `protobuf:"varint,2,rep,packed,name=Ips,proto3" json:"Ips,omitempty"`
Subnets []uint32 `protobuf:"varint,3,rep,packed,name=Subnets,proto3" json:"Subnets,omitempty"`
Groups []string `protobuf:"bytes,4,rep,name=Groups,proto3" json:"Groups,omitempty"`
NotBefore int64 `protobuf:"varint,5,opt,name=NotBefore,proto3" json:"NotBefore,omitempty"`
NotAfter int64 `protobuf:"varint,6,opt,name=NotAfter,proto3" json:"NotAfter,omitempty"`
PublicKey []byte `protobuf:"bytes,7,opt,name=PublicKey,proto3" json:"PublicKey,omitempty"`
IsCA bool `protobuf:"varint,8,opt,name=IsCA,proto3" json:"IsCA,omitempty"`
// sha-256 of the issuer certificate, if this field is blank the cert is self-signed
Issuer []byte `protobuf:"bytes,9,opt,name=Issuer,proto3" json:"Issuer,omitempty"`
Curve Curve `protobuf:"varint,100,opt,name=curve,proto3,enum=cert.Curve" json:"curve,omitempty"`
}
func (x *RawNebulaCertificateDetails) Reset() {
*x = RawNebulaCertificateDetails{}
if protoimpl.UnsafeEnabled {
mi := &file_cert_v1_proto_msgTypes[1]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *RawNebulaCertificateDetails) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*RawNebulaCertificateDetails) ProtoMessage() {}
func (x *RawNebulaCertificateDetails) ProtoReflect() protoreflect.Message {
mi := &file_cert_v1_proto_msgTypes[1]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use RawNebulaCertificateDetails.ProtoReflect.Descriptor instead.
func (*RawNebulaCertificateDetails) Descriptor() ([]byte, []int) {
return file_cert_v1_proto_rawDescGZIP(), []int{1}
}
func (x *RawNebulaCertificateDetails) GetName() string {
if x != nil {
return x.Name
}
return ""
}
func (x *RawNebulaCertificateDetails) GetIps() []uint32 {
if x != nil {
return x.Ips
}
return nil
}
func (x *RawNebulaCertificateDetails) GetSubnets() []uint32 {
if x != nil {
return x.Subnets
}
return nil
}
func (x *RawNebulaCertificateDetails) GetGroups() []string {
if x != nil {
return x.Groups
}
return nil
}
func (x *RawNebulaCertificateDetails) GetNotBefore() int64 {
if x != nil {
return x.NotBefore
}
return 0
}
func (x *RawNebulaCertificateDetails) GetNotAfter() int64 {
if x != nil {
return x.NotAfter
}
return 0
}
func (x *RawNebulaCertificateDetails) GetPublicKey() []byte {
if x != nil {
return x.PublicKey
}
return nil
}
func (x *RawNebulaCertificateDetails) GetIsCA() bool {
if x != nil {
return x.IsCA
}
return false
}
func (x *RawNebulaCertificateDetails) GetIssuer() []byte {
if x != nil {
return x.Issuer
}
return nil
}
func (x *RawNebulaCertificateDetails) GetCurve() Curve {
if x != nil {
return x.Curve
}
return Curve_CURVE25519
}
type RawNebulaEncryptedData struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
EncryptionMetadata *RawNebulaEncryptionMetadata `protobuf:"bytes,1,opt,name=EncryptionMetadata,proto3" json:"EncryptionMetadata,omitempty"`
Ciphertext []byte `protobuf:"bytes,2,opt,name=Ciphertext,proto3" json:"Ciphertext,omitempty"`
}
func (x *RawNebulaEncryptedData) Reset() {
*x = RawNebulaEncryptedData{}
if protoimpl.UnsafeEnabled {
mi := &file_cert_v1_proto_msgTypes[2]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *RawNebulaEncryptedData) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*RawNebulaEncryptedData) ProtoMessage() {}
func (x *RawNebulaEncryptedData) ProtoReflect() protoreflect.Message {
mi := &file_cert_v1_proto_msgTypes[2]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use RawNebulaEncryptedData.ProtoReflect.Descriptor instead.
func (*RawNebulaEncryptedData) Descriptor() ([]byte, []int) {
return file_cert_v1_proto_rawDescGZIP(), []int{2}
}
func (x *RawNebulaEncryptedData) GetEncryptionMetadata() *RawNebulaEncryptionMetadata {
if x != nil {
return x.EncryptionMetadata
}
return nil
}
func (x *RawNebulaEncryptedData) GetCiphertext() []byte {
if x != nil {
return x.Ciphertext
}
return nil
}
type RawNebulaEncryptionMetadata struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
EncryptionAlgorithm string `protobuf:"bytes,1,opt,name=EncryptionAlgorithm,proto3" json:"EncryptionAlgorithm,omitempty"`
Argon2Parameters *RawNebulaArgon2Parameters `protobuf:"bytes,2,opt,name=Argon2Parameters,proto3" json:"Argon2Parameters,omitempty"`
}
func (x *RawNebulaEncryptionMetadata) Reset() {
*x = RawNebulaEncryptionMetadata{}
if protoimpl.UnsafeEnabled {
mi := &file_cert_v1_proto_msgTypes[3]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *RawNebulaEncryptionMetadata) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*RawNebulaEncryptionMetadata) ProtoMessage() {}
func (x *RawNebulaEncryptionMetadata) ProtoReflect() protoreflect.Message {
mi := &file_cert_v1_proto_msgTypes[3]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use RawNebulaEncryptionMetadata.ProtoReflect.Descriptor instead.
func (*RawNebulaEncryptionMetadata) Descriptor() ([]byte, []int) {
return file_cert_v1_proto_rawDescGZIP(), []int{3}
}
func (x *RawNebulaEncryptionMetadata) GetEncryptionAlgorithm() string {
if x != nil {
return x.EncryptionAlgorithm
}
return ""
}
func (x *RawNebulaEncryptionMetadata) GetArgon2Parameters() *RawNebulaArgon2Parameters {
if x != nil {
return x.Argon2Parameters
}
return nil
}
type RawNebulaArgon2Parameters struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Version int32 `protobuf:"varint,1,opt,name=version,proto3" json:"version,omitempty"` // rune in Go
Memory uint32 `protobuf:"varint,2,opt,name=memory,proto3" json:"memory,omitempty"`
Parallelism uint32 `protobuf:"varint,4,opt,name=parallelism,proto3" json:"parallelism,omitempty"` // uint8 in Go
Iterations uint32 `protobuf:"varint,3,opt,name=iterations,proto3" json:"iterations,omitempty"`
Salt []byte `protobuf:"bytes,5,opt,name=salt,proto3" json:"salt,omitempty"`
}
func (x *RawNebulaArgon2Parameters) Reset() {
*x = RawNebulaArgon2Parameters{}
if protoimpl.UnsafeEnabled {
mi := &file_cert_v1_proto_msgTypes[4]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *RawNebulaArgon2Parameters) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*RawNebulaArgon2Parameters) ProtoMessage() {}
func (x *RawNebulaArgon2Parameters) ProtoReflect() protoreflect.Message {
mi := &file_cert_v1_proto_msgTypes[4]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use RawNebulaArgon2Parameters.ProtoReflect.Descriptor instead.
func (*RawNebulaArgon2Parameters) Descriptor() ([]byte, []int) {
return file_cert_v1_proto_rawDescGZIP(), []int{4}
}
func (x *RawNebulaArgon2Parameters) GetVersion() int32 {
if x != nil {
return x.Version
}
return 0
}
func (x *RawNebulaArgon2Parameters) GetMemory() uint32 {
if x != nil {
return x.Memory
}
return 0
}
func (x *RawNebulaArgon2Parameters) GetParallelism() uint32 {
if x != nil {
return x.Parallelism
}
return 0
}
func (x *RawNebulaArgon2Parameters) GetIterations() uint32 {
if x != nil {
return x.Iterations
}
return 0
}
func (x *RawNebulaArgon2Parameters) GetSalt() []byte {
if x != nil {
return x.Salt
}
return nil
}
var File_cert_v1_proto protoreflect.FileDescriptor
var file_cert_v1_proto_rawDesc = []byte{
0x0a, 0x0d, 0x63, 0x65, 0x72, 0x74, 0x5f, 0x76, 0x31, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12,
0x04, 0x63, 0x65, 0x72, 0x74, 0x22, 0x71, 0x0a, 0x14, 0x52, 0x61, 0x77, 0x4e, 0x65, 0x62, 0x75,
0x6c, 0x61, 0x43, 0x65, 0x72, 0x74, 0x69, 0x66, 0x69, 0x63, 0x61, 0x74, 0x65, 0x12, 0x3b, 0x0a,
0x07, 0x44, 0x65, 0x74, 0x61, 0x69, 0x6c, 0x73, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x21,
0x2e, 0x63, 0x65, 0x72, 0x74, 0x2e, 0x52, 0x61, 0x77, 0x4e, 0x65, 0x62, 0x75, 0x6c, 0x61, 0x43,
0x65, 0x72, 0x74, 0x69, 0x66, 0x69, 0x63, 0x61, 0x74, 0x65, 0x44, 0x65, 0x74, 0x61, 0x69, 0x6c,
0x73, 0x52, 0x07, 0x44, 0x65, 0x74, 0x61, 0x69, 0x6c, 0x73, 0x12, 0x1c, 0x0a, 0x09, 0x53, 0x69,
0x67, 0x6e, 0x61, 0x74, 0x75, 0x72, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x09, 0x53,
0x69, 0x67, 0x6e, 0x61, 0x74, 0x75, 0x72, 0x65, 0x22, 0x9c, 0x02, 0x0a, 0x1b, 0x52, 0x61, 0x77,
0x4e, 0x65, 0x62, 0x75, 0x6c, 0x61, 0x43, 0x65, 0x72, 0x74, 0x69, 0x66, 0x69, 0x63, 0x61, 0x74,
0x65, 0x44, 0x65, 0x74, 0x61, 0x69, 0x6c, 0x73, 0x12, 0x12, 0x0a, 0x04, 0x4e, 0x61, 0x6d, 0x65,
0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x4e, 0x61, 0x6d, 0x65, 0x12, 0x10, 0x0a, 0x03,
0x49, 0x70, 0x73, 0x18, 0x02, 0x20, 0x03, 0x28, 0x0d, 0x52, 0x03, 0x49, 0x70, 0x73, 0x12, 0x18,
0x0a, 0x07, 0x53, 0x75, 0x62, 0x6e, 0x65, 0x74, 0x73, 0x18, 0x03, 0x20, 0x03, 0x28, 0x0d, 0x52,
0x07, 0x53, 0x75, 0x62, 0x6e, 0x65, 0x74, 0x73, 0x12, 0x16, 0x0a, 0x06, 0x47, 0x72, 0x6f, 0x75,
0x70, 0x73, 0x18, 0x04, 0x20, 0x03, 0x28, 0x09, 0x52, 0x06, 0x47, 0x72, 0x6f, 0x75, 0x70, 0x73,
0x12, 0x1c, 0x0a, 0x09, 0x4e, 0x6f, 0x74, 0x42, 0x65, 0x66, 0x6f, 0x72, 0x65, 0x18, 0x05, 0x20,
0x01, 0x28, 0x03, 0x52, 0x09, 0x4e, 0x6f, 0x74, 0x42, 0x65, 0x66, 0x6f, 0x72, 0x65, 0x12, 0x1a,
0x0a, 0x08, 0x4e, 0x6f, 0x74, 0x41, 0x66, 0x74, 0x65, 0x72, 0x18, 0x06, 0x20, 0x01, 0x28, 0x03,
0x52, 0x08, 0x4e, 0x6f, 0x74, 0x41, 0x66, 0x74, 0x65, 0x72, 0x12, 0x1c, 0x0a, 0x09, 0x50, 0x75,
0x62, 0x6c, 0x69, 0x63, 0x4b, 0x65, 0x79, 0x18, 0x07, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x09, 0x50,
0x75, 0x62, 0x6c, 0x69, 0x63, 0x4b, 0x65, 0x79, 0x12, 0x12, 0x0a, 0x04, 0x49, 0x73, 0x43, 0x41,
0x18, 0x08, 0x20, 0x01, 0x28, 0x08, 0x52, 0x04, 0x49, 0x73, 0x43, 0x41, 0x12, 0x16, 0x0a, 0x06,
0x49, 0x73, 0x73, 0x75, 0x65, 0x72, 0x18, 0x09, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x06, 0x49, 0x73,
0x73, 0x75, 0x65, 0x72, 0x12, 0x21, 0x0a, 0x05, 0x63, 0x75, 0x72, 0x76, 0x65, 0x18, 0x64, 0x20,
0x01, 0x28, 0x0e, 0x32, 0x0b, 0x2e, 0x63, 0x65, 0x72, 0x74, 0x2e, 0x43, 0x75, 0x72, 0x76, 0x65,
0x52, 0x05, 0x63, 0x75, 0x72, 0x76, 0x65, 0x22, 0x8b, 0x01, 0x0a, 0x16, 0x52, 0x61, 0x77, 0x4e,
0x65, 0x62, 0x75, 0x6c, 0x61, 0x45, 0x6e, 0x63, 0x72, 0x79, 0x70, 0x74, 0x65, 0x64, 0x44, 0x61,
0x74, 0x61, 0x12, 0x51, 0x0a, 0x12, 0x45, 0x6e, 0x63, 0x72, 0x79, 0x70, 0x74, 0x69, 0x6f, 0x6e,
0x4d, 0x65, 0x74, 0x61, 0x64, 0x61, 0x74, 0x61, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x21,
0x2e, 0x63, 0x65, 0x72, 0x74, 0x2e, 0x52, 0x61, 0x77, 0x4e, 0x65, 0x62, 0x75, 0x6c, 0x61, 0x45,
0x6e, 0x63, 0x72, 0x79, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x4d, 0x65, 0x74, 0x61, 0x64, 0x61, 0x74,
0x61, 0x52, 0x12, 0x45, 0x6e, 0x63, 0x72, 0x79, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x4d, 0x65, 0x74,
0x61, 0x64, 0x61, 0x74, 0x61, 0x12, 0x1e, 0x0a, 0x0a, 0x43, 0x69, 0x70, 0x68, 0x65, 0x72, 0x74,
0x65, 0x78, 0x74, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x0a, 0x43, 0x69, 0x70, 0x68, 0x65,
0x72, 0x74, 0x65, 0x78, 0x74, 0x22, 0x9c, 0x01, 0x0a, 0x1b, 0x52, 0x61, 0x77, 0x4e, 0x65, 0x62,
0x75, 0x6c, 0x61, 0x45, 0x6e, 0x63, 0x72, 0x79, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x4d, 0x65, 0x74,
0x61, 0x64, 0x61, 0x74, 0x61, 0x12, 0x30, 0x0a, 0x13, 0x45, 0x6e, 0x63, 0x72, 0x79, 0x70, 0x74,
0x69, 0x6f, 0x6e, 0x41, 0x6c, 0x67, 0x6f, 0x72, 0x69, 0x74, 0x68, 0x6d, 0x18, 0x01, 0x20, 0x01,
0x28, 0x09, 0x52, 0x13, 0x45, 0x6e, 0x63, 0x72, 0x79, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x41, 0x6c,
0x67, 0x6f, 0x72, 0x69, 0x74, 0x68, 0x6d, 0x12, 0x4b, 0x0a, 0x10, 0x41, 0x72, 0x67, 0x6f, 0x6e,
0x32, 0x50, 0x61, 0x72, 0x61, 0x6d, 0x65, 0x74, 0x65, 0x72, 0x73, 0x18, 0x02, 0x20, 0x01, 0x28,
0x0b, 0x32, 0x1f, 0x2e, 0x63, 0x65, 0x72, 0x74, 0x2e, 0x52, 0x61, 0x77, 0x4e, 0x65, 0x62, 0x75,
0x6c, 0x61, 0x41, 0x72, 0x67, 0x6f, 0x6e, 0x32, 0x50, 0x61, 0x72, 0x61, 0x6d, 0x65, 0x74, 0x65,
0x72, 0x73, 0x52, 0x10, 0x41, 0x72, 0x67, 0x6f, 0x6e, 0x32, 0x50, 0x61, 0x72, 0x61, 0x6d, 0x65,
0x74, 0x65, 0x72, 0x73, 0x22, 0xa3, 0x01, 0x0a, 0x19, 0x52, 0x61, 0x77, 0x4e, 0x65, 0x62, 0x75,
0x6c, 0x61, 0x41, 0x72, 0x67, 0x6f, 0x6e, 0x32, 0x50, 0x61, 0x72, 0x61, 0x6d, 0x65, 0x74, 0x65,
0x72, 0x73, 0x12, 0x18, 0x0a, 0x07, 0x76, 0x65, 0x72, 0x73, 0x69, 0x6f, 0x6e, 0x18, 0x01, 0x20,
0x01, 0x28, 0x05, 0x52, 0x07, 0x76, 0x65, 0x72, 0x73, 0x69, 0x6f, 0x6e, 0x12, 0x16, 0x0a, 0x06,
0x6d, 0x65, 0x6d, 0x6f, 0x72, 0x79, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0d, 0x52, 0x06, 0x6d, 0x65,
0x6d, 0x6f, 0x72, 0x79, 0x12, 0x20, 0x0a, 0x0b, 0x70, 0x61, 0x72, 0x61, 0x6c, 0x6c, 0x65, 0x6c,
0x69, 0x73, 0x6d, 0x18, 0x04, 0x20, 0x01, 0x28, 0x0d, 0x52, 0x0b, 0x70, 0x61, 0x72, 0x61, 0x6c,
0x6c, 0x65, 0x6c, 0x69, 0x73, 0x6d, 0x12, 0x1e, 0x0a, 0x0a, 0x69, 0x74, 0x65, 0x72, 0x61, 0x74,
0x69, 0x6f, 0x6e, 0x73, 0x18, 0x03, 0x20, 0x01, 0x28, 0x0d, 0x52, 0x0a, 0x69, 0x74, 0x65, 0x72,
0x61, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x12, 0x12, 0x0a, 0x04, 0x73, 0x61, 0x6c, 0x74, 0x18, 0x05,
0x20, 0x01, 0x28, 0x0c, 0x52, 0x04, 0x73, 0x61, 0x6c, 0x74, 0x2a, 0x21, 0x0a, 0x05, 0x43, 0x75,
0x72, 0x76, 0x65, 0x12, 0x0e, 0x0a, 0x0a, 0x43, 0x55, 0x52, 0x56, 0x45, 0x32, 0x35, 0x35, 0x31,
0x39, 0x10, 0x00, 0x12, 0x08, 0x0a, 0x04, 0x50, 0x32, 0x35, 0x36, 0x10, 0x01, 0x42, 0x20, 0x5a,
0x1e, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x73, 0x6c, 0x61, 0x63,
0x6b, 0x68, 0x71, 0x2f, 0x6e, 0x65, 0x62, 0x75, 0x6c, 0x61, 0x2f, 0x63, 0x65, 0x72, 0x74, 0x62,
0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
}
var (
file_cert_v1_proto_rawDescOnce sync.Once
file_cert_v1_proto_rawDescData = file_cert_v1_proto_rawDesc
)
func file_cert_v1_proto_rawDescGZIP() []byte {
file_cert_v1_proto_rawDescOnce.Do(func() {
file_cert_v1_proto_rawDescData = protoimpl.X.CompressGZIP(file_cert_v1_proto_rawDescData)
})
return file_cert_v1_proto_rawDescData
}
var file_cert_v1_proto_enumTypes = make([]protoimpl.EnumInfo, 1)
var file_cert_v1_proto_msgTypes = make([]protoimpl.MessageInfo, 5)
var file_cert_v1_proto_goTypes = []any{
(Curve)(0), // 0: cert.Curve
(*RawNebulaCertificate)(nil), // 1: cert.RawNebulaCertificate
(*RawNebulaCertificateDetails)(nil), // 2: cert.RawNebulaCertificateDetails
(*RawNebulaEncryptedData)(nil), // 3: cert.RawNebulaEncryptedData
(*RawNebulaEncryptionMetadata)(nil), // 4: cert.RawNebulaEncryptionMetadata
(*RawNebulaArgon2Parameters)(nil), // 5: cert.RawNebulaArgon2Parameters
}
var file_cert_v1_proto_depIdxs = []int32{
2, // 0: cert.RawNebulaCertificate.Details:type_name -> cert.RawNebulaCertificateDetails
0, // 1: cert.RawNebulaCertificateDetails.curve:type_name -> cert.Curve
4, // 2: cert.RawNebulaEncryptedData.EncryptionMetadata:type_name -> cert.RawNebulaEncryptionMetadata
5, // 3: cert.RawNebulaEncryptionMetadata.Argon2Parameters:type_name -> cert.RawNebulaArgon2Parameters
4, // [4:4] is the sub-list for method output_type
4, // [4:4] is the sub-list for method input_type
4, // [4:4] is the sub-list for extension type_name
4, // [4:4] is the sub-list for extension extendee
0, // [0:4] is the sub-list for field type_name
}
func init() { file_cert_v1_proto_init() }
func file_cert_v1_proto_init() {
if File_cert_v1_proto != nil {
return
}
if !protoimpl.UnsafeEnabled {
file_cert_v1_proto_msgTypes[0].Exporter = func(v any, i int) any {
switch v := v.(*RawNebulaCertificate); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_cert_v1_proto_msgTypes[1].Exporter = func(v any, i int) any {
switch v := v.(*RawNebulaCertificateDetails); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_cert_v1_proto_msgTypes[2].Exporter = func(v any, i int) any {
switch v := v.(*RawNebulaEncryptedData); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_cert_v1_proto_msgTypes[3].Exporter = func(v any, i int) any {
switch v := v.(*RawNebulaEncryptionMetadata); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_cert_v1_proto_msgTypes[4].Exporter = func(v any, i int) any {
switch v := v.(*RawNebulaArgon2Parameters); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
}
type x struct{}
out := protoimpl.TypeBuilder{
File: protoimpl.DescBuilder{
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: file_cert_v1_proto_rawDesc,
NumEnums: 1,
NumMessages: 5,
NumExtensions: 0,
NumServices: 0,
},
GoTypes: file_cert_v1_proto_goTypes,
DependencyIndexes: file_cert_v1_proto_depIdxs,
EnumInfos: file_cert_v1_proto_enumTypes,
MessageInfos: file_cert_v1_proto_msgTypes,
}.Build()
File_cert_v1_proto = out.File
file_cert_v1_proto_rawDesc = nil
file_cert_v1_proto_goTypes = nil
file_cert_v1_proto_depIdxs = nil
}

View File

@@ -1,272 +0,0 @@
package cert
import (
"crypto/ed25519"
"fmt"
"net/netip"
"testing"
"time"
"github.com/slackhq/nebula/test"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"google.golang.org/protobuf/proto"
)
func TestCertificateV1_Marshal(t *testing.T) {
t.Parallel()
before := time.Now().Add(time.Second * -60).Round(time.Second)
after := time.Now().Add(time.Second * 60).Round(time.Second)
pubKey := []byte("1234567890abcedfghij1234567890ab")
nc := certificateV1{
details: detailsV1{
name: "testing",
networks: []netip.Prefix{
mustParsePrefixUnmapped("10.1.1.1/24"),
mustParsePrefixUnmapped("10.1.1.2/16"),
},
unsafeNetworks: []netip.Prefix{
mustParsePrefixUnmapped("9.1.1.2/24"),
mustParsePrefixUnmapped("9.1.1.3/16"),
},
groups: []string{"test-group1", "test-group2", "test-group3"},
notBefore: before,
notAfter: after,
publicKey: pubKey,
isCA: false,
issuer: "1234567890abcedfghij1234567890ab",
},
signature: []byte("1234567890abcedfghij1234567890ab"),
}
b, err := nc.Marshal()
require.NoError(t, err)
//t.Log("Cert size:", len(b))
nc2, err := unmarshalCertificateV1(b, nil)
require.NoError(t, err)
assert.Equal(t, Version1, nc.Version())
assert.Equal(t, Curve_CURVE25519, nc.Curve())
assert.Equal(t, nc.Signature(), nc2.Signature())
assert.Equal(t, nc.Name(), nc2.Name())
assert.Equal(t, nc.NotBefore(), nc2.NotBefore())
assert.Equal(t, nc.NotAfter(), nc2.NotAfter())
assert.Equal(t, nc.PublicKey(), nc2.PublicKey())
assert.Equal(t, nc.IsCA(), nc2.IsCA())
assert.Equal(t, nc.Networks(), nc2.Networks())
assert.Equal(t, nc.UnsafeNetworks(), nc2.UnsafeNetworks())
assert.Equal(t, nc.Groups(), nc2.Groups())
}
func TestCertificateV1_PublicKeyPem(t *testing.T) {
t.Parallel()
before := time.Now().Add(time.Second * -60).Round(time.Second)
after := time.Now().Add(time.Second * 60).Round(time.Second)
pubKey := ed25519.PublicKey("1234567890abcedfghij1234567890ab")
nc := certificateV1{
details: detailsV1{
name: "testing",
networks: []netip.Prefix{},
unsafeNetworks: []netip.Prefix{},
groups: []string{"test-group1", "test-group2", "test-group3"},
notBefore: before,
notAfter: after,
publicKey: pubKey,
isCA: false,
issuer: "1234567890abcedfghij1234567890ab",
},
signature: []byte("1234567890abcedfghij1234567890ab"),
}
assert.Equal(t, Version1, nc.Version())
assert.Equal(t, Curve_CURVE25519, nc.Curve())
pubPem := "-----BEGIN NEBULA X25519 PUBLIC KEY-----\nMTIzNDU2Nzg5MGFiY2VkZmdoaWoxMjM0NTY3ODkwYWI=\n-----END NEBULA X25519 PUBLIC KEY-----\n"
assert.Equal(t, string(nc.MarshalPublicKeyPEM()), pubPem)
assert.False(t, nc.IsCA())
nc.details.isCA = true
assert.Equal(t, Curve_CURVE25519, nc.Curve())
pubPem = "-----BEGIN NEBULA ED25519 PUBLIC KEY-----\nMTIzNDU2Nzg5MGFiY2VkZmdoaWoxMjM0NTY3ODkwYWI=\n-----END NEBULA ED25519 PUBLIC KEY-----\n"
assert.Equal(t, string(nc.MarshalPublicKeyPEM()), pubPem)
assert.True(t, nc.IsCA())
pubP256KeyPem := []byte(`-----BEGIN NEBULA P256 PUBLIC KEY-----
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAA=
-----END NEBULA P256 PUBLIC KEY-----
`)
pubP256Key, _, _, err := UnmarshalPublicKeyFromPEM(pubP256KeyPem)
require.NoError(t, err)
nc.details.curve = Curve_P256
nc.details.publicKey = pubP256Key
assert.Equal(t, Curve_P256, nc.Curve())
assert.Equal(t, string(nc.MarshalPublicKeyPEM()), string(pubP256KeyPem))
assert.True(t, nc.IsCA())
nc.details.isCA = false
assert.Equal(t, Curve_P256, nc.Curve())
assert.Equal(t, string(nc.MarshalPublicKeyPEM()), string(pubP256KeyPem))
assert.False(t, nc.IsCA())
}
func TestCertificateV1_Expired(t *testing.T) {
nc := certificateV1{
details: detailsV1{
notBefore: time.Now().Add(time.Second * -60).Round(time.Second),
notAfter: time.Now().Add(time.Second * 60).Round(time.Second),
},
}
assert.True(t, nc.Expired(time.Now().Add(time.Hour)))
assert.True(t, nc.Expired(time.Now().Add(-time.Hour)))
assert.False(t, nc.Expired(time.Now()))
}
func TestCertificateV1_MarshalJSON(t *testing.T) {
time.Local = time.UTC
pubKey := []byte("1234567890abcedfghij1234567890ab")
nc := certificateV1{
details: detailsV1{
name: "testing",
networks: []netip.Prefix{
mustParsePrefixUnmapped("10.1.1.1/24"),
mustParsePrefixUnmapped("10.1.1.2/16"),
},
unsafeNetworks: []netip.Prefix{
mustParsePrefixUnmapped("9.1.1.2/24"),
mustParsePrefixUnmapped("9.1.1.3/16"),
},
groups: []string{"test-group1", "test-group2", "test-group3"},
notBefore: time.Date(1, 0, 0, 1, 0, 0, 0, time.UTC),
notAfter: time.Date(1, 0, 0, 2, 0, 0, 0, time.UTC),
publicKey: pubKey,
isCA: false,
issuer: "1234567890abcedfghij1234567890ab",
},
signature: []byte("1234567890abcedfghij1234567890ab"),
}
b, err := nc.MarshalJSON()
require.NoError(t, err)
assert.JSONEq(
t,
"{\"details\":{\"curve\":\"CURVE25519\",\"groups\":[\"test-group1\",\"test-group2\",\"test-group3\"],\"isCa\":false,\"issuer\":\"1234567890abcedfghij1234567890ab\",\"name\":\"testing\",\"networks\":[\"10.1.1.1/24\",\"10.1.1.2/16\"],\"notAfter\":\"0000-11-30T02:00:00Z\",\"notBefore\":\"0000-11-30T01:00:00Z\",\"publicKey\":\"313233343536373839306162636564666768696a313233343536373839306162\",\"unsafeNetworks\":[\"9.1.1.2/24\",\"9.1.1.3/16\"]},\"fingerprint\":\"3944c53d4267a229295b56cb2d27d459164c010ac97d655063ba421e0670f4ba\",\"signature\":\"313233343536373839306162636564666768696a313233343536373839306162\",\"version\":1}",
string(b),
)
}
func TestCertificateV1_VerifyPrivateKey(t *testing.T) {
ca, _, caKey, _ := NewTestCaCert(Version1, Curve_CURVE25519, time.Time{}, time.Time{}, nil, nil, nil)
err := ca.VerifyPrivateKey(Curve_CURVE25519, caKey)
require.NoError(t, err)
_, _, caKey2, _ := NewTestCaCert(Version1, Curve_CURVE25519, time.Time{}, time.Time{}, nil, nil, nil)
require.NoError(t, err)
err = ca.VerifyPrivateKey(Curve_CURVE25519, caKey2)
require.Error(t, err)
c, _, priv, _ := NewTestCert(Version1, Curve_CURVE25519, ca, caKey, "test", time.Time{}, time.Time{}, nil, nil, nil)
rawPriv, b, curve, err := UnmarshalPrivateKeyFromPEM(priv)
require.NoError(t, err)
assert.Empty(t, b)
assert.Equal(t, Curve_CURVE25519, curve)
err = c.VerifyPrivateKey(Curve_CURVE25519, rawPriv)
require.NoError(t, err)
_, priv2 := X25519Keypair()
err = c.VerifyPrivateKey(Curve_CURVE25519, priv2)
require.Error(t, err)
}
func TestCertificateV1_VerifyPrivateKeyP256(t *testing.T) {
ca, _, caKey, _ := NewTestCaCert(Version1, Curve_P256, time.Time{}, time.Time{}, nil, nil, nil)
err := ca.VerifyPrivateKey(Curve_P256, caKey)
require.NoError(t, err)
_, _, caKey2, _ := NewTestCaCert(Version1, Curve_P256, time.Time{}, time.Time{}, nil, nil, nil)
require.NoError(t, err)
err = ca.VerifyPrivateKey(Curve_P256, caKey2)
require.Error(t, err)
c, _, priv, _ := NewTestCert(Version1, Curve_P256, ca, caKey, "test", time.Time{}, time.Time{}, nil, nil, nil)
rawPriv, b, curve, err := UnmarshalPrivateKeyFromPEM(priv)
require.NoError(t, err)
assert.Empty(t, b)
assert.Equal(t, Curve_P256, curve)
err = c.VerifyPrivateKey(Curve_P256, rawPriv)
require.NoError(t, err)
_, priv2 := P256Keypair()
err = c.VerifyPrivateKey(Curve_P256, priv2)
require.Error(t, err)
}
// Ensure that upgrading the protobuf library does not change how certificates
// are marshalled, since this would break signature verification
func TestMarshalingCertificateV1Consistency(t *testing.T) {
before := time.Date(1970, time.January, 1, 1, 1, 1, 1, time.UTC)
after := time.Date(9999, time.January, 1, 1, 1, 1, 1, time.UTC)
pubKey := []byte("1234567890abcedfghij1234567890ab")
nc := certificateV1{
details: detailsV1{
name: "testing",
networks: []netip.Prefix{
mustParsePrefixUnmapped("10.1.1.2/16"),
mustParsePrefixUnmapped("10.1.1.1/24"),
},
unsafeNetworks: []netip.Prefix{
mustParsePrefixUnmapped("9.1.1.3/16"),
mustParsePrefixUnmapped("9.1.1.2/24"),
},
groups: []string{"test-group1", "test-group2", "test-group3"},
notBefore: before,
notAfter: after,
publicKey: pubKey,
isCA: false,
issuer: "1234567890abcedfghij1234567890ab",
},
signature: []byte("1234567890abcedfghij1234567890ab"),
}
b, err := nc.Marshal()
require.NoError(t, err)
assert.Equal(t, "0a8e010a0774657374696e671212828284508080fcff0f8182845080feffff0f1a12838284488080fcff0f8282844880feffff0f220b746573742d67726f757031220b746573742d67726f757032220b746573742d67726f75703328cd1c30cdb8ccf0af073a20313233343536373839306162636564666768696a3132333435363738393061624a081234567890abcedf1220313233343536373839306162636564666768696a313233343536373839306162", fmt.Sprintf("%x", b))
b, err = proto.Marshal(nc.getRawDetails())
require.NoError(t, err)
assert.Equal(t, "0a0774657374696e671212828284508080fcff0f8182845080feffff0f1a12838284488080fcff0f8282844880feffff0f220b746573742d67726f757031220b746573742d67726f757032220b746573742d67726f75703328cd1c30cdb8ccf0af073a20313233343536373839306162636564666768696a3132333435363738393061624a081234567890abcedf", fmt.Sprintf("%x", b))
}
func TestCertificateV1_Copy(t *testing.T) {
ca, _, caKey, _ := NewTestCaCert(Version1, Curve_CURVE25519, time.Now(), time.Now().Add(10*time.Minute), nil, nil, nil)
c, _, _, _ := NewTestCert(Version1, Curve_CURVE25519, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), nil, nil, nil)
cc := c.Copy()
test.AssertDeepCopyEqual(t, c, cc)
}
func TestUnmarshalCertificateV1(t *testing.T) {
// Test that we don't panic with an invalid certificate (#332)
data := []byte("\x98\x00\x00")
_, err := unmarshalCertificateV1(data, nil)
require.EqualError(t, err, "encoded Details was nil")
}
func appendByteSlices(b ...[]byte) []byte {
retSlice := []byte{}
for _, v := range b {
retSlice = append(retSlice, v...)
}
return retSlice
}
func mustParsePrefixUnmapped(s string) netip.Prefix {
prefix := netip.MustParsePrefix(s)
return netip.PrefixFrom(prefix.Addr().Unmap(), prefix.Bits())
}

View File

@@ -1,37 +0,0 @@
Nebula DEFINITIONS AUTOMATIC TAGS ::= BEGIN
Name ::= UTF8String (SIZE (1..253))
Time ::= INTEGER (0..18446744073709551615) -- Seconds since unix epoch, uint64 maximum
Network ::= OCTET STRING (SIZE (5,17)) -- IP addresses are 4 or 16 bytes + 1 byte for the prefix length
Curve ::= ENUMERATED {
curve25519 (0),
p256 (1)
}
-- The maximum size of a certificate must not exceed 65536 bytes
Certificate ::= SEQUENCE {
details OCTET STRING,
curve Curve DEFAULT curve25519,
publicKey OCTET STRING,
-- signature(details + curve + publicKey) using the appropriate method for curve
signature OCTET STRING
}
Details ::= SEQUENCE {
name Name,
-- At least 1 ipv4 or ipv6 address must be present if isCA is false
networks SEQUENCE OF Network OPTIONAL,
unsafeNetworks SEQUENCE OF Network OPTIONAL,
groups SEQUENCE OF Name OPTIONAL,
isCA BOOLEAN DEFAULT false,
notBefore Time,
notAfter Time,
-- issuer is only required if isCA is false, if isCA is true then it must not be present
issuer OCTET STRING OPTIONAL,
...
-- New fields can be added below here
}
END
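A minimal illustrative sketch (not part of the schema, assuming Go's net/netip package) showing that the Network encoding above matches netip.Prefix's binary form: the address bytes followed by a single prefix-length byte, for 5 or 17 bytes total:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	p := netip.MustParsePrefix("10.1.1.0/24")
	b, _ := p.MarshalBinary()                // address bytes plus one prefix-length byte
	fmt.Printf("%d bytes: % x\n", len(b), b) // prints: 5 bytes: 0a 01 01 00 18
}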

View File

@@ -1,736 +0,0 @@
package cert
import (
"bytes"
"crypto/ecdh"
"crypto/ecdsa"
"crypto/ed25519"
"crypto/elliptic"
"crypto/sha256"
"encoding/hex"
"encoding/json"
"encoding/pem"
"fmt"
"net/netip"
"slices"
"time"
"golang.org/x/crypto/cryptobyte"
"golang.org/x/crypto/cryptobyte/asn1"
"golang.org/x/crypto/curve25519"
)
const (
classConstructed = 0x20
classContextSpecific = 0x80
TagCertDetails = 0 | classConstructed | classContextSpecific
TagCertCurve = 1 | classContextSpecific
TagCertPublicKey = 2 | classContextSpecific
TagCertSignature = 3 | classContextSpecific
TagDetailsName = 0 | classContextSpecific
TagDetailsNetworks = 1 | classConstructed | classContextSpecific
TagDetailsUnsafeNetworks = 2 | classConstructed | classContextSpecific
TagDetailsGroups = 3 | classConstructed | classContextSpecific
TagDetailsIsCA = 4 | classContextSpecific
TagDetailsNotBefore = 5 | classContextSpecific
TagDetailsNotAfter = 6 | classContextSpecific
TagDetailsIssuer = 7 | classContextSpecific
)
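// Illustrative note (not part of the original source): with the class bits above, the concrete
// tag bytes are TagCertDetails = 0xA0, TagCertCurve = 0x81, TagCertPublicKey = 0x82,
// TagCertSignature = 0x83, TagDetailsName = 0x80 and TagDetailsNetworks = 0xA1; these are the
// values visible when inspecting a marshalled certificate with a DER dumper.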
const (
// MaxCertificateSize is the maximum length a valid certificate can be
MaxCertificateSize = 65536
// MaxNameLength is capped at the longest realistic DNS domain name to help facilitate DNS systems
MaxNameLength = 253
// MaxNetworkLength is the maximum length a network value can be.
// 16 bytes for an ipv6 address + 1 byte for the prefix length
MaxNetworkLength = 17
)
type certificateV2 struct {
details detailsV2
// RawDetails contains the entire asn.1 DER encoded Details struct
// This is to benefit forwards compatibility in signature checking.
// signature(RawDetails + Curve + PublicKey) == Signature
rawDetails []byte
curve Curve
publicKey []byte
signature []byte
}
type detailsV2 struct {
name string
networks []netip.Prefix // MUST BE SORTED
unsafeNetworks []netip.Prefix // MUST BE SORTED
groups []string
isCA bool
notBefore time.Time
notAfter time.Time
issuer string
}
func (c *certificateV2) Version() Version {
return Version2
}
func (c *certificateV2) Curve() Curve {
return c.curve
}
func (c *certificateV2) Groups() []string {
return c.details.groups
}
func (c *certificateV2) IsCA() bool {
return c.details.isCA
}
func (c *certificateV2) Issuer() string {
return c.details.issuer
}
func (c *certificateV2) Name() string {
return c.details.name
}
func (c *certificateV2) Networks() []netip.Prefix {
return c.details.networks
}
func (c *certificateV2) NotAfter() time.Time {
return c.details.notAfter
}
func (c *certificateV2) NotBefore() time.Time {
return c.details.notBefore
}
func (c *certificateV2) PublicKey() []byte {
return c.publicKey
}
func (c *certificateV2) MarshalPublicKeyPEM() []byte {
return marshalCertPublicKeyToPEM(c)
}
func (c *certificateV2) Signature() []byte {
return c.signature
}
func (c *certificateV2) UnsafeNetworks() []netip.Prefix {
return c.details.unsafeNetworks
}
func (c *certificateV2) Fingerprint() (string, error) {
if len(c.rawDetails) == 0 {
return "", ErrMissingDetails
}
b := make([]byte, len(c.rawDetails)+1+len(c.publicKey)+len(c.signature))
copy(b, c.rawDetails)
b[len(c.rawDetails)] = byte(c.curve)
copy(b[len(c.rawDetails)+1:], c.publicKey)
copy(b[len(c.rawDetails)+1+len(c.publicKey):], c.signature)
sum := sha256.Sum256(b)
return hex.EncodeToString(sum[:]), nil
}
func (c *certificateV2) CheckSignature(key []byte) bool {
if len(c.rawDetails) == 0 {
return false
}
b := make([]byte, len(c.rawDetails)+1+len(c.publicKey))
copy(b, c.rawDetails)
b[len(c.rawDetails)] = byte(c.curve)
copy(b[len(c.rawDetails)+1:], c.publicKey)
switch c.curve {
case Curve_CURVE25519:
return ed25519.Verify(key, b, c.signature)
case Curve_P256:
pubKey, err := ecdsa.ParseUncompressedPublicKey(elliptic.P256(), key)
if err != nil {
return false
}
hashed := sha256.Sum256(b)
return ecdsa.VerifyASN1(pubKey, hashed[:], c.signature)
default:
return false
}
}
func (c *certificateV2) Expired(t time.Time) bool {
return c.details.notBefore.After(t) || c.details.notAfter.Before(t)
}
func (c *certificateV2) VerifyPrivateKey(curve Curve, key []byte) error {
if curve != c.curve {
return ErrPublicPrivateCurveMismatch
}
if c.details.isCA {
switch curve {
case Curve_CURVE25519:
// the call to ed25519.PrivateKey.Public() below would otherwise panic with a slice bounds out of range error
if len(key) != ed25519.PrivateKeySize {
return ErrInvalidPrivateKey
}
if !ed25519.PublicKey(c.publicKey).Equal(ed25519.PrivateKey(key).Public()) {
return ErrPublicPrivateKeyMismatch
}
case Curve_P256:
privkey, err := ecdh.P256().NewPrivateKey(key)
if err != nil {
return ErrInvalidPrivateKey
}
pub := privkey.PublicKey().Bytes()
if !bytes.Equal(pub, c.publicKey) {
return ErrPublicPrivateKeyMismatch
}
default:
return fmt.Errorf("invalid curve: %s", curve)
}
return nil
}
var pub []byte
switch curve {
case Curve_CURVE25519:
var err error
pub, err = curve25519.X25519(key, curve25519.Basepoint)
if err != nil {
return ErrInvalidPrivateKey
}
case Curve_P256:
privkey, err := ecdh.P256().NewPrivateKey(key)
if err != nil {
return ErrInvalidPrivateKey
}
pub = privkey.PublicKey().Bytes()
default:
return fmt.Errorf("invalid curve: %s", curve)
}
if !bytes.Equal(pub, c.publicKey) {
return ErrPublicPrivateKeyMismatch
}
return nil
}
func (c *certificateV2) String() string {
mb, err := c.marshalJSON()
if err != nil {
return fmt.Sprintf("<error marshalling certificate: %v>", err)
}
b, err := json.MarshalIndent(mb, "", "\t")
if err != nil {
return fmt.Sprintf("<error marshalling certificate: %v>", err)
}
return string(b)
}
func (c *certificateV2) MarshalForHandshakes() ([]byte, error) {
if c.rawDetails == nil {
return nil, ErrEmptyRawDetails
}
var b cryptobyte.Builder
// Outermost certificate
b.AddASN1(asn1.SEQUENCE, func(b *cryptobyte.Builder) {
// Add the cert details which is already marshalled
b.AddBytes(c.rawDetails)
// Skipping the curve and public key since those come across in a different part of the handshake
// Add the signature
b.AddASN1(TagCertSignature, func(b *cryptobyte.Builder) {
b.AddBytes(c.signature)
})
})
return b.Bytes()
}
func (c *certificateV2) Marshal() ([]byte, error) {
if c.rawDetails == nil {
return nil, ErrEmptyRawDetails
}
var b cryptobyte.Builder
// Outermost certificate
b.AddASN1(asn1.SEQUENCE, func(b *cryptobyte.Builder) {
// Add the cert details which is already marshalled
b.AddBytes(c.rawDetails)
// Add the curve only if it's not the default value
if c.curve != Curve_CURVE25519 {
b.AddASN1(TagCertCurve, func(b *cryptobyte.Builder) {
b.AddBytes([]byte{byte(c.curve)})
})
}
// Add the public key if it is not empty
if c.publicKey != nil {
b.AddASN1(TagCertPublicKey, func(b *cryptobyte.Builder) {
b.AddBytes(c.publicKey)
})
}
// Add the signature
b.AddASN1(TagCertSignature, func(b *cryptobyte.Builder) {
b.AddBytes(c.signature)
})
})
return b.Bytes()
}
func (c *certificateV2) MarshalPEM() ([]byte, error) {
b, err := c.Marshal()
if err != nil {
return nil, err
}
return pem.EncodeToMemory(&pem.Block{Type: CertificateV2Banner, Bytes: b}), nil
}
func (c *certificateV2) MarshalJSON() ([]byte, error) {
b, err := c.marshalJSON()
if err != nil {
return nil, err
}
return json.Marshal(b)
}
func (c *certificateV2) marshalJSON() (m, error) {
fp, err := c.Fingerprint()
if err != nil {
return nil, err
}
return m{
"details": m{
"name": c.details.name,
"networks": c.details.networks,
"unsafeNetworks": c.details.unsafeNetworks,
"groups": c.details.groups,
"notBefore": c.details.notBefore,
"notAfter": c.details.notAfter,
"isCa": c.details.isCA,
"issuer": c.details.issuer,
},
"version": Version2,
"publicKey": fmt.Sprintf("%x", c.publicKey),
"curve": c.curve.String(),
"fingerprint": fp,
"signature": fmt.Sprintf("%x", c.Signature()),
}, nil
}
func (c *certificateV2) Copy() Certificate {
nc := &certificateV2{
details: detailsV2{
name: c.details.name,
notBefore: c.details.notBefore,
notAfter: c.details.notAfter,
isCA: c.details.isCA,
issuer: c.details.issuer,
},
curve: c.curve,
publicKey: make([]byte, len(c.publicKey)),
signature: make([]byte, len(c.signature)),
rawDetails: make([]byte, len(c.rawDetails)),
}
if c.details.groups != nil {
nc.details.groups = make([]string, len(c.details.groups))
copy(nc.details.groups, c.details.groups)
}
if c.details.networks != nil {
nc.details.networks = make([]netip.Prefix, len(c.details.networks))
copy(nc.details.networks, c.details.networks)
}
if c.details.unsafeNetworks != nil {
nc.details.unsafeNetworks = make([]netip.Prefix, len(c.details.unsafeNetworks))
copy(nc.details.unsafeNetworks, c.details.unsafeNetworks)
}
copy(nc.rawDetails, c.rawDetails)
copy(nc.signature, c.signature)
copy(nc.publicKey, c.publicKey)
return nc
}
func (c *certificateV2) fromTBSCertificate(t *TBSCertificate) error {
c.details = detailsV2{
name: t.Name,
networks: t.Networks,
unsafeNetworks: t.UnsafeNetworks,
groups: t.Groups,
isCA: t.IsCA,
notBefore: t.NotBefore,
notAfter: t.NotAfter,
issuer: t.issuer,
}
c.curve = t.Curve
c.publicKey = t.PublicKey
return c.validate()
}
func (c *certificateV2) validate() error {
// Empty names are allowed
if len(c.publicKey) == 0 {
return ErrInvalidPublicKey
}
if !c.details.isCA && len(c.details.networks) == 0 {
return NewErrInvalidCertificateProperties("non-CA certificate must contain at least 1 network")
}
hasV4Networks := false
hasV6Networks := false
for _, network := range c.details.networks {
if !network.IsValid() || !network.Addr().IsValid() {
return NewErrInvalidCertificateProperties("invalid network: %s", network)
}
if network.Addr().IsUnspecified() {
return NewErrInvalidCertificateProperties("non-CA certificates must not use the zero address as a network: %s", network)
}
if network.Addr().Zone() != "" {
return NewErrInvalidCertificateProperties("networks may not contain zones: %s", network)
}
if network.Addr().Is4In6() {
return NewErrInvalidCertificateProperties("4in6 networks are not allowed: %s", network)
}
hasV4Networks = hasV4Networks || network.Addr().Is4()
hasV6Networks = hasV6Networks || network.Addr().Is6()
}
slices.SortFunc(c.details.networks, comparePrefix)
err := findDuplicatePrefix(c.details.networks)
if err != nil {
return err
}
for _, network := range c.details.unsafeNetworks {
if !network.IsValid() || !network.Addr().IsValid() {
return NewErrInvalidCertificateProperties("invalid unsafe network: %s", network)
}
if network.Addr().Zone() != "" {
return NewErrInvalidCertificateProperties("unsafe networks may not contain zones: %s", network)
}
if !c.details.isCA {
if network.Addr().Is6() {
if !hasV6Networks {
return NewErrInvalidCertificateProperties("IPv6 unsafe networks require an IPv6 address assignment: %s", network)
}
} else if network.Addr().Is4() {
if !hasV4Networks {
return NewErrInvalidCertificateProperties("IPv4 unsafe networks require an IPv4 address assignment: %s", network)
}
}
}
}
slices.SortFunc(c.details.unsafeNetworks, comparePrefix)
err = findDuplicatePrefix(c.details.unsafeNetworks)
if err != nil {
return err
}
return nil
}
func (c *certificateV2) marshalForSigning() ([]byte, error) {
d, err := c.details.Marshal()
if err != nil {
return nil, fmt.Errorf("marshalling certificate details failed: %w", err)
}
c.rawDetails = d
b := make([]byte, len(c.rawDetails)+1+len(c.publicKey))
copy(b, c.rawDetails)
b[len(c.rawDetails)] = byte(c.curve)
copy(b[len(c.rawDetails)+1:], c.publicKey)
return b, nil
}
func (c *certificateV2) setSignature(b []byte) error {
if len(b) == 0 {
return ErrEmptySignature
}
c.signature = b
return nil
}
func (d *detailsV2) Marshal() ([]byte, error) {
var b cryptobyte.Builder
var err error
// Details are a structure
b.AddASN1(TagCertDetails, func(b *cryptobyte.Builder) {
// Add the name
b.AddASN1(TagDetailsName, func(b *cryptobyte.Builder) {
b.AddBytes([]byte(d.name))
})
// Add the networks if any exist
if len(d.networks) > 0 {
b.AddASN1(TagDetailsNetworks, func(b *cryptobyte.Builder) {
for _, n := range d.networks {
sb, innerErr := n.MarshalBinary()
if innerErr != nil {
// netip.Prefix.MarshalBinary does not currently return an error, but handle it defensively
err = fmt.Errorf("unable to marshal network: %w", innerErr)
return
}
b.AddASN1OctetString(sb)
}
})
}
// Add the unsafe networks if any exist
if len(d.unsafeNetworks) > 0 {
b.AddASN1(TagDetailsUnsafeNetworks, func(b *cryptobyte.Builder) {
for _, n := range d.unsafeNetworks {
sb, innerErr := n.MarshalBinary()
if innerErr != nil {
// netip.Prefix.MarshalBinary does not currently return an error, but handle it defensively
err = fmt.Errorf("unable to marshal unsafe network: %w", innerErr)
return
}
b.AddASN1OctetString(sb)
}
})
}
// Add groups if any exist
if len(d.groups) > 0 {
b.AddASN1(TagDetailsGroups, func(b *cryptobyte.Builder) {
for _, group := range d.groups {
b.AddASN1(asn1.UTF8String, func(b *cryptobyte.Builder) {
b.AddBytes([]byte(group))
})
}
})
}
// Add IsCA only if true
if d.isCA {
b.AddASN1(TagDetailsIsCA, func(b *cryptobyte.Builder) {
b.AddUint8(0xff)
})
}
// Add not before
b.AddASN1Int64WithTag(d.notBefore.Unix(), TagDetailsNotBefore)
// Add not after
b.AddASN1Int64WithTag(d.notAfter.Unix(), TagDetailsNotAfter)
// Add the issuer if present
if d.issuer != "" {
issuerBytes, innerErr := hex.DecodeString(d.issuer)
if innerErr != nil {
err = fmt.Errorf("failed to decode issuer: %w", innerErr)
return
}
b.AddASN1(TagDetailsIssuer, func(b *cryptobyte.Builder) {
b.AddBytes(issuerBytes)
})
}
})
if err != nil {
return nil, err
}
return b.Bytes()
}
func unmarshalCertificateV2(b []byte, publicKey []byte, curve Curve) (*certificateV2, error) {
l := len(b)
if l == 0 || l > MaxCertificateSize {
return nil, ErrBadFormat
}
input := cryptobyte.String(b)
// Open the envelope
if !input.ReadASN1(&input, asn1.SEQUENCE) || input.Empty() {
return nil, ErrBadFormat
}
// Grab the cert details, we need to preserve the tag and length
var rawDetails cryptobyte.String
if !input.ReadASN1Element(&rawDetails, TagCertDetails) || rawDetails.Empty() {
return nil, ErrBadFormat
}
// Maybe grab the curve
var rawCurve byte
if !readOptionalASN1Byte(&input, &rawCurve, TagCertCurve, byte(curve)) {
return nil, ErrBadFormat
}
curve = Curve(rawCurve)
// Maybe grab the public key
var rawPublicKey cryptobyte.String
if len(publicKey) > 0 {
rawPublicKey = publicKey
} else if !input.ReadOptionalASN1(&rawPublicKey, nil, TagCertPublicKey) {
return nil, ErrBadFormat
}
if len(rawPublicKey) == 0 {
return nil, ErrBadFormat
}
// Grab the signature
var rawSignature cryptobyte.String
if !input.ReadASN1(&rawSignature, TagCertSignature) || rawSignature.Empty() {
return nil, ErrBadFormat
}
// Finally unmarshal the details
details, err := unmarshalDetails(rawDetails)
if err != nil {
return nil, err
}
c := &certificateV2{
details: details,
rawDetails: rawDetails,
curve: curve,
publicKey: rawPublicKey,
signature: rawSignature,
}
err = c.validate()
if err != nil {
return nil, err
}
return c, nil
}
func unmarshalDetails(b cryptobyte.String) (detailsV2, error) {
// Open the envelope
if !b.ReadASN1(&b, TagCertDetails) || b.Empty() {
return detailsV2{}, ErrBadFormat
}
// Read the name
var name cryptobyte.String
if !b.ReadASN1(&name, TagDetailsName) || name.Empty() || len(name) > MaxNameLength {
return detailsV2{}, ErrBadFormat
}
// Read the network addresses
var subString cryptobyte.String
var found bool
if !b.ReadOptionalASN1(&subString, &found, TagDetailsNetworks) {
return detailsV2{}, ErrBadFormat
}
var networks []netip.Prefix
var val cryptobyte.String
if found {
for !subString.Empty() {
if !subString.ReadASN1(&val, asn1.OCTET_STRING) || val.Empty() || len(val) > MaxNetworkLength {
return detailsV2{}, ErrBadFormat
}
var n netip.Prefix
if err := n.UnmarshalBinary(val); err != nil {
return detailsV2{}, ErrBadFormat
}
networks = append(networks, n)
}
}
// Read out any unsafe networks
if !b.ReadOptionalASN1(&subString, &found, TagDetailsUnsafeNetworks) {
return detailsV2{}, ErrBadFormat
}
var unsafeNetworks []netip.Prefix
if found {
for !subString.Empty() {
if !subString.ReadASN1(&val, asn1.OCTET_STRING) || val.Empty() || len(val) > MaxNetworkLength {
return detailsV2{}, ErrBadFormat
}
var n netip.Prefix
if err := n.UnmarshalBinary(val); err != nil {
return detailsV2{}, ErrBadFormat
}
unsafeNetworks = append(unsafeNetworks, n)
}
}
// Read out any groups
if !b.ReadOptionalASN1(&subString, &found, TagDetailsGroups) {
return detailsV2{}, ErrBadFormat
}
var groups []string
if found {
for !subString.Empty() {
if !subString.ReadASN1(&val, asn1.UTF8String) || val.Empty() {
return detailsV2{}, ErrBadFormat
}
groups = append(groups, string(val))
}
}
// Read out IsCA
var isCa bool
if !readOptionalASN1Boolean(&b, &isCa, TagDetailsIsCA, false) {
return detailsV2{}, ErrBadFormat
}
// Read not before and not after
var notBefore int64
if !b.ReadASN1Int64WithTag(&notBefore, TagDetailsNotBefore) {
return detailsV2{}, ErrBadFormat
}
var notAfter int64
if !b.ReadASN1Int64WithTag(&notAfter, TagDetailsNotAfter) {
return detailsV2{}, ErrBadFormat
}
// Read issuer
var issuer cryptobyte.String
if !b.ReadOptionalASN1(&issuer, nil, TagDetailsIssuer) {
return detailsV2{}, ErrBadFormat
}
return detailsV2{
name: string(name),
networks: networks,
unsafeNetworks: unsafeNetworks,
groups: groups,
isCA: isCa,
notBefore: time.Unix(notBefore, 0),
notAfter: time.Unix(notAfter, 0),
issuer: hex.EncodeToString(issuer),
}, nil
}

View File

@@ -1,320 +0,0 @@
package cert
import (
"crypto/ed25519"
"crypto/rand"
"encoding/hex"
"net/netip"
"slices"
"testing"
"time"
"github.com/slackhq/nebula/test"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestCertificateV2_Marshal(t *testing.T) {
t.Parallel()
before := time.Now().Add(time.Second * -60).Round(time.Second)
after := time.Now().Add(time.Second * 60).Round(time.Second)
pubKey := []byte("1234567890abcedfghij1234567890ab")
nc := certificateV2{
details: detailsV2{
name: "testing",
networks: []netip.Prefix{
mustParsePrefixUnmapped("10.1.1.2/16"),
mustParsePrefixUnmapped("10.1.1.1/24"),
},
unsafeNetworks: []netip.Prefix{
mustParsePrefixUnmapped("9.1.1.3/16"),
mustParsePrefixUnmapped("9.1.1.2/24"),
},
groups: []string{"test-group1", "test-group2", "test-group3"},
notBefore: before,
notAfter: after,
isCA: false,
issuer: "1234567890abcdef1234567890abcdef",
},
signature: []byte("1234567890abcdef1234567890abcdef"),
publicKey: pubKey,
}
db, err := nc.details.Marshal()
require.NoError(t, err)
nc.rawDetails = db
b, err := nc.Marshal()
require.NoError(t, err)
//t.Log("Cert size:", len(b))
nc2, err := unmarshalCertificateV2(b, nil, Curve_CURVE25519)
require.NoError(t, err)
assert.Equal(t, Version2, nc.Version())
assert.Equal(t, Curve_CURVE25519, nc.Curve())
assert.Equal(t, nc.Signature(), nc2.Signature())
assert.Equal(t, nc.Name(), nc2.Name())
assert.Equal(t, nc.NotBefore(), nc2.NotBefore())
assert.Equal(t, nc.NotAfter(), nc2.NotAfter())
assert.Equal(t, nc.PublicKey(), nc2.PublicKey())
assert.Equal(t, nc.IsCA(), nc2.IsCA())
assert.Equal(t, nc.Issuer(), nc2.Issuer())
// unmarshalling will sort networks and unsafeNetworks, so we need to do the same
// but first confirm that the unsorted values do not match
assert.NotEqual(t, nc.Networks(), nc2.Networks())
assert.NotEqual(t, nc.UnsafeNetworks(), nc2.UnsafeNetworks())
slices.SortFunc(nc.details.networks, comparePrefix)
slices.SortFunc(nc.details.unsafeNetworks, comparePrefix)
assert.Equal(t, nc.Networks(), nc2.Networks())
assert.Equal(t, nc.UnsafeNetworks(), nc2.UnsafeNetworks())
assert.Equal(t, nc.Groups(), nc2.Groups())
}
func TestCertificateV2_PublicKeyPem(t *testing.T) {
t.Parallel()
before := time.Now().Add(time.Second * -60).Round(time.Second)
after := time.Now().Add(time.Second * 60).Round(time.Second)
pubKey := ed25519.PublicKey("1234567890abcedfghij1234567890ab")
nc := certificateV2{
details: detailsV2{
name: "testing",
networks: []netip.Prefix{},
unsafeNetworks: []netip.Prefix{},
groups: []string{"test-group1", "test-group2", "test-group3"},
notBefore: before,
notAfter: after,
isCA: false,
issuer: "1234567890abcedfghij1234567890ab",
},
publicKey: pubKey,
signature: []byte("1234567890abcedfghij1234567890ab"),
}
assert.Equal(t, Version2, nc.Version())
assert.Equal(t, Curve_CURVE25519, nc.Curve())
pubPem := "-----BEGIN NEBULA X25519 PUBLIC KEY-----\nMTIzNDU2Nzg5MGFiY2VkZmdoaWoxMjM0NTY3ODkwYWI=\n-----END NEBULA X25519 PUBLIC KEY-----\n"
assert.Equal(t, string(nc.MarshalPublicKeyPEM()), pubPem)
assert.False(t, nc.IsCA())
nc.details.isCA = true
assert.Equal(t, Curve_CURVE25519, nc.Curve())
pubPem = "-----BEGIN NEBULA ED25519 PUBLIC KEY-----\nMTIzNDU2Nzg5MGFiY2VkZmdoaWoxMjM0NTY3ODkwYWI=\n-----END NEBULA ED25519 PUBLIC KEY-----\n"
assert.Equal(t, string(nc.MarshalPublicKeyPEM()), pubPem)
assert.True(t, nc.IsCA())
pubP256KeyPem := []byte(`-----BEGIN NEBULA P256 PUBLIC KEY-----
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAA=
-----END NEBULA P256 PUBLIC KEY-----
`)
pubP256Key, _, _, err := UnmarshalPublicKeyFromPEM(pubP256KeyPem)
require.NoError(t, err)
nc.curve = Curve_P256
nc.publicKey = pubP256Key
assert.Equal(t, Curve_P256, nc.Curve())
assert.Equal(t, string(nc.MarshalPublicKeyPEM()), string(pubP256KeyPem))
assert.True(t, nc.IsCA())
nc.details.isCA = false
assert.Equal(t, Curve_P256, nc.Curve())
assert.Equal(t, string(nc.MarshalPublicKeyPEM()), string(pubP256KeyPem))
assert.False(t, nc.IsCA())
}
func TestCertificateV2_Expired(t *testing.T) {
nc := certificateV2{
details: detailsV2{
notBefore: time.Now().Add(time.Second * -60).Round(time.Second),
notAfter: time.Now().Add(time.Second * 60).Round(time.Second),
},
}
assert.True(t, nc.Expired(time.Now().Add(time.Hour)))
assert.True(t, nc.Expired(time.Now().Add(-time.Hour)))
assert.False(t, nc.Expired(time.Now()))
}
func TestCertificateV2_MarshalJSON(t *testing.T) {
time.Local = time.UTC
pubKey := []byte("1234567890abcedf1234567890abcedf")
nc := certificateV2{
details: detailsV2{
name: "testing",
networks: []netip.Prefix{
mustParsePrefixUnmapped("10.1.1.1/24"),
mustParsePrefixUnmapped("10.1.1.2/16"),
},
unsafeNetworks: []netip.Prefix{
mustParsePrefixUnmapped("9.1.1.2/24"),
mustParsePrefixUnmapped("9.1.1.3/16"),
},
groups: []string{"test-group1", "test-group2", "test-group3"},
notBefore: time.Date(1, 0, 0, 1, 0, 0, 0, time.UTC),
notAfter: time.Date(1, 0, 0, 2, 0, 0, 0, time.UTC),
isCA: false,
issuer: "1234567890abcedf1234567890abcedf",
},
publicKey: pubKey,
signature: []byte("1234567890abcedf1234567890abcedf1234567890abcedf1234567890abcedf"),
}
b, err := nc.MarshalJSON()
require.ErrorIs(t, err, ErrMissingDetails)
rd, err := nc.details.Marshal()
require.NoError(t, err)
nc.rawDetails = rd
b, err = nc.MarshalJSON()
require.NoError(t, err)
assert.JSONEq(
t,
"{\"curve\":\"CURVE25519\",\"details\":{\"groups\":[\"test-group1\",\"test-group2\",\"test-group3\"],\"isCa\":false,\"issuer\":\"1234567890abcedf1234567890abcedf\",\"name\":\"testing\",\"networks\":[\"10.1.1.1/24\",\"10.1.1.2/16\"],\"notAfter\":\"0000-11-30T02:00:00Z\",\"notBefore\":\"0000-11-30T01:00:00Z\",\"unsafeNetworks\":[\"9.1.1.2/24\",\"9.1.1.3/16\"]},\"fingerprint\":\"152d9a7400c1e001cb76cffd035215ebb351f69eeb797f7f847dd086e15e56dd\",\"publicKey\":\"3132333435363738393061626365646631323334353637383930616263656466\",\"signature\":\"31323334353637383930616263656466313233343536373839306162636564663132333435363738393061626365646631323334353637383930616263656466\",\"version\":2}",
string(b),
)
}
func TestCertificateV2_VerifyPrivateKey(t *testing.T) {
ca, _, caKey, _ := NewTestCaCert(Version2, Curve_CURVE25519, time.Time{}, time.Time{}, nil, nil, nil)
err := ca.VerifyPrivateKey(Curve_CURVE25519, caKey)
require.NoError(t, err)
err = ca.VerifyPrivateKey(Curve_CURVE25519, caKey[:16])
require.ErrorIs(t, err, ErrInvalidPrivateKey)
_, caKey2, err := ed25519.GenerateKey(rand.Reader)
require.NoError(t, err)
err = ca.VerifyPrivateKey(Curve_CURVE25519, caKey2)
require.ErrorIs(t, err, ErrPublicPrivateKeyMismatch)
c, _, priv, _ := NewTestCert(Version2, Curve_CURVE25519, ca, caKey, "test", time.Time{}, time.Time{}, nil, nil, nil)
rawPriv, b, curve, err := UnmarshalPrivateKeyFromPEM(priv)
require.NoError(t, err)
assert.Empty(t, b)
assert.Equal(t, Curve_CURVE25519, curve)
err = c.VerifyPrivateKey(Curve_CURVE25519, rawPriv)
require.NoError(t, err)
_, priv2 := X25519Keypair()
err = c.VerifyPrivateKey(Curve_P256, priv2)
require.ErrorIs(t, err, ErrPublicPrivateCurveMismatch)
err = c.VerifyPrivateKey(Curve_CURVE25519, priv2)
require.ErrorIs(t, err, ErrPublicPrivateKeyMismatch)
err = c.VerifyPrivateKey(Curve_CURVE25519, priv2[:16])
require.ErrorIs(t, err, ErrInvalidPrivateKey)
ac, ok := c.(*certificateV2)
require.True(t, ok)
ac.curve = Curve(99)
err = c.VerifyPrivateKey(Curve(99), priv2)
require.EqualError(t, err, "invalid curve: 99")
ca2, _, caKey2, _ := NewTestCaCert(Version2, Curve_P256, time.Time{}, time.Time{}, nil, nil, nil)
err = ca.VerifyPrivateKey(Curve_CURVE25519, caKey)
require.NoError(t, err)
err = ca2.VerifyPrivateKey(Curve_P256, caKey2[:16])
require.ErrorIs(t, err, ErrInvalidPrivateKey)
c, _, priv, _ = NewTestCert(Version2, Curve_P256, ca2, caKey2, "test", time.Time{}, time.Time{}, nil, nil, nil)
rawPriv, b, curve, err = UnmarshalPrivateKeyFromPEM(priv)
err = c.VerifyPrivateKey(Curve_P256, priv[:16])
require.ErrorIs(t, err, ErrInvalidPrivateKey)
err = c.VerifyPrivateKey(Curve_P256, priv)
require.ErrorIs(t, err, ErrInvalidPrivateKey)
aCa, ok := ca2.(*certificateV2)
require.True(t, ok)
aCa.curve = Curve(99)
err = aCa.VerifyPrivateKey(Curve(99), priv2)
require.EqualError(t, err, "invalid curve: 99")
}
func TestCertificateV2_VerifyPrivateKeyP256(t *testing.T) {
ca, _, caKey, _ := NewTestCaCert(Version2, Curve_P256, time.Time{}, time.Time{}, nil, nil, nil)
err := ca.VerifyPrivateKey(Curve_P256, caKey)
require.NoError(t, err)
_, _, caKey2, _ := NewTestCaCert(Version2, Curve_P256, time.Time{}, time.Time{}, nil, nil, nil)
require.NoError(t, err)
err = ca.VerifyPrivateKey(Curve_P256, caKey2)
require.Error(t, err)
c, _, priv, _ := NewTestCert(Version2, Curve_P256, ca, caKey, "test", time.Time{}, time.Time{}, nil, nil, nil)
rawPriv, b, curve, err := UnmarshalPrivateKeyFromPEM(priv)
require.NoError(t, err)
assert.Empty(t, b)
assert.Equal(t, Curve_P256, curve)
err = c.VerifyPrivateKey(Curve_P256, rawPriv)
require.NoError(t, err)
_, priv2 := P256Keypair()
err = c.VerifyPrivateKey(Curve_P256, priv2)
require.Error(t, err)
}
func TestCertificateV2_Copy(t *testing.T) {
ca, _, caKey, _ := NewTestCaCert(Version2, Curve_CURVE25519, time.Now(), time.Now().Add(10*time.Minute), nil, nil, nil)
c, _, _, _ := NewTestCert(Version2, Curve_CURVE25519, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), nil, nil, nil)
cc := c.Copy()
test.AssertDeepCopyEqual(t, c, cc)
}
func TestUnmarshalCertificateV2(t *testing.T) {
data := []byte("\x98\x00\x00")
_, err := unmarshalCertificateV2(data, nil, Curve_CURVE25519)
require.EqualError(t, err, "bad wire format")
}
func TestCertificateV2_marshalForSigningStability(t *testing.T) {
before := time.Date(1996, time.May, 5, 0, 0, 0, 0, time.UTC)
after := before.Add(time.Second * 60).Round(time.Second)
pubKey := []byte("1234567890abcedfghij1234567890ab")
nc := certificateV2{
details: detailsV2{
name: "testing",
networks: []netip.Prefix{
mustParsePrefixUnmapped("10.1.1.2/16"),
mustParsePrefixUnmapped("10.1.1.1/24"),
},
unsafeNetworks: []netip.Prefix{
mustParsePrefixUnmapped("9.1.1.3/16"),
mustParsePrefixUnmapped("9.1.1.2/24"),
},
groups: []string{"test-group1", "test-group2", "test-group3"},
notBefore: before,
notAfter: after,
isCA: false,
issuer: "1234567890abcdef1234567890abcdef",
},
signature: []byte("1234567890abcdef1234567890abcdef"),
publicKey: pubKey,
}
const expectedRawDetailsStr = "a070800774657374696e67a10e04050a0101021004050a01010118a20e0405090101031004050901010218a3270c0b746573742d67726f7570310c0b746573742d67726f7570320c0b746573742d67726f7570338504318bef808604318befbc87101234567890abcdef1234567890abcdef"
expectedRawDetails, err := hex.DecodeString(expectedRawDetailsStr)
require.NoError(t, err)
db, err := nc.details.Marshal()
require.NoError(t, err)
assert.Equal(t, expectedRawDetails, db)
expectedForSigning, err := hex.DecodeString(expectedRawDetailsStr + "00313233343536373839306162636564666768696a313233343536373839306162")
require.NoError(t, err)
b, err := nc.marshalForSigning()
require.NoError(t, err)
assert.Equal(t, expectedForSigning, b)
}

View File

@@ -1,300 +0,0 @@
package cert
import (
"crypto/aes"
"crypto/cipher"
"crypto/ed25519"
"crypto/rand"
"encoding/pem"
"fmt"
"io"
"math"
"golang.org/x/crypto/argon2"
"google.golang.org/protobuf/proto"
)
type NebulaEncryptedData struct {
EncryptionMetadata NebulaEncryptionMetadata
Ciphertext []byte
}
type NebulaEncryptionMetadata struct {
EncryptionAlgorithm string
Argon2Parameters Argon2Parameters
}
// Argon2Parameters KDF factors
type Argon2Parameters struct {
version rune
Memory uint32 // KiB
Parallelism uint8
Iterations uint32
salt []byte
}
// NewArgon2Parameters returns a new Argon2Parameters object with the current Argon2 version set
func NewArgon2Parameters(memory uint32, parallelism uint8, iterations uint32) *Argon2Parameters {
return &Argon2Parameters{
version: argon2.Version,
Memory: memory, // KiB
Parallelism: parallelism,
Iterations: iterations,
}
}
// Encrypts data using AES-256-GCM and the Argon2id key derivation function
func aes256Encrypt(passphrase []byte, kdfParams *Argon2Parameters, data []byte) ([]byte, error) {
key, err := aes256DeriveKey(passphrase, kdfParams)
if err != nil {
return nil, err
}
// this should never happen, but since this dictates how our calls into the
// aes package behave and could be catastrophic, let's sanity check this
if len(key) != 32 {
return nil, fmt.Errorf("invalid AES-256 key length (%d) - cowardly refusing to encrypt", len(key))
}
block, err := aes.NewCipher(key)
if err != nil {
return nil, err
}
gcm, err := cipher.NewGCM(block)
if err != nil {
return nil, err
}
nonce := make([]byte, gcm.NonceSize())
if _, err := io.ReadFull(rand.Reader, nonce); err != nil {
return nil, err
}
ciphertext := gcm.Seal(nil, nonce, data, nil)
blob := joinNonceCiphertext(nonce, ciphertext)
return blob, nil
}
// Decrypts data using AES-256-GCM and the Argon2id key derivation function
// Expects the data to include an Argon2id parameter string before the encrypted data
func aes256Decrypt(passphrase []byte, kdfParams *Argon2Parameters, data []byte) ([]byte, error) {
key, err := aes256DeriveKey(passphrase, kdfParams)
if err != nil {
return nil, err
}
block, err := aes.NewCipher(key)
if err != nil {
return nil, err
}
gcm, err := cipher.NewGCM(block)
if err != nil {
return nil, err
}
nonce, ciphertext, err := splitNonceCiphertext(data, gcm.NonceSize())
if err != nil {
return nil, err
}
plaintext, err := gcm.Open(nil, nonce, ciphertext, nil)
if err != nil {
return nil, fmt.Errorf("invalid passphrase or corrupt private key")
}
return plaintext, nil
}
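// Example (not part of the original source): a minimal round-trip sketch of the
// unexported helpers above, assuming it lives in this package. The passphrase,
// plaintext, and Argon2 factors are illustrative placeholders only.
func exampleAes256RoundTrip() ([]byte, error) {
    kdf := NewArgon2Parameters(64*1024, 4, 3)
    ciphertext, err := aes256Encrypt([]byte("example passphrase"), kdf, []byte("example plaintext"))
    if err != nil {
        return nil, err
    }
    // aes256DeriveKey filled in kdf.salt during encryption, so the same
    // parameters can be reused to derive the decryption key.
    return aes256Decrypt([]byte("example passphrase"), kdf, ciphertext)
}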
func aes256DeriveKey(passphrase []byte, params *Argon2Parameters) ([]byte, error) {
if params.salt == nil {
params.salt = make([]byte, 32)
if _, err := rand.Read(params.salt); err != nil {
return nil, err
}
}
// keySize of 32 bytes will result in AES-256 encryption
key, err := deriveKey(passphrase, 32, params)
if err != nil {
return nil, err
}
return key, nil
}
// Derives a key from a passphrase using Argon2id
func deriveKey(passphrase []byte, keySize uint32, params *Argon2Parameters) ([]byte, error) {
if params.version != argon2.Version {
return nil, fmt.Errorf("incompatible Argon2 version: %d", params.version)
}
if params.salt == nil {
return nil, fmt.Errorf("salt must be set in argon2Parameters")
} else if len(params.salt) < 16 {
return nil, fmt.Errorf("salt must be at least 128 bits")
}
key := argon2.IDKey(passphrase, params.salt, params.Iterations, params.Memory, params.Parallelism, keySize)
return key, nil
}
// Prepends nonce to ciphertext
func joinNonceCiphertext(nonce []byte, ciphertext []byte) []byte {
return append(nonce, ciphertext...)
}
// Splits nonce from ciphertext
func splitNonceCiphertext(blob []byte, nonceSize int) ([]byte, []byte, error) {
if len(blob) <= nonceSize {
return nil, nil, fmt.Errorf("invalid ciphertext blob - blob shorter than nonce length")
}
return blob[:nonceSize], blob[nonceSize:], nil
}
// EncryptAndMarshalSigningPrivateKey is a simple helper to encrypt and PEM encode a private key
func EncryptAndMarshalSigningPrivateKey(curve Curve, b []byte, passphrase []byte, kdfParams *Argon2Parameters) ([]byte, error) {
ciphertext, err := aes256Encrypt(passphrase, kdfParams, b)
if err != nil {
return nil, err
}
b, err = proto.Marshal(&RawNebulaEncryptedData{
EncryptionMetadata: &RawNebulaEncryptionMetadata{
EncryptionAlgorithm: "AES-256-GCM",
Argon2Parameters: &RawNebulaArgon2Parameters{
Version: kdfParams.version,
Memory: kdfParams.Memory,
Parallelism: uint32(kdfParams.Parallelism),
Iterations: kdfParams.Iterations,
Salt: kdfParams.salt,
},
},
Ciphertext: ciphertext,
})
if err != nil {
return nil, err
}
switch curve {
case Curve_CURVE25519:
return pem.EncodeToMemory(&pem.Block{Type: EncryptedEd25519PrivateKeyBanner, Bytes: b}), nil
case Curve_P256:
return pem.EncodeToMemory(&pem.Block{Type: EncryptedECDSAP256PrivateKeyBanner, Bytes: b}), nil
default:
return nil, fmt.Errorf("invalid curve: %v", curve)
}
}
// UnmarshalNebulaEncryptedData will unmarshal a protobuf byte representation of a nebula cert into its
// protobuf-generated struct.
func UnmarshalNebulaEncryptedData(b []byte) (*NebulaEncryptedData, error) {
if len(b) == 0 {
return nil, fmt.Errorf("nil byte array")
}
var rned RawNebulaEncryptedData
err := proto.Unmarshal(b, &rned)
if err != nil {
return nil, err
}
if rned.EncryptionMetadata == nil {
return nil, fmt.Errorf("encoded EncryptionMetadata was nil")
}
if rned.EncryptionMetadata.Argon2Parameters == nil {
return nil, fmt.Errorf("encoded Argon2Parameters was nil")
}
params, err := unmarshalArgon2Parameters(rned.EncryptionMetadata.Argon2Parameters)
if err != nil {
return nil, err
}
ned := NebulaEncryptedData{
EncryptionMetadata: NebulaEncryptionMetadata{
EncryptionAlgorithm: rned.EncryptionMetadata.EncryptionAlgorithm,
Argon2Parameters: *params,
},
Ciphertext: rned.Ciphertext,
}
return &ned, nil
}
func unmarshalArgon2Parameters(params *RawNebulaArgon2Parameters) (*Argon2Parameters, error) {
if params.Version < math.MinInt32 || params.Version > math.MaxInt32 {
return nil, fmt.Errorf("Argon2Parameters Version must be at least %d and no more than %d", math.MinInt32, math.MaxInt32)
}
if params.Memory <= 0 || params.Memory > math.MaxUint32 {
return nil, fmt.Errorf("Argon2Parameters Memory must be be greater than 0 and no more than %d KiB", uint32(math.MaxUint32))
}
if params.Parallelism <= 0 || params.Parallelism > math.MaxUint8 {
return nil, fmt.Errorf("Argon2Parameters Parallelism must be be greater than 0 and no more than %d", math.MaxUint8)
}
if params.Iterations <= 0 || params.Iterations > math.MaxUint32 {
return nil, fmt.Errorf("-argon-iterations must be be greater than 0 and no more than %d", uint32(math.MaxUint32))
}
return &Argon2Parameters{
version: params.Version,
Memory: params.Memory,
Parallelism: uint8(params.Parallelism),
Iterations: params.Iterations,
salt: params.Salt,
}, nil
}
// DecryptAndUnmarshalSigningPrivateKey will try to pem decode and decrypt an Ed25519/ECDSA private key with
// the given passphrase, returning any other bytes b or an error on failure
func DecryptAndUnmarshalSigningPrivateKey(passphrase, b []byte) (Curve, []byte, []byte, error) {
var curve Curve
k, r := pem.Decode(b)
if k == nil {
return curve, nil, r, fmt.Errorf("input did not contain a valid PEM encoded block")
}
switch k.Type {
case EncryptedEd25519PrivateKeyBanner:
curve = Curve_CURVE25519
case EncryptedECDSAP256PrivateKeyBanner:
curve = Curve_P256
default:
return curve, nil, r, fmt.Errorf("bytes did not contain a proper nebula encrypted Ed25519/ECDSA private key banner")
}
ned, err := UnmarshalNebulaEncryptedData(k.Bytes)
if err != nil {
return curve, nil, r, err
}
var bytes []byte
switch ned.EncryptionMetadata.EncryptionAlgorithm {
case "AES-256-GCM":
bytes, err = aes256Decrypt(passphrase, &ned.EncryptionMetadata.Argon2Parameters, ned.Ciphertext)
if err != nil {
return curve, nil, r, err
}
default:
return curve, nil, r, fmt.Errorf("unsupported encryption algorithm: %s", ned.EncryptionMetadata.EncryptionAlgorithm)
}
switch curve {
case Curve_CURVE25519:
if len(bytes) != ed25519.PrivateKeySize {
return curve, nil, r, fmt.Errorf("key was not %d bytes, is invalid ed25519 private key", ed25519.PrivateKeySize)
}
case Curve_P256:
if len(bytes) != 32 {
return curve, nil, r, fmt.Errorf("key was not 32 bytes, is invalid ECDSA P256 private key")
}
}
return curve, bytes, r, nil
}
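// Example (not part of the original source): a hedged sketch pairing the two
// exported helpers above. rawKey must already be a valid raw signing key for
// the given curve (64 bytes for Ed25519, 32 for ECDSA P256); the passphrase and
// Argon2 factors are placeholders, not recommended values.
func exampleEncryptedSigningKeyRoundTrip(curve Curve, rawKey []byte) ([]byte, error) {
    passphrase := []byte("example passphrase")
    pemBytes, err := EncryptAndMarshalSigningPrivateKey(curve, rawKey, passphrase, NewArgon2Parameters(64*1024, 4, 3))
    if err != nil {
        return nil, err
    }
    _, decrypted, _, err := DecryptAndUnmarshalSigningPrivateKey(passphrase, pemBytes)
    return decrypted, err
}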

View File

@@ -1,113 +0,0 @@
package cert
import (
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"golang.org/x/crypto/argon2"
)
func TestNewArgon2Parameters(t *testing.T) {
p := NewArgon2Parameters(64*1024, 4, 3)
assert.Equal(t, &Argon2Parameters{
version: argon2.Version,
Memory: 64 * 1024,
Parallelism: 4,
Iterations: 3,
}, p)
p = NewArgon2Parameters(2*1024*1024, 2, 1)
assert.Equal(t, &Argon2Parameters{
version: argon2.Version,
Memory: 2 * 1024 * 1024,
Parallelism: 2,
Iterations: 1,
}, p)
}
func TestDecryptAndUnmarshalSigningPrivateKey(t *testing.T) {
passphrase := []byte("DO NOT USE")
privKey := []byte(`# A good key
-----BEGIN NEBULA ED25519 ENCRYPTED PRIVATE KEY-----
CjsKC0FFUy0yNTYtR0NNEiwIExCAgAQYAyAEKiCPoDfGQiosxNPTbPn5EsMlc2MI
c0Bt4oz6gTrFQhX3aBJcimhHKeAuhyTGvllD0Z19fe+DFPcLH3h5VrdjVfIAajg0
KrbV3n9UHif/Au5skWmquNJzoW1E4MTdRbvpti6o+WdQ49DxjBFhx0YH8LBqrbPU
0BGkUHmIO7daP24=
-----END NEBULA ED25519 ENCRYPTED PRIVATE KEY-----
`)
shortKey := []byte(`# A key which, once decrypted, is too short
-----BEGIN NEBULA ED25519 ENCRYPTED PRIVATE KEY-----
CjsKC0FFUy0yNTYtR0NNEiwIExCAgAQYAyAEKiAVJwdfl3r+eqi/vF6S7OMdpjfo
hAzmTCRnr58Su4AqmBJbCv3zleYCEKYJP6UI3S8ekLMGISsgO4hm5leukCCyqT0Z
cQ76yrberpzkJKoPLGisX8f+xdy4aXSZl7oEYWQte1+vqbtl/eY9PGZhxUQdcyq7
hqzIyrRqfUgVuA==
-----END NEBULA ED25519 ENCRYPTED PRIVATE KEY-----
`)
invalidBanner := []byte(`# Invalid banner (not encrypted)
-----BEGIN NEBULA ED25519 PRIVATE KEY-----
bWRp2CTVFhW9HD/qCd28ltDgK3w8VXSeaEYczDWos8sMUBqDb9jP3+NYwcS4lURG
XgLvodMXZJuaFPssp+WwtA==
-----END NEBULA ED25519 PRIVATE KEY-----
`)
invalidPem := []byte(`# Not a valid PEM format
-BEGIN NEBULA ED25519 ENCRYPTED PRIVATE KEY-----
CjwKC0FFUy0yNTYtR0NNEi0IExCAgIABGAEgBCognnjujd67Vsv99p22wfAjQaDT
oCMW1mdjkU3gACKNW4MSXOWR9Sts4C81yk1RUku2gvGKs3TB9LYoklLsIizSYOLl
+Vs//O1T0I1Xbml2XBAROsb/VSoDln/6LMqR4B6fn6B3GOsLBBqRI8daDl9lRMPB
qrlJ69wer3ZUHFXA
-END NEBULA ED25519 ENCRYPTED PRIVATE KEY-----
`)
keyBundle := appendByteSlices(privKey, shortKey, invalidBanner, invalidPem)
// Success test case
curve, k, rest, err := DecryptAndUnmarshalSigningPrivateKey(passphrase, keyBundle)
require.NoError(t, err)
assert.Equal(t, Curve_CURVE25519, curve)
assert.Len(t, k, 64)
assert.Equal(t, rest, appendByteSlices(shortKey, invalidBanner, invalidPem))
// Fail due to short key
curve, k, rest, err = DecryptAndUnmarshalSigningPrivateKey(passphrase, rest)
require.EqualError(t, err, "key was not 64 bytes, is invalid ed25519 private key")
assert.Nil(t, k)
assert.Equal(t, rest, appendByteSlices(invalidBanner, invalidPem))
// Fail due to invalid banner
curve, k, rest, err = DecryptAndUnmarshalSigningPrivateKey(passphrase, rest)
require.EqualError(t, err, "bytes did not contain a proper nebula encrypted Ed25519/ECDSA private key banner")
assert.Nil(t, k)
assert.Equal(t, rest, invalidPem)
// Fail due to invalid PEM format, because
// it's missing the requisite pre-encapsulation boundary.
curve, k, rest, err = DecryptAndUnmarshalSigningPrivateKey(passphrase, rest)
require.EqualError(t, err, "input did not contain a valid PEM encoded block")
assert.Nil(t, k)
assert.Equal(t, rest, invalidPem)
// Fail due to invalid passphrase
curve, k, rest, err = DecryptAndUnmarshalSigningPrivateKey([]byte("invalid passphrase"), privKey)
require.EqualError(t, err, "invalid passphrase or corrupt private key")
assert.Nil(t, k)
assert.Equal(t, []byte{}, rest)
}
func TestEncryptAndMarshalSigningPrivateKey(t *testing.T) {
// Having proved that decryption works correctly above, we can test that the
// encryption function produces a value which can be decrypted
passphrase := []byte("passphrase")
bytes := []byte("AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA")
kdfParams := NewArgon2Parameters(64*1024, 4, 3)
key, err := EncryptAndMarshalSigningPrivateKey(Curve_CURVE25519, bytes, passphrase, kdfParams)
require.NoError(t, err)
// Verify the "key" can be decrypted successfully
curve, k, rest, err := DecryptAndUnmarshalSigningPrivateKey(passphrase, key)
assert.Len(t, k, 64)
assert.Equal(t, Curve_CURVE25519, curve)
assert.Equal(t, []byte{}, rest)
require.NoError(t, err)
// EncryptAndMarshalSigningPrivateKey does not create any errors itself
}

View File

@@ -1,50 +0,0 @@
package cert
import (
"errors"
"fmt"
)
var (
ErrBadFormat = errors.New("bad wire format")
ErrRootExpired = errors.New("root certificate is expired")
ErrExpired = errors.New("certificate is expired")
ErrNotCA = errors.New("certificate is not a CA")
ErrNotSelfSigned = errors.New("certificate is not self-signed")
ErrBlockListed = errors.New("certificate is in the block list")
ErrFingerprintMismatch = errors.New("certificate fingerprint did not match")
ErrSignatureMismatch = errors.New("certificate signature did not match")
ErrInvalidPublicKey = errors.New("invalid public key")
ErrInvalidPrivateKey = errors.New("invalid private key")
ErrPublicPrivateCurveMismatch = errors.New("public key does not match private key curve")
ErrPublicPrivateKeyMismatch = errors.New("public key and private key are not a pair")
ErrPrivateKeyEncrypted = errors.New("private key must be decrypted")
ErrCaNotFound = errors.New("could not find ca for the certificate")
ErrUnknownVersion = errors.New("certificate version unrecognized")
ErrInvalidPEMBlock = errors.New("input did not contain a valid PEM encoded block")
ErrInvalidPEMCertificateBanner = errors.New("bytes did not contain a proper certificate banner")
ErrInvalidPEMX25519PublicKeyBanner = errors.New("bytes did not contain a proper X25519 public key banner")
ErrInvalidPEMX25519PrivateKeyBanner = errors.New("bytes did not contain a proper X25519 private key banner")
ErrInvalidPEMEd25519PublicKeyBanner = errors.New("bytes did not contain a proper Ed25519 public key banner")
ErrInvalidPEMEd25519PrivateKeyBanner = errors.New("bytes did not contain a proper Ed25519 private key banner")
ErrNoPeerStaticKey = errors.New("no peer static key was present")
ErrNoPayload = errors.New("provided payload was empty")
ErrMissingDetails = errors.New("certificate did not contain details")
ErrEmptySignature = errors.New("empty signature")
ErrEmptyRawDetails = errors.New("empty rawDetails not allowed")
)
type ErrInvalidCertificateProperties struct {
str string
}
func NewErrInvalidCertificateProperties(format string, a ...any) error {
return &ErrInvalidCertificateProperties{fmt.Sprintf(format, a...)}
}
func (e *ErrInvalidCertificateProperties) Error() string {
return e.str
}
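// Example (not part of the original source): a small sketch of how a caller
// might separate the sentinel errors above from property validation errors,
// using errors.As from the standard library.
func isInvalidPropertiesError(err error) bool {
    var propErr *ErrInvalidCertificateProperties
    return errors.As(err, &propErr)
}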

View File

@@ -1,141 +0,0 @@
package cert
import (
"crypto/ecdh"
"crypto/ecdsa"
"crypto/elliptic"
"crypto/rand"
"io"
"net/netip"
"time"
"golang.org/x/crypto/curve25519"
"golang.org/x/crypto/ed25519"
)
// NewTestCaCert will create a new ca certificate
func NewTestCaCert(version Version, curve Curve, before, after time.Time, networks, unsafeNetworks []netip.Prefix, groups []string) (Certificate, []byte, []byte, []byte) {
var err error
var pub, priv []byte
switch curve {
case Curve_CURVE25519:
pub, priv, err = ed25519.GenerateKey(rand.Reader)
case Curve_P256:
privk, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
if err != nil {
panic(err)
}
pub = elliptic.Marshal(elliptic.P256(), privk.PublicKey.X, privk.PublicKey.Y)
priv = privk.D.FillBytes(make([]byte, 32))
default:
// There is no default to allow the underlying lib to respond with an error
}
if before.IsZero() {
before = time.Now().Add(time.Second * -60).Round(time.Second)
}
if after.IsZero() {
after = time.Now().Add(time.Second * 60).Round(time.Second)
}
t := &TBSCertificate{
Curve: curve,
Version: version,
Name: "test ca",
NotBefore: time.Unix(before.Unix(), 0),
NotAfter: time.Unix(after.Unix(), 0),
PublicKey: pub,
Networks: networks,
UnsafeNetworks: unsafeNetworks,
Groups: groups,
IsCA: true,
}
c, err := t.Sign(nil, curve, priv)
if err != nil {
panic(err)
}
pem, err := c.MarshalPEM()
if err != nil {
panic(err)
}
return c, pub, priv, pem
}
// NewTestCert will generate a signed certificate with the provided details.
// Expiry times are defaulted if you do not pass them in
func NewTestCert(v Version, curve Curve, ca Certificate, key []byte, name string, before, after time.Time, networks, unsafeNetworks []netip.Prefix, groups []string) (Certificate, []byte, []byte, []byte) {
if before.IsZero() {
before = time.Now().Add(time.Second * -60).Round(time.Second)
}
if after.IsZero() {
after = time.Now().Add(time.Second * 60).Round(time.Second)
}
if len(networks) == 0 {
networks = []netip.Prefix{netip.MustParsePrefix("10.0.0.123/8")}
}
var pub, priv []byte
switch curve {
case Curve_CURVE25519:
pub, priv = X25519Keypair()
case Curve_P256:
pub, priv = P256Keypair()
default:
panic("unknown curve")
}
nc := &TBSCertificate{
Version: v,
Curve: curve,
Name: name,
Networks: networks,
UnsafeNetworks: unsafeNetworks,
Groups: groups,
NotBefore: time.Unix(before.Unix(), 0),
NotAfter: time.Unix(after.Unix(), 0),
PublicKey: pub,
IsCA: false,
}
c, err := nc.Sign(ca, ca.Curve(), key)
if err != nil {
panic(err)
}
pem, err := c.MarshalPEM()
if err != nil {
panic(err)
}
return c, pub, MarshalPrivateKeyToPEM(curve, priv), pem
}
func X25519Keypair() ([]byte, []byte) {
privkey := make([]byte, 32)
if _, err := io.ReadFull(rand.Reader, privkey); err != nil {
panic(err)
}
pubkey, err := curve25519.X25519(privkey, curve25519.Basepoint)
if err != nil {
panic(err)
}
return pubkey, privkey
}
func P256Keypair() ([]byte, []byte) {
privkey, err := ecdh.P256().GenerateKey(rand.Reader)
if err != nil {
panic(err)
}
pubkey := privkey.PublicKey()
return pubkey.Bytes(), privkey.Bytes()
}
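// Example (not part of the original source): a sketch of how these helpers are
// commonly combined in tests elsewhere in this package — mint a CA, then a
// leaf certificate signed by it. Zero values request the default expiry window.
func exampleTestChain() (Certificate, Certificate) {
    ca, _, caKey, _ := NewTestCaCert(Version2, Curve_CURVE25519, time.Time{}, time.Time{}, nil, nil, nil)
    leaf, _, _, _ := NewTestCert(Version2, Curve_CURVE25519, ca, caKey, "example-leaf", time.Time{}, time.Time{}, nil, nil, nil)
    return ca, leaf
}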

View File

@@ -1,191 +0,0 @@
package cert
import (
"encoding/pem"
"fmt"
"golang.org/x/crypto/ed25519"
)
const ( //cert banners
CertificateBanner = "NEBULA CERTIFICATE"
CertificateV2Banner = "NEBULA CERTIFICATE V2"
)
const ( //key-agreement-key banners
X25519PrivateKeyBanner = "NEBULA X25519 PRIVATE KEY"
X25519PublicKeyBanner = "NEBULA X25519 PUBLIC KEY"
P256PrivateKeyBanner = "NEBULA P256 PRIVATE KEY"
P256PublicKeyBanner = "NEBULA P256 PUBLIC KEY"
)
/* including "ECDSA" in the P256 banners is a clue that these keys should be used only for signing */
const ( //signing key banners
EncryptedECDSAP256PrivateKeyBanner = "NEBULA ECDSA P256 ENCRYPTED PRIVATE KEY"
ECDSAP256PrivateKeyBanner = "NEBULA ECDSA P256 PRIVATE KEY"
ECDSAP256PublicKeyBanner = "NEBULA ECDSA P256 PUBLIC KEY"
EncryptedEd25519PrivateKeyBanner = "NEBULA ED25519 ENCRYPTED PRIVATE KEY"
Ed25519PrivateKeyBanner = "NEBULA ED25519 PRIVATE KEY"
Ed25519PublicKeyBanner = "NEBULA ED25519 PUBLIC KEY"
)
// UnmarshalCertificateFromPEM will try to unmarshal the first pem block in a byte array, returning any non consumed
// data or an error on failure
func UnmarshalCertificateFromPEM(b []byte) (Certificate, []byte, error) {
p, r := pem.Decode(b)
if p == nil {
return nil, r, ErrInvalidPEMBlock
}
var c Certificate
var err error
switch p.Type {
// Implementations must validate the resulting certificate contains valid information
case CertificateBanner:
c, err = unmarshalCertificateV1(p.Bytes, nil)
case CertificateV2Banner:
c, err = unmarshalCertificateV2(p.Bytes, nil, Curve_CURVE25519)
default:
return nil, r, ErrInvalidPEMCertificateBanner
}
if err != nil {
return nil, r, err
}
return c, r, nil
}
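// Example (not part of the original source): a sketch of draining a PEM bundle
// that may hold several certificates, using the "remaining bytes" return value
// the tests in this package lean on. Error handling is intentionally minimal.
func exampleReadCertificateBundle(pemBytes []byte) ([]Certificate, error) {
    var certs []Certificate
    for len(pemBytes) > 0 {
        c, rest, err := UnmarshalCertificateFromPEM(pemBytes)
        if err != nil {
            return certs, err
        }
        certs = append(certs, c)
        pemBytes = rest
    }
    return certs, nil
}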
func marshalCertPublicKeyToPEM(c Certificate) []byte {
if c.IsCA() {
return MarshalSigningPublicKeyToPEM(c.Curve(), c.PublicKey())
} else {
return MarshalPublicKeyToPEM(c.Curve(), c.PublicKey())
}
}
// MarshalPublicKeyToPEM returns a PEM representation of a public key used for ECDH.
// if your public key came from a certificate, prefer Certificate.PublicKeyPEM() if possible, to avoid mistakes!
func MarshalPublicKeyToPEM(curve Curve, b []byte) []byte {
switch curve {
case Curve_CURVE25519:
return pem.EncodeToMemory(&pem.Block{Type: X25519PublicKeyBanner, Bytes: b})
case Curve_P256:
return pem.EncodeToMemory(&pem.Block{Type: P256PublicKeyBanner, Bytes: b})
default:
return nil
}
}
// MarshalSigningPublicKeyToPEM returns a PEM representation of a public key used for signing.
// if your public key came from a certificate, prefer Certificate.PublicKeyPEM() if possible, to avoid mistakes!
func MarshalSigningPublicKeyToPEM(curve Curve, b []byte) []byte {
switch curve {
case Curve_CURVE25519:
return pem.EncodeToMemory(&pem.Block{Type: Ed25519PublicKeyBanner, Bytes: b})
case Curve_P256:
return pem.EncodeToMemory(&pem.Block{Type: P256PublicKeyBanner, Bytes: b})
default:
return nil
}
}
func UnmarshalPublicKeyFromPEM(b []byte) ([]byte, []byte, Curve, error) {
k, r := pem.Decode(b)
if k == nil {
return nil, r, 0, fmt.Errorf("input did not contain a valid PEM encoded block")
}
var expectedLen int
var curve Curve
switch k.Type {
case X25519PublicKeyBanner, Ed25519PublicKeyBanner:
expectedLen = 32
curve = Curve_CURVE25519
case P256PublicKeyBanner, ECDSAP256PublicKeyBanner:
// Uncompressed
expectedLen = 65
curve = Curve_P256
default:
return nil, r, 0, fmt.Errorf("bytes did not contain a proper public key banner")
}
if len(k.Bytes) != expectedLen {
return nil, r, 0, fmt.Errorf("key was not %d bytes, is invalid %s public key", expectedLen, curve)
}
return k.Bytes, r, curve, nil
}
func MarshalPrivateKeyToPEM(curve Curve, b []byte) []byte {
switch curve {
case Curve_CURVE25519:
return pem.EncodeToMemory(&pem.Block{Type: X25519PrivateKeyBanner, Bytes: b})
case Curve_P256:
return pem.EncodeToMemory(&pem.Block{Type: P256PrivateKeyBanner, Bytes: b})
default:
return nil
}
}
func MarshalSigningPrivateKeyToPEM(curve Curve, b []byte) []byte {
switch curve {
case Curve_CURVE25519:
return pem.EncodeToMemory(&pem.Block{Type: Ed25519PrivateKeyBanner, Bytes: b})
case Curve_P256:
return pem.EncodeToMemory(&pem.Block{Type: ECDSAP256PrivateKeyBanner, Bytes: b})
default:
return nil
}
}
// UnmarshalPrivateKeyFromPEM will try to unmarshal the first pem block in a byte array, returning any non
// consumed data or an error on failure
func UnmarshalPrivateKeyFromPEM(b []byte) ([]byte, []byte, Curve, error) {
k, r := pem.Decode(b)
if k == nil {
return nil, r, 0, fmt.Errorf("input did not contain a valid PEM encoded block")
}
var expectedLen int
var curve Curve
switch k.Type {
case X25519PrivateKeyBanner:
expectedLen = 32
curve = Curve_CURVE25519
case P256PrivateKeyBanner:
expectedLen = 32
curve = Curve_P256
default:
return nil, r, 0, fmt.Errorf("bytes did not contain a proper private key banner")
}
if len(k.Bytes) != expectedLen {
return nil, r, 0, fmt.Errorf("key was not %d bytes, is invalid %s private key", expectedLen, curve)
}
return k.Bytes, r, curve, nil
}
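// Example (not part of the original source): a minimal sketch pairing
// MarshalPrivateKeyToPEM with UnmarshalPrivateKeyFromPEM. The 32 zero bytes
// stand in for a real X25519 private key and are not a usable key.
func examplePrivateKeyPEMRoundTrip() ([]byte, Curve, error) {
    pemBytes := MarshalPrivateKeyToPEM(Curve_CURVE25519, make([]byte, 32))
    raw, _, curve, err := UnmarshalPrivateKeyFromPEM(pemBytes)
    return raw, curve, err
}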
func UnmarshalSigningPrivateKeyFromPEM(b []byte) ([]byte, []byte, Curve, error) {
k, r := pem.Decode(b)
if k == nil {
return nil, r, 0, fmt.Errorf("input did not contain a valid PEM encoded block")
}
var curve Curve
switch k.Type {
case EncryptedEd25519PrivateKeyBanner:
return nil, nil, Curve_CURVE25519, ErrPrivateKeyEncrypted
case EncryptedECDSAP256PrivateKeyBanner:
return nil, nil, Curve_P256, ErrPrivateKeyEncrypted
case Ed25519PrivateKeyBanner:
curve = Curve_CURVE25519
if len(k.Bytes) != ed25519.PrivateKeySize {
return nil, r, 0, fmt.Errorf("key was not %d bytes, is invalid Ed25519 private key", ed25519.PrivateKeySize)
}
case ECDSAP256PrivateKeyBanner:
curve = Curve_P256
if len(k.Bytes) != 32 {
return nil, r, 0, fmt.Errorf("key was not 32 bytes, is invalid ECDSA P256 private key")
}
default:
return nil, r, 0, fmt.Errorf("bytes did not contain a proper Ed25519/ECDSA private key banner")
}
return k.Bytes, r, curve, nil
}

View File

@@ -1,308 +0,0 @@
package cert
import (
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestUnmarshalCertificateFromPEM(t *testing.T) {
goodCert := []byte(`
# A good cert
-----BEGIN NEBULA CERTIFICATE-----
CkAKDm5lYnVsYSByb290IGNhKJfap9AFMJfg1+YGOiCUQGByMuNRhIlQBOyzXWbL
vcKBwDhov900phEfJ5DN3kABEkDCq5R8qBiu8sl54yVfgRcQXEDt3cHr8UTSLszv
bzBEr00kERQxxTzTsH8cpYEgRoipvmExvg8WP8NdAJEYJosB
-----END NEBULA CERTIFICATE-----
`)
badBanner := []byte(`# A bad banner
-----BEGIN NOT A NEBULA CERTIFICATE-----
CkAKDm5lYnVsYSByb290IGNhKJfap9AFMJfg1+YGOiCUQGByMuNRhIlQBOyzXWbL
vcKBwDhov900phEfJ5DN3kABEkDCq5R8qBiu8sl54yVfgRcQXEDt3cHr8UTSLszv
bzBEr00kERQxxTzTsH8cpYEgRoipvmExvg8WP8NdAJEYJosB
-----END NOT A NEBULA CERTIFICATE-----
`)
invalidPem := []byte(`# Not a valid PEM format
-BEGIN NEBULA CERTIFICATE-----
CkAKDm5lYnVsYSByb290IGNhKJfap9AFMJfg1+YGOiCUQGByMuNRhIlQBOyzXWbL
vcKBwDhov900phEfJ5DN3kABEkDCq5R8qBiu8sl54yVfgRcQXEDt3cHr8UTSLszv
bzBEr00kERQxxTzTsH8cpYEgRoipvmExvg8WP8NdAJEYJosB
-END NEBULA CERTIFICATE----`)
certBundle := appendByteSlices(goodCert, badBanner, invalidPem)
// Success test case
cert, rest, err := UnmarshalCertificateFromPEM(certBundle)
assert.NotNil(t, cert)
assert.Equal(t, rest, append(badBanner, invalidPem...))
require.NoError(t, err)
// Fail due to invalid banner.
cert, rest, err = UnmarshalCertificateFromPEM(rest)
assert.Nil(t, cert)
assert.Equal(t, rest, invalidPem)
require.EqualError(t, err, "bytes did not contain a proper certificate banner")
// Fail due to invalid PEM format, because
// it's missing the requisite pre-encapsulation boundary.
cert, rest, err = UnmarshalCertificateFromPEM(rest)
assert.Nil(t, cert)
assert.Equal(t, rest, invalidPem)
require.EqualError(t, err, "input did not contain a valid PEM encoded block")
}
func TestUnmarshalSigningPrivateKeyFromPEM(t *testing.T) {
privKey := []byte(`# A good key
-----BEGIN NEBULA ED25519 PRIVATE KEY-----
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==
-----END NEBULA ED25519 PRIVATE KEY-----
`)
privP256Key := []byte(`# A good key
-----BEGIN NEBULA ECDSA P256 PRIVATE KEY-----
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
-----END NEBULA ECDSA P256 PRIVATE KEY-----
`)
shortKey := []byte(`# A short key
-----BEGIN NEBULA ED25519 PRIVATE KEY-----
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
-----END NEBULA ED25519 PRIVATE KEY-----
`)
invalidBanner := []byte(`# Invalid banner
-----BEGIN NOT A NEBULA PRIVATE KEY-----
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==
-----END NOT A NEBULA PRIVATE KEY-----
`)
invalidPem := []byte(`# Not a valid PEM format
-BEGIN NEBULA ED25519 PRIVATE KEY-----
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==
-END NEBULA ED25519 PRIVATE KEY-----`)
keyBundle := appendByteSlices(privKey, privP256Key, shortKey, invalidBanner, invalidPem)
// Success test case
k, rest, curve, err := UnmarshalSigningPrivateKeyFromPEM(keyBundle)
assert.Len(t, k, 64)
assert.Equal(t, rest, appendByteSlices(privP256Key, shortKey, invalidBanner, invalidPem))
assert.Equal(t, Curve_CURVE25519, curve)
require.NoError(t, err)
// Success test case
k, rest, curve, err = UnmarshalSigningPrivateKeyFromPEM(rest)
assert.Len(t, k, 32)
assert.Equal(t, rest, appendByteSlices(shortKey, invalidBanner, invalidPem))
assert.Equal(t, Curve_P256, curve)
require.NoError(t, err)
// Fail due to short key
k, rest, curve, err = UnmarshalSigningPrivateKeyFromPEM(rest)
assert.Nil(t, k)
assert.Equal(t, rest, appendByteSlices(invalidBanner, invalidPem))
require.EqualError(t, err, "key was not 64 bytes, is invalid Ed25519 private key")
// Fail due to invalid banner
k, rest, curve, err = UnmarshalSigningPrivateKeyFromPEM(rest)
assert.Nil(t, k)
assert.Equal(t, rest, invalidPem)
require.EqualError(t, err, "bytes did not contain a proper Ed25519/ECDSA private key banner")
// Fail due to invalid PEM format, because
// it's missing the requisite pre-encapsulation boundary.
k, rest, curve, err = UnmarshalSigningPrivateKeyFromPEM(rest)
assert.Nil(t, k)
assert.Equal(t, rest, invalidPem)
require.EqualError(t, err, "input did not contain a valid PEM encoded block")
}
func TestUnmarshalPrivateKeyFromPEM(t *testing.T) {
privKey := []byte(`# A good key
-----BEGIN NEBULA X25519 PRIVATE KEY-----
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
-----END NEBULA X25519 PRIVATE KEY-----
`)
privP256Key := []byte(`# A good key
-----BEGIN NEBULA P256 PRIVATE KEY-----
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
-----END NEBULA P256 PRIVATE KEY-----
`)
shortKey := []byte(`# A short key
-----BEGIN NEBULA X25519 PRIVATE KEY-----
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==
-----END NEBULA X25519 PRIVATE KEY-----
`)
invalidBanner := []byte(`# Invalid banner
-----BEGIN NOT A NEBULA PRIVATE KEY-----
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
-----END NOT A NEBULA PRIVATE KEY-----
`)
invalidPem := []byte(`# Not a valid PEM format
-BEGIN NEBULA X25519 PRIVATE KEY-----
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
-END NEBULA X25519 PRIVATE KEY-----`)
keyBundle := appendByteSlices(privKey, privP256Key, shortKey, invalidBanner, invalidPem)
// Success test case
k, rest, curve, err := UnmarshalPrivateKeyFromPEM(keyBundle)
assert.Len(t, k, 32)
assert.Equal(t, rest, appendByteSlices(privP256Key, shortKey, invalidBanner, invalidPem))
assert.Equal(t, Curve_CURVE25519, curve)
require.NoError(t, err)
// Success test case
k, rest, curve, err = UnmarshalPrivateKeyFromPEM(rest)
assert.Len(t, k, 32)
assert.Equal(t, rest, appendByteSlices(shortKey, invalidBanner, invalidPem))
assert.Equal(t, Curve_P256, curve)
require.NoError(t, err)
// Fail due to short key
k, rest, curve, err = UnmarshalPrivateKeyFromPEM(rest)
assert.Nil(t, k)
assert.Equal(t, rest, appendByteSlices(invalidBanner, invalidPem))
require.EqualError(t, err, "key was not 32 bytes, is invalid CURVE25519 private key")
// Fail due to invalid banner
k, rest, curve, err = UnmarshalPrivateKeyFromPEM(rest)
assert.Nil(t, k)
assert.Equal(t, rest, invalidPem)
require.EqualError(t, err, "bytes did not contain a proper private key banner")
// Fail due to invalid PEM format, because
// it's missing the requisite pre-encapsulation boundary.
k, rest, curve, err = UnmarshalPrivateKeyFromPEM(rest)
assert.Nil(t, k)
assert.Equal(t, rest, invalidPem)
require.EqualError(t, err, "input did not contain a valid PEM encoded block")
}
func TestUnmarshalPublicKeyFromPEM(t *testing.T) {
t.Parallel()
pubKey := []byte(`# A good key
-----BEGIN NEBULA ED25519 PUBLIC KEY-----
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
-----END NEBULA ED25519 PUBLIC KEY-----
`)
shortKey := []byte(`# A short key
-----BEGIN NEBULA ED25519 PUBLIC KEY-----
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==
-----END NEBULA ED25519 PUBLIC KEY-----
`)
invalidBanner := []byte(`# Invalid banner
-----BEGIN NOT A NEBULA PUBLIC KEY-----
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
-----END NOT A NEBULA PUBLIC KEY-----
`)
invalidPem := []byte(`# Not a valid PEM format
-BEGIN NEBULA ED25519 PUBLIC KEY-----
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
-END NEBULA ED25519 PUBLIC KEY-----`)
keyBundle := appendByteSlices(pubKey, shortKey, invalidBanner, invalidPem)
// Success test case
k, rest, curve, err := UnmarshalPublicKeyFromPEM(keyBundle)
assert.Len(t, k, 32)
assert.Equal(t, Curve_CURVE25519, curve)
require.NoError(t, err)
assert.Equal(t, rest, appendByteSlices(shortKey, invalidBanner, invalidPem))
// Fail due to short key
k, rest, curve, err = UnmarshalPublicKeyFromPEM(rest)
assert.Nil(t, k)
assert.Equal(t, Curve_CURVE25519, curve)
assert.Equal(t, rest, appendByteSlices(invalidBanner, invalidPem))
require.EqualError(t, err, "key was not 32 bytes, is invalid CURVE25519 public key")
// Fail due to invalid banner
k, rest, curve, err = UnmarshalPublicKeyFromPEM(rest)
assert.Nil(t, k)
assert.Equal(t, Curve_CURVE25519, curve)
require.EqualError(t, err, "bytes did not contain a proper public key banner")
assert.Equal(t, rest, invalidPem)
// Fail due to invalid PEM format, because
// it's missing the requisite pre-encapsulation boundary.
k, rest, curve, err = UnmarshalPublicKeyFromPEM(rest)
assert.Nil(t, k)
assert.Equal(t, Curve_CURVE25519, curve)
assert.Equal(t, rest, invalidPem)
require.EqualError(t, err, "input did not contain a valid PEM encoded block")
}
func TestUnmarshalX25519PublicKey(t *testing.T) {
t.Parallel()
pubKey := []byte(`# A good key
-----BEGIN NEBULA X25519 PUBLIC KEY-----
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
-----END NEBULA X25519 PUBLIC KEY-----
`)
pubP256Key := []byte(`# A good key
-----BEGIN NEBULA P256 PUBLIC KEY-----
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAA=
-----END NEBULA P256 PUBLIC KEY-----
`)
oldPubP256Key := []byte(`# A good key
-----BEGIN NEBULA ECDSA P256 PUBLIC KEY-----
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAA=
-----END NEBULA ECDSA P256 PUBLIC KEY-----
`)
shortKey := []byte(`# A short key
-----BEGIN NEBULA X25519 PUBLIC KEY-----
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==
-----END NEBULA X25519 PUBLIC KEY-----
`)
invalidBanner := []byte(`# Invalid banner
-----BEGIN NOT A NEBULA PUBLIC KEY-----
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
-----END NOT A NEBULA PUBLIC KEY-----
`)
invalidPem := []byte(`# Not a valid PEM format
-BEGIN NEBULA X25519 PUBLIC KEY-----
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
-END NEBULA X25519 PUBLIC KEY-----`)
keyBundle := appendByteSlices(pubKey, pubP256Key, oldPubP256Key, shortKey, invalidBanner, invalidPem)
// Success test case
k, rest, curve, err := UnmarshalPublicKeyFromPEM(keyBundle)
assert.Len(t, k, 32)
require.NoError(t, err)
assert.Equal(t, rest, appendByteSlices(pubP256Key, oldPubP256Key, shortKey, invalidBanner, invalidPem))
assert.Equal(t, Curve_CURVE25519, curve)
// Success test case
k, rest, curve, err = UnmarshalPublicKeyFromPEM(rest)
assert.Len(t, k, 65)
require.NoError(t, err)
assert.Equal(t, rest, appendByteSlices(oldPubP256Key, shortKey, invalidBanner, invalidPem))
assert.Equal(t, Curve_P256, curve)
// Success test case
k, rest, curve, err = UnmarshalPublicKeyFromPEM(rest)
assert.Len(t, k, 65)
require.NoError(t, err)
assert.Equal(t, rest, appendByteSlices(shortKey, invalidBanner, invalidPem))
assert.Equal(t, Curve_P256, curve)
// Fail due to short key
k, rest, curve, err = UnmarshalPublicKeyFromPEM(rest)
assert.Nil(t, k)
assert.Equal(t, rest, appendByteSlices(invalidBanner, invalidPem))
require.EqualError(t, err, "key was not 32 bytes, is invalid CURVE25519 public key")
// Fail due to invalid banner
k, rest, curve, err = UnmarshalPublicKeyFromPEM(rest)
assert.Nil(t, k)
require.EqualError(t, err, "bytes did not contain a proper public key banner")
assert.Equal(t, rest, invalidPem)
// Fail due to invalid PEM format, because
// it's missing the requisite pre-encapsulation boundary.
k, rest, curve, err = UnmarshalPublicKeyFromPEM(rest)
assert.Nil(t, k)
assert.Equal(t, rest, invalidPem)
require.EqualError(t, err, "input did not contain a valid PEM encoded block")
}

View File

@@ -1,161 +0,0 @@
package cert
import (
"crypto/ecdsa"
"crypto/ed25519"
"crypto/elliptic"
"crypto/rand"
"crypto/sha256"
"fmt"
"net/netip"
"time"
)
// TBSCertificate represents a certificate intended to be signed.
// It is invalid to use this structure as a Certificate.
type TBSCertificate struct {
Version Version
Name string
Networks []netip.Prefix
UnsafeNetworks []netip.Prefix
Groups []string
IsCA bool
NotBefore time.Time
NotAfter time.Time
PublicKey []byte
Curve Curve
issuer string
}
type beingSignedCertificate interface {
// fromTBSCertificate copies the values from the TBSCertificate to this version's internal representation
// Implementations must validate the resulting certificate contains valid information
fromTBSCertificate(*TBSCertificate) error
// marshalForSigning returns the bytes that should be signed
marshalForSigning() ([]byte, error)
// setSignature sets the signature for the certificate that has just been signed. The signature must not be blank.
setSignature([]byte) error
}
type SignerLambda func(certBytes []byte) ([]byte, error)
// Sign will create a sealed certificate using details provided by the TBSCertificate as long as those
// details do not violate constraints of the signing certificate.
// If the TBSCertificate is a CA then signer must be nil.
func (t *TBSCertificate) Sign(signer Certificate, curve Curve, key []byte) (Certificate, error) {
switch t.Curve {
case Curve_CURVE25519:
pk := ed25519.PrivateKey(key)
sp := func(certBytes []byte) ([]byte, error) {
sig := ed25519.Sign(pk, certBytes)
return sig, nil
}
return t.SignWith(signer, curve, sp)
case Curve_P256:
pk, err := ecdsa.ParseRawPrivateKey(elliptic.P256(), key)
if err != nil {
return nil, err
}
sp := func(certBytes []byte) ([]byte, error) {
// We need to hash first for ECDSA
// - https://pkg.go.dev/crypto/ecdsa#SignASN1
hashed := sha256.Sum256(certBytes)
return ecdsa.SignASN1(rand.Reader, pk, hashed[:])
}
return t.SignWith(signer, curve, sp)
default:
return nil, fmt.Errorf("invalid curve: %s", t.Curve)
}
}
// SignWith does the same thing as Sign, but uses the function in `sp` to calculate the signature.
// You should only use SignWith if you do not have direct access to your private key.
func (t *TBSCertificate) SignWith(signer Certificate, curve Curve, sp SignerLambda) (Certificate, error) {
if curve != t.Curve {
return nil, fmt.Errorf("curve in cert and private key supplied don't match")
}
if signer != nil {
if t.IsCA {
return nil, fmt.Errorf("can not sign a CA certificate with another")
}
err := checkCAConstraints(signer, t.NotBefore, t.NotAfter, t.Groups, t.Networks, t.UnsafeNetworks)
if err != nil {
return nil, err
}
issuer, err := signer.Fingerprint()
if err != nil {
return nil, fmt.Errorf("error computing issuer: %v", err)
}
t.issuer = issuer
} else {
if !t.IsCA {
return nil, fmt.Errorf("self signed certificates must have IsCA set to true")
}
}
var c beingSignedCertificate
switch t.Version {
case Version1:
c = &certificateV1{}
err := c.fromTBSCertificate(t)
if err != nil {
return nil, err
}
case Version2:
c = &certificateV2{}
err := c.fromTBSCertificate(t)
if err != nil {
return nil, err
}
default:
return nil, fmt.Errorf("unknown cert version %d", t.Version)
}
certBytes, err := c.marshalForSigning()
if err != nil {
return nil, err
}
sig, err := sp(certBytes)
if err != nil {
return nil, err
}
err = c.setSignature(sig)
if err != nil {
return nil, err
}
sc, ok := c.(Certificate)
if !ok {
return nil, fmt.Errorf("invalid certificate")
}
return sc, nil
}
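// Example (not part of the original source): a hedged sketch of minting a leaf
// certificate with Sign, assuming caCert, caKey, and pub were produced elsewhere
// (for instance by the test helpers in this package) and that the CA's
// constraints permit the requested network and lifetime.
func exampleSignLeaf(caCert Certificate, caKey []byte, pub []byte, network netip.Prefix) (Certificate, error) {
    tbs := &TBSCertificate{
        Version:   Version2,
        Curve:     caCert.Curve(),
        Name:      "example-host",
        Networks:  []netip.Prefix{network},
        NotBefore: time.Now(),
        NotAfter:  time.Now().Add(time.Hour),
        PublicKey: pub,
        IsCA:      false,
    }
    return tbs.Sign(caCert, caCert.Curve(), caKey)
}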
func comparePrefix(a, b netip.Prefix) int {
addr := a.Addr().Compare(b.Addr())
if addr == 0 {
return a.Bits() - b.Bits()
}
return addr
}
// findDuplicatePrefix returns an error if there is a duplicate prefix in the pre-sorted input slice sortedPrefixes
func findDuplicatePrefix(sortedPrefixes []netip.Prefix) error {
if len(sortedPrefixes) < 2 {
return nil
}
for i := 1; i < len(sortedPrefixes); i++ {
if comparePrefix(sortedPrefixes[i], sortedPrefixes[i-1]) == 0 {
return NewErrInvalidCertificateProperties("duplicate network detected: %v", sortedPrefixes[i])
}
}
return nil
}

View File

@@ -1,91 +0,0 @@
package cert
import (
"crypto/ecdsa"
"crypto/ed25519"
"crypto/elliptic"
"crypto/rand"
"net/netip"
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestCertificateV1_Sign(t *testing.T) {
before := time.Now().Add(time.Second * -60).Round(time.Second)
after := time.Now().Add(time.Second * 60).Round(time.Second)
pubKey := []byte("1234567890abcedfghij1234567890ab")
tbs := TBSCertificate{
Version: Version1,
Name: "testing",
Networks: []netip.Prefix{
mustParsePrefixUnmapped("10.1.1.1/24"),
mustParsePrefixUnmapped("10.1.1.2/16"),
},
UnsafeNetworks: []netip.Prefix{
mustParsePrefixUnmapped("9.1.1.2/24"),
mustParsePrefixUnmapped("9.1.1.3/24"),
},
Groups: []string{"test-group1", "test-group2", "test-group3"},
NotBefore: before,
NotAfter: after,
PublicKey: pubKey,
IsCA: false,
}
pub, priv, err := ed25519.GenerateKey(rand.Reader)
require.NoError(t, err)
c, err := tbs.Sign(&certificateV1{details: detailsV1{notBefore: before, notAfter: after}}, Curve_CURVE25519, priv)
require.NoError(t, err)
assert.NotNil(t, c)
assert.True(t, c.CheckSignature(pub))
b, err := c.Marshal()
require.NoError(t, err)
uc, err := unmarshalCertificateV1(b, nil)
require.NoError(t, err)
assert.NotNil(t, uc)
}
func TestCertificateV1_SignP256(t *testing.T) {
before := time.Now().Add(time.Second * -60).Round(time.Second)
after := time.Now().Add(time.Second * 60).Round(time.Second)
pubKey := []byte("01234567890abcedfghij1234567890ab1234567890abcedfghij1234567890ab")
tbs := TBSCertificate{
Version: Version1,
Name: "testing",
Networks: []netip.Prefix{
mustParsePrefixUnmapped("10.1.1.1/24"),
mustParsePrefixUnmapped("10.1.1.2/16"),
},
UnsafeNetworks: []netip.Prefix{
mustParsePrefixUnmapped("9.1.1.2/24"),
mustParsePrefixUnmapped("9.1.1.3/16"),
},
Groups: []string{"test-group1", "test-group2", "test-group3"},
NotBefore: before,
NotAfter: after,
PublicKey: pubKey,
IsCA: false,
Curve: Curve_P256,
}
priv, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
require.NoError(t, err)
pub := elliptic.Marshal(elliptic.P256(), priv.PublicKey.X, priv.PublicKey.Y)
rawPriv := priv.D.FillBytes(make([]byte, 32))
c, err := tbs.Sign(&certificateV1{details: detailsV1{notBefore: before, notAfter: after}}, Curve_P256, rawPriv)
require.NoError(t, err)
assert.NotNil(t, c)
assert.True(t, c.CheckSignature(pub))
b, err := c.Marshal()
require.NoError(t, err)
uc, err := unmarshalCertificateV1(b, nil)
require.NoError(t, err)
assert.NotNil(t, uc)
}

View File

@@ -1,138 +0,0 @@
package cert_test
import (
"crypto/ecdh"
"crypto/ecdsa"
"crypto/elliptic"
"crypto/rand"
"io"
"net/netip"
"time"
"github.com/slackhq/nebula/cert"
"golang.org/x/crypto/curve25519"
"golang.org/x/crypto/ed25519"
)
// NewTestCaCert will create a new ca certificate
func NewTestCaCert(version cert.Version, curve cert.Curve, before, after time.Time, networks, unsafeNetworks []netip.Prefix, groups []string) (cert.Certificate, []byte, []byte, []byte) {
var err error
var pub, priv []byte
switch curve {
case cert.Curve_CURVE25519:
pub, priv, err = ed25519.GenerateKey(rand.Reader)
case cert.Curve_P256:
privk, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
if err != nil {
panic(err)
}
pub = elliptic.Marshal(elliptic.P256(), privk.PublicKey.X, privk.PublicKey.Y)
priv = privk.D.FillBytes(make([]byte, 32))
default:
// There is no default to allow the underlying lib to respond with an error
}
if before.IsZero() {
before = time.Now().Add(time.Second * -60).Round(time.Second)
}
if after.IsZero() {
after = time.Now().Add(time.Second * 60).Round(time.Second)
}
t := &cert.TBSCertificate{
Curve: curve,
Version: version,
Name: "test ca",
NotBefore: time.Unix(before.Unix(), 0),
NotAfter: time.Unix(after.Unix(), 0),
PublicKey: pub,
Networks: networks,
UnsafeNetworks: unsafeNetworks,
Groups: groups,
IsCA: true,
}
c, err := t.Sign(nil, curve, priv)
if err != nil {
panic(err)
}
pem, err := c.MarshalPEM()
if err != nil {
panic(err)
}
return c, pub, priv, pem
}
// NewTestCert will generate a signed certificate with the provided details.
// Expiry times are defaulted if you do not pass them in
func NewTestCert(v cert.Version, curve cert.Curve, ca cert.Certificate, key []byte, name string, before, after time.Time, networks, unsafeNetworks []netip.Prefix, groups []string) (cert.Certificate, []byte, []byte, []byte) {
if before.IsZero() {
before = time.Now().Add(time.Second * -60).Round(time.Second)
}
if after.IsZero() {
after = time.Now().Add(time.Second * 60).Round(time.Second)
}
var pub, priv []byte
switch curve {
case cert.Curve_CURVE25519:
pub, priv = X25519Keypair()
case cert.Curve_P256:
pub, priv = P256Keypair()
default:
panic("unknown curve")
}
nc := &cert.TBSCertificate{
Version: v,
Curve: curve,
Name: name,
Networks: networks,
UnsafeNetworks: unsafeNetworks,
Groups: groups,
NotBefore: time.Unix(before.Unix(), 0),
NotAfter: time.Unix(after.Unix(), 0),
PublicKey: pub,
IsCA: false,
}
c, err := nc.Sign(ca, ca.Curve(), key)
if err != nil {
panic(err)
}
pem, err := c.MarshalPEM()
if err != nil {
panic(err)
}
return c, pub, cert.MarshalPrivateKeyToPEM(curve, priv), pem
}
func X25519Keypair() ([]byte, []byte) {
privkey := make([]byte, 32)
if _, err := io.ReadFull(rand.Reader, privkey); err != nil {
panic(err)
}
pubkey, err := curve25519.X25519(privkey, curve25519.Basepoint)
if err != nil {
panic(err)
}
return pubkey, privkey
}
func P256Keypair() ([]byte, []byte) {
privkey, err := ecdh.P256().GenerateKey(rand.Reader)
if err != nil {
panic(err)
}
pubkey := privkey.PublicKey()
return pubkey.Bytes(), privkey.Bytes()
}
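// Example (not part of the original source): a sketch pairing the keypair
// helpers above with the exported PEM marshalers from the cert package.
func exampleKeypairToPEM() (pubPEM, privPEM []byte) {
    pub, priv := X25519Keypair()
    return cert.MarshalPublicKeyToPEM(cert.Curve_CURVE25519, pub), cert.MarshalPrivateKeyToPEM(cert.Curve_CURVE25519, priv)
}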

10
cidr/parse.go Normal file
View File

@@ -0,0 +1,10 @@
package cidr
import "net"
// Parse is a convenience function that returns only the IPNet
// This function ignores errors since it is primarily a test helper; the result could be nil
func Parse(s string) *net.IPNet {
_, c, _ := net.ParseCIDR(s)
return c
}

145
cidr/tree4.go Normal file
View File

@@ -0,0 +1,145 @@
package cidr
import (
"net"
"github.com/slackhq/nebula/iputil"
)
type Node struct {
left *Node
right *Node
parent *Node
value interface{}
}
type Tree4 struct {
root *Node
}
const (
startbit = iputil.VpnIp(0x80000000)
)
func NewTree4() *Tree4 {
tree := new(Tree4)
tree.root = &Node{}
return tree
}
func (tree *Tree4) AddCIDR(cidr *net.IPNet, val interface{}) {
bit := startbit
node := tree.root
next := tree.root
ip := iputil.Ip2VpnIp(cidr.IP)
mask := iputil.Ip2VpnIp(cidr.Mask)
// Find our last ancestor in the tree
for bit&mask != 0 {
if ip&bit != 0 {
next = node.right
} else {
next = node.left
}
if next == nil {
break
}
bit = bit >> 1
node = next
}
// We already have this range so update the value
if next != nil {
node.value = val
return
}
// Build up the rest of the tree we don't already have
for bit&mask != 0 {
next = &Node{}
next.parent = node
if ip&bit != 0 {
node.right = next
} else {
node.left = next
}
bit >>= 1
node = next
}
// Final node marks our cidr, set the value
node.value = val
}
// Finds the first match, which may be the least specific
func (tree *Tree4) Contains(ip iputil.VpnIp) (value interface{}) {
bit := startbit
node := tree.root
for node != nil {
if node.value != nil {
return node.value
}
if ip&bit != 0 {
node = node.right
} else {
node = node.left
}
bit >>= 1
}
return value
}
// Finds the most specific match
func (tree *Tree4) MostSpecificContains(ip iputil.VpnIp) (value interface{}) {
bit := startbit
node := tree.root
for node != nil {
if node.value != nil {
value = node.value
}
if ip&bit != 0 {
node = node.right
} else {
node = node.left
}
bit >>= 1
}
return value
}
// Finds the most specific match
func (tree *Tree4) Match(ip iputil.VpnIp) (value interface{}) {
bit := startbit
node := tree.root
lastNode := node
for node != nil {
lastNode = node
if ip&bit != 0 {
node = node.right
} else {
node = node.left
}
bit >>= 1
}
if bit == 0 && lastNode != nil {
value = lastNode.value
}
return value
}
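// Example (not part of the original source): a brief usage sketch of Tree4,
// mirroring the patterns in the tests below; the ranges and values are
// placeholders.
func exampleTree4Lookup() interface{} {
    tree := NewTree4()
    tree.AddCIDR(Parse("10.0.0.0/8"), "lan")
    tree.AddCIDR(Parse("10.1.0.0/16"), "office")
    // MostSpecificContains prefers the /16 over the /8 for this address.
    return tree.MostSpecificContains(iputil.Ip2VpnIp(net.ParseIP("10.1.2.3")))
}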

153
cidr/tree4_test.go Normal file
View File

@@ -0,0 +1,153 @@
package cidr
import (
"net"
"testing"
"github.com/slackhq/nebula/iputil"
"github.com/stretchr/testify/assert"
)
func TestCIDRTree_Contains(t *testing.T) {
tree := NewTree4()
tree.AddCIDR(Parse("1.0.0.0/8"), "1")
tree.AddCIDR(Parse("2.1.0.0/16"), "2")
tree.AddCIDR(Parse("3.1.1.0/24"), "3")
tree.AddCIDR(Parse("4.1.1.0/24"), "4a")
tree.AddCIDR(Parse("4.1.1.1/32"), "4b")
tree.AddCIDR(Parse("4.1.2.1/32"), "4c")
tree.AddCIDR(Parse("254.0.0.0/4"), "5")
tests := []struct {
Result interface{}
IP string
}{
{"1", "1.0.0.0"},
{"1", "1.255.255.255"},
{"2", "2.1.0.0"},
{"2", "2.1.255.255"},
{"3", "3.1.1.0"},
{"3", "3.1.1.255"},
{"4a", "4.1.1.255"},
{"4a", "4.1.1.1"},
{"5", "240.0.0.0"},
{"5", "255.255.255.255"},
{nil, "239.0.0.0"},
{nil, "4.1.2.2"},
}
for _, tt := range tests {
assert.Equal(t, tt.Result, tree.Contains(iputil.Ip2VpnIp(net.ParseIP(tt.IP))))
}
tree = NewTree4()
tree.AddCIDR(Parse("1.1.1.1/0"), "cool")
assert.Equal(t, "cool", tree.Contains(iputil.Ip2VpnIp(net.ParseIP("0.0.0.0"))))
assert.Equal(t, "cool", tree.Contains(iputil.Ip2VpnIp(net.ParseIP("255.255.255.255"))))
}
func TestCIDRTree_MostSpecificContains(t *testing.T) {
tree := NewTree4()
tree.AddCIDR(Parse("1.0.0.0/8"), "1")
tree.AddCIDR(Parse("2.1.0.0/16"), "2")
tree.AddCIDR(Parse("3.1.1.0/24"), "3")
tree.AddCIDR(Parse("4.1.1.0/24"), "4a")
tree.AddCIDR(Parse("4.1.1.0/30"), "4b")
tree.AddCIDR(Parse("4.1.1.1/32"), "4c")
tree.AddCIDR(Parse("254.0.0.0/4"), "5")
tests := []struct {
Result interface{}
IP string
}{
{"1", "1.0.0.0"},
{"1", "1.255.255.255"},
{"2", "2.1.0.0"},
{"2", "2.1.255.255"},
{"3", "3.1.1.0"},
{"3", "3.1.1.255"},
{"4a", "4.1.1.255"},
{"4b", "4.1.1.2"},
{"4c", "4.1.1.1"},
{"5", "240.0.0.0"},
{"5", "255.255.255.255"},
{nil, "239.0.0.0"},
{nil, "4.1.2.2"},
}
for _, tt := range tests {
assert.Equal(t, tt.Result, tree.MostSpecificContains(iputil.Ip2VpnIp(net.ParseIP(tt.IP))))
}
tree = NewTree4()
tree.AddCIDR(Parse("1.1.1.1/0"), "cool")
assert.Equal(t, "cool", tree.MostSpecificContains(iputil.Ip2VpnIp(net.ParseIP("0.0.0.0"))))
assert.Equal(t, "cool", tree.MostSpecificContains(iputil.Ip2VpnIp(net.ParseIP("255.255.255.255"))))
}
func TestCIDRTree_Match(t *testing.T) {
tree := NewTree4()
tree.AddCIDR(Parse("4.1.1.0/32"), "1a")
tree.AddCIDR(Parse("4.1.1.1/32"), "1b")
tests := []struct {
Result interface{}
IP string
}{
{"1a", "4.1.1.0"},
{"1b", "4.1.1.1"},
}
for _, tt := range tests {
assert.Equal(t, tt.Result, tree.Match(iputil.Ip2VpnIp(net.ParseIP(tt.IP))))
}
tree = NewTree4()
tree.AddCIDR(Parse("1.1.1.1/0"), "cool")
assert.Equal(t, "cool", tree.Contains(iputil.Ip2VpnIp(net.ParseIP("0.0.0.0"))))
assert.Equal(t, "cool", tree.Contains(iputil.Ip2VpnIp(net.ParseIP("255.255.255.255"))))
}
func BenchmarkCIDRTree_Contains(b *testing.B) {
tree := NewTree4()
tree.AddCIDR(Parse("1.1.0.0/16"), "1")
tree.AddCIDR(Parse("1.2.1.1/32"), "1")
tree.AddCIDR(Parse("192.2.1.1/32"), "1")
tree.AddCIDR(Parse("172.2.1.1/32"), "1")
ip := iputil.Ip2VpnIp(net.ParseIP("1.2.1.1"))
b.Run("found", func(b *testing.B) {
for i := 0; i < b.N; i++ {
tree.Contains(ip)
}
})
ip = iputil.Ip2VpnIp(net.ParseIP("1.2.1.255"))
b.Run("not found", func(b *testing.B) {
for i := 0; i < b.N; i++ {
tree.Contains(ip)
}
})
}
func BenchmarkCIDRTree_Match(b *testing.B) {
tree := NewTree4()
tree.AddCIDR(Parse("1.1.0.0/16"), "1")
tree.AddCIDR(Parse("1.2.1.1/32"), "1")
tree.AddCIDR(Parse("192.2.1.1/32"), "1")
tree.AddCIDR(Parse("172.2.1.1/32"), "1")
ip := iputil.Ip2VpnIp(net.ParseIP("1.2.1.1"))
b.Run("found", func(b *testing.B) {
for i := 0; i < b.N; i++ {
tree.Match(ip)
}
})
ip = iputil.Ip2VpnIp(net.ParseIP("1.2.1.255"))
b.Run("not found", func(b *testing.B) {
for i := 0; i < b.N; i++ {
tree.Match(ip)
}
})
}

185
cidr/tree6.go Normal file
View File

@@ -0,0 +1,185 @@
package cidr
import (
"net"
"github.com/slackhq/nebula/iputil"
)
const startbit6 = uint64(1 << 63)
type Tree6 struct {
root4 *Node
root6 *Node
}
func NewTree6() *Tree6 {
tree := new(Tree6)
tree.root4 = &Node{}
tree.root6 = &Node{}
return tree
}
func (tree *Tree6) AddCIDR(cidr *net.IPNet, val interface{}) {
var node, next *Node
cidrIP, ipv4 := isIPV4(cidr.IP)
if ipv4 {
node = tree.root4
next = tree.root4
} else {
node = tree.root6
next = tree.root6
}
for i := 0; i < len(cidrIP); i += 4 {
ip := iputil.Ip2VpnIp(cidrIP[i : i+4])
mask := iputil.Ip2VpnIp(cidr.Mask[i : i+4])
bit := startbit
// Find our last ancestor in the tree
for bit&mask != 0 {
if ip&bit != 0 {
next = node.right
} else {
next = node.left
}
if next == nil {
break
}
bit = bit >> 1
node = next
}
// Build up the rest of the tree we don't already have
for bit&mask != 0 {
next = &Node{}
next.parent = node
if ip&bit != 0 {
node.right = next
} else {
node.left = next
}
bit >>= 1
node = next
}
}
// Final node marks our cidr, set the value
node.value = val
}
// Finds the most specific match
func (tree *Tree6) MostSpecificContains(ip net.IP) (value interface{}) {
var node *Node
wholeIP, ipv4 := isIPV4(ip)
if ipv4 {
node = tree.root4
} else {
node = tree.root6
}
for i := 0; i < len(wholeIP); i += 4 {
ip := iputil.Ip2VpnIp(wholeIP[i : i+4])
bit := startbit
for node != nil {
if node.value != nil {
value = node.value
}
if bit == 0 {
break
}
if ip&bit != 0 {
node = node.right
} else {
node = node.left
}
bit >>= 1
}
}
return value
}
func (tree *Tree6) MostSpecificContainsIpV4(ip iputil.VpnIp) (value interface{}) {
bit := startbit
node := tree.root4
for node != nil {
if node.value != nil {
value = node.value
}
if ip&bit != 0 {
node = node.right
} else {
node = node.left
}
bit >>= 1
}
return value
}
func (tree *Tree6) MostSpecificContainsIpV6(hi, lo uint64) (value interface{}) {
ip := hi
node := tree.root6
for i := 0; i < 2; i++ {
bit := startbit6
for node != nil {
if node.value != nil {
value = node.value
}
if bit == 0 {
break
}
if ip&bit != 0 {
node = node.right
} else {
node = node.left
}
bit >>= 1
}
ip = lo
}
return value
}
func isIPV4(ip net.IP) (net.IP, bool) {
if len(ip) == net.IPv4len {
return ip, true
}
if len(ip) == net.IPv6len && isZeros(ip[0:10]) && ip[10] == 0xff && ip[11] == 0xff {
return ip[12:16], true
}
return ip, false
}
func isZeros(p net.IP) bool {
for i := 0; i < len(p); i++ {
if p[i] != 0 {
return false
}
}
return true
}
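
tree6.go keeps separate roots per address family and, through isIPV4, sends IPv4-mapped IPv6 addresses (::ffff:a.b.c.d) to the v4 root. A short usage sketch, not part of the diff, with the package path assumed from the iputil import above:

package main

import (
	"fmt"
	"net"

	"github.com/slackhq/nebula/cidr" // assumed import path
)

func main() {
	tree := cidr.NewTree6()

	_, v4net, _ := net.ParseCIDR("10.0.0.0/8")
	_, v6net, _ := net.ParseCIDR("fd00::/64")
	tree.AddCIDR(v4net, "v4 route")
	tree.AddCIDR(v6net, "v6 route")

	fmt.Println(tree.MostSpecificContains(net.ParseIP("10.1.2.3")))        // v4 route
	fmt.Println(tree.MostSpecificContains(net.ParseIP("::ffff:10.1.2.3"))) // v4 route, via the mapped-address check
	fmt.Println(tree.MostSpecificContains(net.ParseIP("fd00::1")))         // v6 route
	fmt.Println(tree.MostSpecificContains(net.ParseIP("2001:db8::1")))     // <nil>
}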

81
cidr/tree6_test.go Normal file
View File

@@ -0,0 +1,81 @@
package cidr
import (
"encoding/binary"
"net"
"testing"
"github.com/stretchr/testify/assert"
)
func TestCIDR6Tree_MostSpecificContains(t *testing.T) {
tree := NewTree6()
tree.AddCIDR(Parse("1.0.0.0/8"), "1")
tree.AddCIDR(Parse("2.1.0.0/16"), "2")
tree.AddCIDR(Parse("3.1.1.0/24"), "3")
tree.AddCIDR(Parse("4.1.1.1/24"), "4a")
tree.AddCIDR(Parse("4.1.1.1/30"), "4b")
tree.AddCIDR(Parse("4.1.1.1/32"), "4c")
tree.AddCIDR(Parse("254.0.0.0/4"), "5")
tree.AddCIDR(Parse("1:2:0:4:5:0:0:0/64"), "6a")
tree.AddCIDR(Parse("1:2:0:4:5:0:0:0/80"), "6b")
tree.AddCIDR(Parse("1:2:0:4:5:0:0:0/96"), "6c")
tests := []struct {
Result interface{}
IP string
}{
{"1", "1.0.0.0"},
{"1", "1.255.255.255"},
{"2", "2.1.0.0"},
{"2", "2.1.255.255"},
{"3", "3.1.1.0"},
{"3", "3.1.1.255"},
{"4a", "4.1.1.255"},
{"4b", "4.1.1.2"},
{"4c", "4.1.1.1"},
{"5", "240.0.0.0"},
{"5", "255.255.255.255"},
{"6a", "1:2:0:4:1:1:1:1"},
{"6b", "1:2:0:4:5:1:1:1"},
{"6c", "1:2:0:4:5:0:0:0"},
{nil, "239.0.0.0"},
{nil, "4.1.2.2"},
}
for _, tt := range tests {
assert.Equal(t, tt.Result, tree.MostSpecificContains(net.ParseIP(tt.IP)))
}
tree = NewTree6()
tree.AddCIDR(Parse("1.1.1.1/0"), "cool")
tree.AddCIDR(Parse("::/0"), "cool6")
assert.Equal(t, "cool", tree.MostSpecificContains(net.ParseIP("0.0.0.0")))
assert.Equal(t, "cool", tree.MostSpecificContains(net.ParseIP("255.255.255.255")))
assert.Equal(t, "cool6", tree.MostSpecificContains(net.ParseIP("::")))
assert.Equal(t, "cool6", tree.MostSpecificContains(net.ParseIP("1:2:3:4:5:6:7:8")))
}
func TestCIDR6Tree_MostSpecificContainsIpV6(t *testing.T) {
tree := NewTree6()
tree.AddCIDR(Parse("1:2:0:4:5:0:0:0/64"), "6a")
tree.AddCIDR(Parse("1:2:0:4:5:0:0:0/80"), "6b")
tree.AddCIDR(Parse("1:2:0:4:5:0:0:0/96"), "6c")
tests := []struct {
Result interface{}
IP string
}{
{"6a", "1:2:0:4:1:1:1:1"},
{"6b", "1:2:0:4:5:1:1:1"},
{"6c", "1:2:0:4:5:0:0:0"},
}
for _, tt := range tests {
ip := net.ParseIP(tt.IP)
hi := binary.BigEndian.Uint64(ip[:8])
lo := binary.BigEndian.Uint64(ip[8:])
assert.Equal(t, tt.Result, tree.MostSpecificContainsIpV6(hi, lo))
}
}

View File

@@ -1,191 +0,0 @@
package main
import (
"encoding/binary"
"errors"
"flag"
"fmt"
"log"
"net"
"net/netip"
"time"
"unsafe"
"golang.org/x/sys/unix"
)
const (
// UDP_SEGMENT enables GSO segmentation
UDP_SEGMENT = 103
// Maximum GSO segment size (typical MTU - headers)
maxGSOSize = 1400
)
func main() {
destAddr := flag.String("dest", "10.4.0.16:4202", "Destination address")
gsoSize := flag.Int("gso", 1400, "GSO segment size")
totalSize := flag.Int("size", 14000, "Total payload size to send")
count := flag.Int("count", 1, "Number of packets to send")
flag.Parse()
if *gsoSize > maxGSOSize {
log.Fatalf("GSO size %d exceeds maximum %d", *gsoSize, maxGSOSize)
}
// Resolve destination address
_, err := net.ResolveUDPAddr("udp", *destAddr)
if err != nil {
log.Fatalf("Failed to resolve address: %v", err)
}
// Create a raw UDP socket with GSO support
fd, err := unix.Socket(unix.AF_INET, unix.SOCK_DGRAM, unix.IPPROTO_UDP)
if err != nil {
log.Fatalf("Failed to create socket: %v", err)
}
defer unix.Close(fd)
// Bind to a local address
localAddr := &unix.SockaddrInet4{
Port: 0, // Let the system choose a port
}
if err := unix.Bind(fd, localAddr); err != nil {
log.Fatalf("Failed to bind socket: %v", err)
}
fmt.Printf("Sending UDP packets with GSO enabled\n")
fmt.Printf("Destination: %s\n", *destAddr)
fmt.Printf("GSO segment size: %d bytes\n", *gsoSize)
fmt.Printf("Total payload size: %d bytes\n", *totalSize)
fmt.Printf("Number of packets: %d\n\n", *count)
// Create payload
payload := make([]byte, *totalSize)
for i := range payload {
payload[i] = byte(i % 256)
}
dest := netip.MustParseAddrPort(*destAddr)
//if err := unix.SetsockoptInt(fd, unix.SOL_UDP, unix.UDP_SEGMENT, 1400); err != nil {
// panic(err)
//}
for i := 0; i < *count; i++ {
err := WriteBatch(fd, payload, dest, uint16(*gsoSize), true)
if err != nil {
log.Printf("Send error on packet %d: %v", i, err)
continue
}
if (i+1)%100 == 0 || i == *count-1 {
fmt.Printf("Sent %d packets\n", i+1)
}
}
fmt.Printf("now, let's send without the correct ctrl header\n")
time.Sleep(time.Second)
for i := 0; i < *count; i++ {
err := WriteBatch(fd, payload, dest, uint16(*gsoSize), false)
if err != nil {
log.Printf("Send error on packet %d: %v", i, err)
continue
}
if (i+1)%100 == 0 || i == *count-1 {
fmt.Printf("Sent %d packets\n", i+1)
}
}
}
func WriteBatch(fd int, payload []byte, addr netip.AddrPort, segSize uint16, withHeader bool) error {
msgs := make([]rawMessage, 0, 1)
iovs := make([]iovec, 0, 1)
names := make([][unix.SizeofSockaddrInet6]byte, 0, 1)
sent := 0
pkts := []BatchPacket{
{
Payload: payload,
Addr: addr,
},
}
for _, pkt := range pkts {
if len(pkt.Payload) == 0 {
sent++
continue
}
msgs = append(msgs, rawMessage{})
iovs = append(iovs, iovec{})
names = append(names, [unix.SizeofSockaddrInet6]byte{})
idx := len(msgs) - 1
msg := &msgs[idx]
iov := &iovs[idx]
name := &names[idx]
setIovecSlice(iov, pkt.Payload)
msg.Hdr.Iov = iov
msg.Hdr.Iovlen = 1
if withHeader {
setRawMessageControl(msg, buildGSOControlMessage(segSize)) //
} else {
setRawMessageControl(msg, nil) //
}
msg.Hdr.Flags = 0
nameLen, err := encodeSockaddr(name[:], pkt.Addr)
if err != nil {
return err
}
msg.Hdr.Name = &name[0]
msg.Hdr.Namelen = nameLen
}
if len(msgs) == 0 {
return errors.New("nothing to write")
}
offset := 0
for offset < len(msgs) {
n, _, errno := unix.Syscall6(
unix.SYS_SENDMMSG,
uintptr(fd),
uintptr(unsafe.Pointer(&msgs[offset])),
uintptr(len(msgs)-offset),
0,
0,
0,
)
if errno != 0 {
if errno == unix.EINTR {
continue
}
return &net.OpError{Op: "sendmmsg", Err: errno}
}
if n == 0 {
break
}
offset += int(n)
}
return nil
}
func buildGSOControlMessage(segSize uint16) []byte {
control := make([]byte, unix.CmsgSpace(2))
hdr := (*unix.Cmsghdr)(unsafe.Pointer(&control[0]))
hdr.Level = unix.SOL_UDP
hdr.Type = unix.UDP_SEGMENT
setCmsgLen(hdr, unix.CmsgLen(2))
binary.NativeEndian.PutUint16(control[unix.CmsgLen(0):unix.CmsgLen(0)+2], uint16(segSize))
return control
}
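
WriteBatch above attaches a UDP_SEGMENT control message to every sendmmsg call; the commented-out SetsockoptInt in main points at the simpler per-socket route. A hedged sketch of that alternative (Linux-only; UDP GSO requires a reasonably recent kernel, roughly 4.18+), written as a hypothetical companion file in the same package:

// gso_sockopt.go — hypothetical helper alongside the sender above (Linux only).
package main

import "golang.org/x/sys/unix"

// enableSocketGSO turns on UDP GSO for the whole socket: after this call the
// kernel splits every datagram written on fd into segSize-byte wire segments,
// so no per-message UDP_SEGMENT cmsg is needed.
func enableSocketGSO(fd, segSize int) error {
	return unix.SetsockoptInt(fd, unix.SOL_UDP, unix.UDP_SEGMENT, segSize)
}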

View File

@@ -1,85 +0,0 @@
package main
import (
"encoding/binary"
"fmt"
"net/netip"
"unsafe"
"golang.org/x/sys/unix"
)
type iovec struct {
Base *byte
Len uint64
}
type msghdr struct {
Name *byte
Namelen uint32
Pad0 [4]byte
Iov *iovec
Iovlen uint64
Control *byte
Controllen uint64
Flags int32
Pad1 [4]byte
}
type rawMessage struct {
Hdr msghdr
Len uint32
Pad0 [4]byte
}
type BatchPacket struct {
Payload []byte
Addr netip.AddrPort
}
func encodeSockaddr(dst []byte, addr netip.AddrPort) (uint32, error) {
if addr.Addr().Is4() {
var sa unix.RawSockaddrInet4
sa.Family = unix.AF_INET
sa.Addr = addr.Addr().As4()
binary.BigEndian.PutUint16((*[2]byte)(unsafe.Pointer(&sa.Port))[:], addr.Port())
size := unix.SizeofSockaddrInet4
copy(dst[:size], (*(*[unix.SizeofSockaddrInet4]byte)(unsafe.Pointer(&sa)))[:])
return uint32(size), nil
}
var sa unix.RawSockaddrInet6
sa.Family = unix.AF_INET6
sa.Addr = addr.Addr().As16()
binary.BigEndian.PutUint16((*[2]byte)(unsafe.Pointer(&sa.Port))[:], addr.Port())
size := unix.SizeofSockaddrInet6
copy(dst[:size], (*(*[unix.SizeofSockaddrInet6]byte)(unsafe.Pointer(&sa)))[:])
return uint32(size), nil
}
func setRawMessageControl(msg *rawMessage, buf []byte) {
if len(buf) == 0 {
msg.Hdr.Control = nil
msg.Hdr.Controllen = 0
return
}
msg.Hdr.Control = &buf[0]
msg.Hdr.Controllen = uint64(len(buf))
}
func setCmsgLen(h *unix.Cmsghdr, l int) {
h.Len = uint64(l)
}
func setIovecSlice(iov *iovec, b []byte) {
if len(b) == 0 {
iov.Base = nil
iov.Len = 0
return
}
iov.Base = &b[0]
iov.Len = uint64(len(b))
}
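
sendmmsg reads the msghdr and iovec structs above directly, so their field layout has to match the kernel's definitions for the build target. A hypothetical sanity-check test for the same package (assumes linux/amd64, where x/sys/unix publishes the expected sizes):

// layout_test.go — hypothetical test file in the same package.
package main

import (
	"testing"
	"unsafe"

	"golang.org/x/sys/unix"
)

func TestRawStructLayout(t *testing.T) {
	if got := unsafe.Sizeof(msghdr{}); got != uintptr(unix.SizeofMsghdr) {
		t.Fatalf("msghdr is %d bytes, kernel expects %d", got, unix.SizeofMsghdr)
	}
	if got := unsafe.Sizeof(iovec{}); got != uintptr(unix.SizeofIovec) {
		t.Fatalf("iovec is %d bytes, kernel expects %d", got, unix.SizeofIovec)
	}
}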

View File

@@ -1,112 +1,63 @@
package main package main
import ( import (
"crypto/ecdsa"
"crypto/elliptic"
"crypto/rand" "crypto/rand"
"flag" "flag"
"fmt" "fmt"
"io" "io"
"math" "io/ioutil"
"net/netip" "net"
"os" "os"
"strings" "strings"
"time" "time"
"github.com/skip2/go-qrcode" "github.com/skip2/go-qrcode"
"github.com/slackhq/nebula/cert" "github.com/slackhq/nebula/cert"
"github.com/slackhq/nebula/pkclient"
"golang.org/x/crypto/ed25519" "golang.org/x/crypto/ed25519"
) )
type caFlags struct { type caFlags struct {
set *flag.FlagSet set *flag.FlagSet
name *string name *string
duration *time.Duration duration *time.Duration
outKeyPath *string outKeyPath *string
outCertPath *string outCertPath *string
outQRPath *string outQRPath *string
groups *string groups *string
networks *string ips *string
unsafeNetworks *string subnets *string
argonMemory *uint
argonIterations *uint
argonParallelism *uint
encryption *bool
version *uint
curve *string
p11url *string
// Deprecated options
ips *string
subnets *string
} }
func newCaFlags() *caFlags { func newCaFlags() *caFlags {
cf := caFlags{set: flag.NewFlagSet("ca", flag.ContinueOnError)} cf := caFlags{set: flag.NewFlagSet("ca", flag.ContinueOnError)}
cf.set.Usage = func() {} cf.set.Usage = func() {}
cf.name = cf.set.String("name", "", "Required: name of the certificate authority") cf.name = cf.set.String("name", "", "Required: name of the certificate authority")
cf.version = cf.set.Uint("version", uint(cert.Version2), "Optional: version of the certificate format to use")
cf.duration = cf.set.Duration("duration", time.Duration(time.Hour*8760), "Optional: amount of time the certificate should be valid for. Valid time units are seconds: \"s\", minutes: \"m\", hours: \"h\"") cf.duration = cf.set.Duration("duration", time.Duration(time.Hour*8760), "Optional: amount of time the certificate should be valid for. Valid time units are seconds: \"s\", minutes: \"m\", hours: \"h\"")
cf.outKeyPath = cf.set.String("out-key", "ca.key", "Optional: path to write the private key to") cf.outKeyPath = cf.set.String("out-key", "ca.key", "Optional: path to write the private key to")
cf.outCertPath = cf.set.String("out-crt", "ca.crt", "Optional: path to write the certificate to") cf.outCertPath = cf.set.String("out-crt", "ca.crt", "Optional: path to write the certificate to")
cf.outQRPath = cf.set.String("out-qr", "", "Optional: output a qr code image (png) of the certificate") cf.outQRPath = cf.set.String("out-qr", "", "Optional: output a qr code image (png) of the certificate")
cf.groups = cf.set.String("groups", "", "Optional: comma separated list of groups. This will limit which groups subordinate certs can use") cf.groups = cf.set.String("groups", "", "Optional: comma separated list of groups. This will limit which groups subordinate certs can use")
cf.networks = cf.set.String("networks", "", "Optional: comma separated list of ip address and network in CIDR notation. This will limit which ip addresses and networks subordinate certs can use in networks") cf.ips = cf.set.String("ips", "", "Optional: comma separated list of ip and network in CIDR notation. This will limit which ip addresses and networks subordinate certs can use")
cf.unsafeNetworks = cf.set.String("unsafe-networks", "", "Optional: comma separated list of ip address and network in CIDR notation. This will limit which ip addresses and networks subordinate certs can use in unsafe networks") cf.subnets = cf.set.String("subnets", "", "Optional: comma separated list of ip and network in CIDR notation. This will limit which subnet addresses and networks subordinate certs can use")
cf.argonMemory = cf.set.Uint("argon-memory", 2*1024*1024, "Optional: Argon2 memory parameter (in KiB) used for encrypted private key passphrase")
cf.argonParallelism = cf.set.Uint("argon-parallelism", 4, "Optional: Argon2 parallelism parameter used for encrypted private key passphrase")
cf.argonIterations = cf.set.Uint("argon-iterations", 1, "Optional: Argon2 iterations parameter used for encrypted private key passphrase")
cf.encryption = cf.set.Bool("encrypt", false, "Optional: prompt for passphrase and write out-key in an encrypted format")
cf.curve = cf.set.String("curve", "25519", "EdDSA/ECDSA Curve (25519, P256)")
cf.p11url = p11Flag(cf.set)
cf.ips = cf.set.String("ips", "", "Deprecated, see -networks")
cf.subnets = cf.set.String("subnets", "", "Deprecated, see -unsafe-networks")
return &cf return &cf
} }
func parseArgonParameters(memory uint, parallelism uint, iterations uint) (*cert.Argon2Parameters, error) { func ca(args []string, out io.Writer, errOut io.Writer) error {
if memory <= 0 || memory > math.MaxUint32 {
return nil, newHelpErrorf("-argon-memory must be be greater than 0 and no more than %d KiB", uint32(math.MaxUint32))
}
if parallelism <= 0 || parallelism > math.MaxUint8 {
return nil, newHelpErrorf("-argon-parallelism must be be greater than 0 and no more than %d", math.MaxUint8)
}
if iterations <= 0 || iterations > math.MaxUint32 {
return nil, newHelpErrorf("-argon-iterations must be be greater than 0 and no more than %d", uint32(math.MaxUint32))
}
return cert.NewArgon2Parameters(uint32(memory), uint8(parallelism), uint32(iterations)), nil
}
func ca(args []string, out io.Writer, errOut io.Writer, pr PasswordReader) error {
cf := newCaFlags() cf := newCaFlags()
err := cf.set.Parse(args) err := cf.set.Parse(args)
if err != nil { if err != nil {
return err return err
} }
isP11 := len(*cf.p11url) > 0
if err := mustFlagString("name", cf.name); err != nil { if err := mustFlagString("name", cf.name); err != nil {
return err return err
} }
if !isP11 { if err := mustFlagString("out-key", cf.outKeyPath); err != nil {
if err = mustFlagString("out-key", cf.outKeyPath); err != nil { return err
return err
}
} }
if err := mustFlagString("out-crt", cf.outCertPath); err != nil { if err := mustFlagString("out-crt", cf.outCertPath); err != nil {
return err return err
} }
var kdfParams *cert.Argon2Parameters
if !isP11 && *cf.encryption {
if kdfParams, err = parseArgonParameters(*cf.argonMemory, *cf.argonParallelism, *cf.argonIterations); err != nil {
return err
}
}
if *cf.duration <= 0 { if *cf.duration <= 0 {
return &helpError{"-duration must be greater than 0"} return &helpError{"-duration must be greater than 0"}
@@ -122,187 +73,78 @@ func ca(args []string, out io.Writer, errOut io.Writer, pr PasswordReader) error
} }
} }
version := cert.Version(*cf.version) var ips []*net.IPNet
if version != cert.Version1 && version != cert.Version2 { if *cf.ips != "" {
return newHelpErrorf("-version must be either %v or %v", cert.Version1, cert.Version2) for _, rs := range strings.Split(*cf.ips, ",") {
}
var networks []netip.Prefix
if *cf.networks == "" && *cf.ips != "" {
// Pull up deprecated -ips flag if needed
*cf.networks = *cf.ips
}
if *cf.networks != "" {
for _, rs := range strings.Split(*cf.networks, ",") {
rs := strings.Trim(rs, " ") rs := strings.Trim(rs, " ")
if rs != "" { if rs != "" {
n, err := netip.ParsePrefix(rs) ip, ipNet, err := net.ParseCIDR(rs)
if err != nil { if err != nil {
return newHelpErrorf("invalid -networks definition: %s", rs) return newHelpErrorf("invalid ip definition: %s", err)
} }
if version == cert.Version1 && !n.Addr().Is4() {
return newHelpErrorf("invalid -networks definition: v1 certificates can only be ipv4, have %s", rs) ipNet.IP = ip
} ips = append(ips, ipNet)
networks = append(networks, n)
} }
} }
} }
var unsafeNetworks []netip.Prefix var subnets []*net.IPNet
if *cf.unsafeNetworks == "" && *cf.subnets != "" { if *cf.subnets != "" {
// Pull up deprecated -subnets flag if needed for _, rs := range strings.Split(*cf.subnets, ",") {
*cf.unsafeNetworks = *cf.subnets
}
if *cf.unsafeNetworks != "" {
for _, rs := range strings.Split(*cf.unsafeNetworks, ",") {
rs := strings.Trim(rs, " ") rs := strings.Trim(rs, " ")
if rs != "" { if rs != "" {
n, err := netip.ParsePrefix(rs) _, s, err := net.ParseCIDR(rs)
if err != nil { if err != nil {
return newHelpErrorf("invalid -unsafe-networks definition: %s", rs) return newHelpErrorf("invalid subnet definition: %s", err)
} }
if version == cert.Version1 && !n.Addr().Is4() { subnets = append(subnets, s)
return newHelpErrorf("invalid -unsafe-networks definition: v1 certificates can only be ipv4, have %s", rs)
}
unsafeNetworks = append(unsafeNetworks, n)
} }
} }
} }
var passphrase []byte pub, rawPriv, err := ed25519.GenerateKey(rand.Reader)
if !isP11 && *cf.encryption { if err != nil {
for i := 0; i < 5; i++ { return fmt.Errorf("error while generating ed25519 keys: %s", err)
out.Write([]byte("Enter passphrase: "))
passphrase, err = pr.ReadPassword()
if err == ErrNoTerminal {
return fmt.Errorf("out-key must be encrypted interactively")
} else if err != nil {
return fmt.Errorf("error reading passphrase: %s", err)
}
if len(passphrase) > 0 {
break
}
}
if len(passphrase) == 0 {
return fmt.Errorf("no passphrase specified, remove -encrypt flag to write out-key in plaintext")
}
} }
var curve cert.Curve nc := cert.NebulaCertificate{
var pub, rawPriv []byte Details: cert.NebulaCertificateDetails{
var p11Client *pkclient.PKClient Name: *cf.name,
Groups: groups,
if isP11 { Ips: ips,
switch *cf.curve { Subnets: subnets,
case "P256": NotBefore: time.Now(),
curve = cert.Curve_P256 NotAfter: time.Now().Add(*cf.duration),
default: PublicKey: pub,
return fmt.Errorf("invalid curve for PKCS#11: %s", *cf.curve) IsCA: true,
} },
p11Client, err = pkclient.FromUrl(*cf.p11url)
if err != nil {
return fmt.Errorf("error while creating PKCS#11 client: %w", err)
}
defer func(client *pkclient.PKClient) {
_ = client.Close()
}(p11Client)
pub, err = p11Client.GetPubKey()
if err != nil {
return fmt.Errorf("error while getting public key with PKCS#11: %w", err)
}
} else {
switch *cf.curve {
case "25519", "X25519", "Curve25519", "CURVE25519":
curve = cert.Curve_CURVE25519
pub, rawPriv, err = ed25519.GenerateKey(rand.Reader)
if err != nil {
return fmt.Errorf("error while generating ed25519 keys: %s", err)
}
case "P256":
var key *ecdsa.PrivateKey
curve = cert.Curve_P256
key, err = ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
if err != nil {
return fmt.Errorf("error while generating ecdsa keys: %s", err)
}
// ecdh.PrivateKey lets us get at the encoded bytes, even though
// we aren't using ECDH here.
eKey, err := key.ECDH()
if err != nil {
return fmt.Errorf("error while converting ecdsa key: %s", err)
}
rawPriv = eKey.Bytes()
pub = eKey.PublicKey().Bytes()
default:
return fmt.Errorf("invalid curve: %s", *cf.curve)
}
} }
t := &cert.TBSCertificate{ if _, err := os.Stat(*cf.outKeyPath); err == nil {
Version: version, return fmt.Errorf("refusing to overwrite existing CA key: %s", *cf.outKeyPath)
Name: *cf.name,
Groups: groups,
Networks: networks,
UnsafeNetworks: unsafeNetworks,
NotBefore: time.Now(),
NotAfter: time.Now().Add(*cf.duration),
PublicKey: pub,
IsCA: true,
Curve: curve,
}
if !isP11 {
if _, err := os.Stat(*cf.outKeyPath); err == nil {
return fmt.Errorf("refusing to overwrite existing CA key: %s", *cf.outKeyPath)
}
} }
if _, err := os.Stat(*cf.outCertPath); err == nil { if _, err := os.Stat(*cf.outCertPath); err == nil {
return fmt.Errorf("refusing to overwrite existing CA cert: %s", *cf.outCertPath) return fmt.Errorf("refusing to overwrite existing CA cert: %s", *cf.outCertPath)
} }
var c cert.Certificate err = nc.Sign(rawPriv)
var b []byte if err != nil {
return fmt.Errorf("error while signing: %s", err)
if isP11 {
c, err = t.SignWith(nil, curve, p11Client.SignASN1)
if err != nil {
return fmt.Errorf("error while signing with PKCS#11: %w", err)
}
} else {
c, err = t.Sign(nil, curve, rawPriv)
if err != nil {
return fmt.Errorf("error while signing: %s", err)
}
if *cf.encryption {
b, err = cert.EncryptAndMarshalSigningPrivateKey(curve, rawPriv, passphrase, kdfParams)
if err != nil {
return fmt.Errorf("error while encrypting out-key: %s", err)
}
} else {
b = cert.MarshalSigningPrivateKeyToPEM(curve, rawPriv)
}
err = os.WriteFile(*cf.outKeyPath, b, 0600)
if err != nil {
return fmt.Errorf("error while writing out-key: %s", err)
}
} }
b, err = c.MarshalPEM() err = ioutil.WriteFile(*cf.outKeyPath, cert.MarshalEd25519PrivateKey(rawPriv), 0600)
if err != nil {
return fmt.Errorf("error while writing out-key: %s", err)
}
b, err := nc.MarshalToPEM()
if err != nil { if err != nil {
return fmt.Errorf("error while marshalling certificate: %s", err) return fmt.Errorf("error while marshalling certificate: %s", err)
} }
err = os.WriteFile(*cf.outCertPath, b, 0600) err = ioutil.WriteFile(*cf.outCertPath, b, 0600)
if err != nil { if err != nil {
return fmt.Errorf("error while writing out-crt: %s", err) return fmt.Errorf("error while writing out-crt: %s", err)
} }
@@ -313,7 +155,7 @@ func ca(args []string, out io.Writer, errOut io.Writer, pr PasswordReader) error
return fmt.Errorf("error while generating qr code: %s", err) return fmt.Errorf("error while generating qr code: %s", err)
} }
err = os.WriteFile(*cf.outQRPath, b, 0600) err = ioutil.WriteFile(*cf.outQRPath, b, 0600)
if err != nil { if err != nil {
return fmt.Errorf("error while writing out-qr: %s", err) return fmt.Errorf("error while writing out-qr: %s", err)
} }
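
On the left-hand (master) side of this hunk the -networks flag is parsed with net/netip and, for version 1 certificates, restricted to IPv4. A compact standalone sketch of that validation step (function and variable names here are illustrative, not nebula's):

package main

import (
	"fmt"
	"net/netip"
	"strings"
)

// parseNetworks mirrors the flag handling shown above: split on commas, parse
// each entry as a CIDR prefix, and reject non-IPv4 entries for v1 certificates.
func parseNetworks(flagVal string, v1 bool) ([]netip.Prefix, error) {
	var networks []netip.Prefix
	for _, rs := range strings.Split(flagVal, ",") {
		rs = strings.TrimSpace(rs)
		if rs == "" {
			continue
		}
		n, err := netip.ParsePrefix(rs)
		if err != nil {
			return nil, fmt.Errorf("invalid -networks definition: %s", rs)
		}
		if v1 && !n.Addr().Is4() {
			return nil, fmt.Errorf("invalid -networks definition: v1 certificates can only be ipv4, have %s", rs)
		}
		networks = append(networks, n)
	}
	return networks, nil
}

func main() {
	fmt.Println(parseNetworks("10.0.0.0/8, 192.168.0.0/16", true)) // both accepted
	fmt.Println(parseNetworks("100::100/100", true))               // rejected for v1
}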

View File

@@ -5,18 +5,17 @@ package main
import ( import (
"bytes" "bytes"
"encoding/pem" "io/ioutil"
"errors"
"os" "os"
"strings"
"testing" "testing"
"time" "time"
"github.com/slackhq/nebula/cert" "github.com/slackhq/nebula/cert"
"github.com/stretchr/testify/assert" "github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
) )
//TODO: test file permissions
func Test_caSummary(t *testing.T) { func Test_caSummary(t *testing.T) {
assert.Equal(t, "ca <flags>: create a self signed certificate authority", caSummary()) assert.Equal(t, "ca <flags>: create a self signed certificate authority", caSummary())
} }
@@ -27,39 +26,22 @@ func Test_caHelp(t *testing.T) {
assert.Equal( assert.Equal(
t, t,
"Usage of "+os.Args[0]+" ca <flags>: create a self signed certificate authority\n"+ "Usage of "+os.Args[0]+" ca <flags>: create a self signed certificate authority\n"+
" -argon-iterations uint\n"+
" \tOptional: Argon2 iterations parameter used for encrypted private key passphrase (default 1)\n"+
" -argon-memory uint\n"+
" \tOptional: Argon2 memory parameter (in KiB) used for encrypted private key passphrase (default 2097152)\n"+
" -argon-parallelism uint\n"+
" \tOptional: Argon2 parallelism parameter used for encrypted private key passphrase (default 4)\n"+
" -curve string\n"+
" \tEdDSA/ECDSA Curve (25519, P256) (default \"25519\")\n"+
" -duration duration\n"+ " -duration duration\n"+
" \tOptional: amount of time the certificate should be valid for. Valid time units are seconds: \"s\", minutes: \"m\", hours: \"h\" (default 8760h0m0s)\n"+ " \tOptional: amount of time the certificate should be valid for. Valid time units are seconds: \"s\", minutes: \"m\", hours: \"h\" (default 8760h0m0s)\n"+
" -encrypt\n"+
" \tOptional: prompt for passphrase and write out-key in an encrypted format\n"+
" -groups string\n"+ " -groups string\n"+
" \tOptional: comma separated list of groups. This will limit which groups subordinate certs can use\n"+ " \tOptional: comma separated list of groups. This will limit which groups subordinate certs can use\n"+
" -ips string\n"+ " -ips string\n"+
" Deprecated, see -networks\n"+ " \tOptional: comma separated list of ip and network in CIDR notation. This will limit which ip addresses and networks subordinate certs can use\n"+
" -name string\n"+ " -name string\n"+
" \tRequired: name of the certificate authority\n"+ " \tRequired: name of the certificate authority\n"+
" -networks string\n"+
" \tOptional: comma separated list of ip address and network in CIDR notation. This will limit which ip addresses and networks subordinate certs can use in networks\n"+
" -out-crt string\n"+ " -out-crt string\n"+
" \tOptional: path to write the certificate to (default \"ca.crt\")\n"+ " \tOptional: path to write the certificate to (default \"ca.crt\")\n"+
" -out-key string\n"+ " -out-key string\n"+
" \tOptional: path to write the private key to (default \"ca.key\")\n"+ " \tOptional: path to write the private key to (default \"ca.key\")\n"+
" -out-qr string\n"+ " -out-qr string\n"+
" \tOptional: output a qr code image (png) of the certificate\n"+ " \tOptional: output a qr code image (png) of the certificate\n"+
optionalPkcs11String(" -pkcs11 string\n \tOptional: PKCS#11 URI to an existing private key\n")+
" -subnets string\n"+ " -subnets string\n"+
" \tDeprecated, see -unsafe-networks\n"+ " \tOptional: comma separated list of ip and network in CIDR notation. This will limit which subnet addresses and networks subordinate certs can use\n",
" -unsafe-networks string\n"+
" \tOptional: comma separated list of ip address and network in CIDR notation. This will limit which ip addresses and networks subordinate certs can use in unsafe networks\n"+
" -version uint\n"+
" \tOptional: version of the certificate format to use (default 2)\n",
ob.String(), ob.String(),
) )
} }
@@ -68,171 +50,92 @@ func Test_ca(t *testing.T) {
ob := &bytes.Buffer{} ob := &bytes.Buffer{}
eb := &bytes.Buffer{} eb := &bytes.Buffer{}
nopw := &StubPasswordReader{
password: []byte(""),
err: nil,
}
errpw := &StubPasswordReader{
password: []byte(""),
err: errors.New("stub error"),
}
passphrase := []byte("DO NOT USE THIS KEY")
testpw := &StubPasswordReader{
password: passphrase,
err: nil,
}
pwPromptOb := "Enter passphrase: "
// required args // required args
assertHelpError(t, ca( assertHelpError(t, ca([]string{"-out-key", "nope", "-out-crt", "nope", "duration", "100m"}, ob, eb), "-name is required")
[]string{"-version", "1", "-out-key", "nope", "-out-crt", "nope", "duration", "100m"}, ob, eb, nopw, assert.Equal(t, "", ob.String())
), "-name is required") assert.Equal(t, "", eb.String())
assert.Empty(t, ob.String())
assert.Empty(t, eb.String())
// ipv4 only ips
assertHelpError(t, ca([]string{"-version", "1", "-name", "ipv6", "-ips", "100::100/100"}, ob, eb, nopw), "invalid -networks definition: v1 certificates can only be ipv4, have 100::100/100")
assert.Empty(t, ob.String())
assert.Empty(t, eb.String())
// ipv4 only subnets
assertHelpError(t, ca([]string{"-version", "1", "-name", "ipv6", "-subnets", "100::100/100"}, ob, eb, nopw), "invalid -unsafe-networks definition: v1 certificates can only be ipv4, have 100::100/100")
assert.Empty(t, ob.String())
assert.Empty(t, eb.String())
// failed key write // failed key write
ob.Reset() ob.Reset()
eb.Reset() eb.Reset()
args := []string{"-version", "1", "-name", "test", "-duration", "100m", "-out-crt", "/do/not/write/pleasecrt", "-out-key", "/do/not/write/pleasekey"} args := []string{"-name", "test", "-duration", "100m", "-out-crt", "/do/not/write/pleasecrt", "-out-key", "/do/not/write/pleasekey"}
require.EqualError(t, ca(args, ob, eb, nopw), "error while writing out-key: open /do/not/write/pleasekey: "+NoSuchDirError) assert.EqualError(t, ca(args, ob, eb), "error while writing out-key: open /do/not/write/pleasekey: "+NoSuchDirError)
assert.Empty(t, ob.String()) assert.Equal(t, "", ob.String())
assert.Empty(t, eb.String()) assert.Equal(t, "", eb.String())
// create temp key file // create temp key file
keyF, err := os.CreateTemp("", "test.key") keyF, err := ioutil.TempFile("", "test.key")
require.NoError(t, err) assert.Nil(t, err)
require.NoError(t, os.Remove(keyF.Name())) os.Remove(keyF.Name())
// failed cert write // failed cert write
ob.Reset() ob.Reset()
eb.Reset() eb.Reset()
args = []string{"-version", "1", "-name", "test", "-duration", "100m", "-out-crt", "/do/not/write/pleasecrt", "-out-key", keyF.Name()} args = []string{"-name", "test", "-duration", "100m", "-out-crt", "/do/not/write/pleasecrt", "-out-key", keyF.Name()}
require.EqualError(t, ca(args, ob, eb, nopw), "error while writing out-crt: open /do/not/write/pleasecrt: "+NoSuchDirError) assert.EqualError(t, ca(args, ob, eb), "error while writing out-crt: open /do/not/write/pleasecrt: "+NoSuchDirError)
assert.Empty(t, ob.String()) assert.Equal(t, "", ob.String())
assert.Empty(t, eb.String()) assert.Equal(t, "", eb.String())
// create temp cert file // create temp cert file
crtF, err := os.CreateTemp("", "test.crt") crtF, err := ioutil.TempFile("", "test.crt")
require.NoError(t, err) assert.Nil(t, err)
require.NoError(t, os.Remove(crtF.Name())) os.Remove(crtF.Name())
require.NoError(t, os.Remove(keyF.Name())) os.Remove(keyF.Name())
// test proper cert with removed empty groups and subnets // test proper cert with removed empty groups and subnets
ob.Reset() ob.Reset()
eb.Reset() eb.Reset()
args = []string{"-version", "1", "-name", "test", "-duration", "100m", "-groups", "1,, 2 , ,,,3,4,5", "-out-crt", crtF.Name(), "-out-key", keyF.Name()} args = []string{"-name", "test", "-duration", "100m", "-groups", "1,, 2 , ,,,3,4,5", "-out-crt", crtF.Name(), "-out-key", keyF.Name()}
require.NoError(t, ca(args, ob, eb, nopw)) assert.Nil(t, ca(args, ob, eb))
assert.Empty(t, ob.String()) assert.Equal(t, "", ob.String())
assert.Empty(t, eb.String()) assert.Equal(t, "", eb.String())
// read cert and key files // read cert and key files
rb, _ := os.ReadFile(keyF.Name()) rb, _ := ioutil.ReadFile(keyF.Name())
lKey, b, c, err := cert.UnmarshalSigningPrivateKeyFromPEM(rb) lKey, b, err := cert.UnmarshalEd25519PrivateKey(rb)
assert.Equal(t, cert.Curve_CURVE25519, c) assert.Len(t, b, 0)
assert.Empty(t, b) assert.Nil(t, err)
require.NoError(t, err)
assert.Len(t, lKey, 64) assert.Len(t, lKey, 64)
rb, _ = os.ReadFile(crtF.Name()) rb, _ = ioutil.ReadFile(crtF.Name())
lCrt, b, err := cert.UnmarshalCertificateFromPEM(rb) lCrt, b, err := cert.UnmarshalNebulaCertificateFromPEM(rb)
assert.Empty(t, b) assert.Len(t, b, 0)
require.NoError(t, err) assert.Nil(t, err)
assert.Equal(t, "test", lCrt.Name()) assert.Equal(t, "test", lCrt.Details.Name)
assert.Empty(t, lCrt.Networks()) assert.Len(t, lCrt.Details.Ips, 0)
assert.True(t, lCrt.IsCA()) assert.True(t, lCrt.Details.IsCA)
assert.Equal(t, []string{"1", "2", "3", "4", "5"}, lCrt.Groups()) assert.Equal(t, []string{"1", "2", "3", "4", "5"}, lCrt.Details.Groups)
assert.Empty(t, lCrt.UnsafeNetworks()) assert.Len(t, lCrt.Details.Subnets, 0)
assert.Len(t, lCrt.PublicKey(), 32) assert.Len(t, lCrt.Details.PublicKey, 32)
assert.Equal(t, time.Duration(time.Minute*100), lCrt.NotAfter().Sub(lCrt.NotBefore())) assert.Equal(t, time.Duration(time.Minute*100), lCrt.Details.NotAfter.Sub(lCrt.Details.NotBefore))
assert.Empty(t, lCrt.Issuer()) assert.Equal(t, "", lCrt.Details.Issuer)
assert.True(t, lCrt.CheckSignature(lCrt.PublicKey())) assert.True(t, lCrt.CheckSignature(lCrt.Details.PublicKey))
// test encrypted key
os.Remove(keyF.Name())
os.Remove(crtF.Name())
ob.Reset()
eb.Reset()
args = []string{"-version", "1", "-encrypt", "-name", "test", "-duration", "100m", "-groups", "1,2,3,4,5", "-out-crt", crtF.Name(), "-out-key", keyF.Name()}
require.NoError(t, ca(args, ob, eb, testpw))
assert.Equal(t, pwPromptOb, ob.String())
assert.Empty(t, eb.String())
// read encrypted key file and verify default params
rb, _ = os.ReadFile(keyF.Name())
k, _ := pem.Decode(rb)
ned, err := cert.UnmarshalNebulaEncryptedData(k.Bytes)
require.NoError(t, err)
// we won't know salt in advance, so just check start of string
assert.Equal(t, uint32(2*1024*1024), ned.EncryptionMetadata.Argon2Parameters.Memory)
assert.Equal(t, uint8(4), ned.EncryptionMetadata.Argon2Parameters.Parallelism)
assert.Equal(t, uint32(1), ned.EncryptionMetadata.Argon2Parameters.Iterations)
// verify the key is valid and decrypt-able
var curve cert.Curve
curve, lKey, b, err = cert.DecryptAndUnmarshalSigningPrivateKey(passphrase, rb)
assert.Equal(t, cert.Curve_CURVE25519, curve)
require.NoError(t, err)
assert.Empty(t, b)
assert.Len(t, lKey, 64)
// test when reading passsword results in an error
os.Remove(keyF.Name())
os.Remove(crtF.Name())
ob.Reset()
eb.Reset()
args = []string{"-version", "1", "-encrypt", "-name", "test", "-duration", "100m", "-groups", "1,2,3,4,5", "-out-crt", crtF.Name(), "-out-key", keyF.Name()}
require.Error(t, ca(args, ob, eb, errpw))
assert.Equal(t, pwPromptOb, ob.String())
assert.Empty(t, eb.String())
// test when user fails to enter a password
os.Remove(keyF.Name())
os.Remove(crtF.Name())
ob.Reset()
eb.Reset()
args = []string{"-version", "1", "-encrypt", "-name", "test", "-duration", "100m", "-groups", "1,2,3,4,5", "-out-crt", crtF.Name(), "-out-key", keyF.Name()}
require.EqualError(t, ca(args, ob, eb, nopw), "no passphrase specified, remove -encrypt flag to write out-key in plaintext")
assert.Equal(t, strings.Repeat(pwPromptOb, 5), ob.String()) // prompts 5 times before giving up
assert.Empty(t, eb.String())
// create valid cert/key for overwrite tests // create valid cert/key for overwrite tests
os.Remove(keyF.Name()) os.Remove(keyF.Name())
os.Remove(crtF.Name()) os.Remove(crtF.Name())
ob.Reset() ob.Reset()
eb.Reset() eb.Reset()
args = []string{"-version", "1", "-name", "test", "-duration", "100m", "-groups", "1,, 2 , ,,,3,4,5", "-out-crt", crtF.Name(), "-out-key", keyF.Name()} args = []string{"-name", "test", "-duration", "100m", "-groups", "1,, 2 , ,,,3,4,5", "-out-crt", crtF.Name(), "-out-key", keyF.Name()}
require.NoError(t, ca(args, ob, eb, nopw)) assert.Nil(t, ca(args, ob, eb))
// test that we won't overwrite existing certificate file // test that we won't overwrite existing certificate file
ob.Reset() ob.Reset()
eb.Reset() eb.Reset()
args = []string{"-version", "1", "-name", "test", "-duration", "100m", "-groups", "1,, 2 , ,,,3,4,5", "-out-crt", crtF.Name(), "-out-key", keyF.Name()} args = []string{"-name", "test", "-duration", "100m", "-groups", "1,, 2 , ,,,3,4,5", "-out-crt", crtF.Name(), "-out-key", keyF.Name()}
require.EqualError(t, ca(args, ob, eb, nopw), "refusing to overwrite existing CA key: "+keyF.Name()) assert.EqualError(t, ca(args, ob, eb), "refusing to overwrite existing CA key: "+keyF.Name())
assert.Empty(t, ob.String()) assert.Equal(t, "", ob.String())
assert.Empty(t, eb.String()) assert.Equal(t, "", eb.String())
// test that we won't overwrite existing key file // test that we won't overwrite existing key file
os.Remove(keyF.Name()) os.Remove(keyF.Name())
ob.Reset() ob.Reset()
eb.Reset() eb.Reset()
args = []string{"-version", "1", "-name", "test", "-duration", "100m", "-groups", "1,, 2 , ,,,3,4,5", "-out-crt", crtF.Name(), "-out-key", keyF.Name()} args = []string{"-name", "test", "-duration", "100m", "-groups", "1,, 2 , ,,,3,4,5", "-out-crt", crtF.Name(), "-out-key", keyF.Name()}
require.EqualError(t, ca(args, ob, eb, nopw), "refusing to overwrite existing CA cert: "+crtF.Name()) assert.EqualError(t, ca(args, ob, eb), "refusing to overwrite existing CA cert: "+crtF.Name())
assert.Empty(t, ob.String()) assert.Equal(t, "", ob.String())
assert.Empty(t, eb.String()) assert.Equal(t, "", eb.String())
os.Remove(keyF.Name()) os.Remove(keyF.Name())
} }

View File

@@ -4,10 +4,9 @@ import (
"flag" "flag"
"fmt" "fmt"
"io" "io"
"io/ioutil"
"os" "os"
"github.com/slackhq/nebula/pkclient"
"github.com/slackhq/nebula/cert" "github.com/slackhq/nebula/cert"
) )
@@ -15,8 +14,6 @@ type keygenFlags struct {
set *flag.FlagSet set *flag.FlagSet
outKeyPath *string outKeyPath *string
outPubPath *string outPubPath *string
curve *string
p11url *string
} }
func newKeygenFlags() *keygenFlags { func newKeygenFlags() *keygenFlags {
@@ -24,8 +21,6 @@ func newKeygenFlags() *keygenFlags {
cf.set.Usage = func() {} cf.set.Usage = func() {}
cf.outPubPath = cf.set.String("out-pub", "", "Required: path to write the public key to") cf.outPubPath = cf.set.String("out-pub", "", "Required: path to write the public key to")
cf.outKeyPath = cf.set.String("out-key", "", "Required: path to write the private key to") cf.outKeyPath = cf.set.String("out-key", "", "Required: path to write the private key to")
cf.curve = cf.set.String("curve", "25519", "ECDH Curve (25519, P256)")
cf.p11url = p11Flag(cf.set)
return &cf return &cf
} }
@@ -36,58 +31,21 @@ func keygen(args []string, out io.Writer, errOut io.Writer) error {
return err return err
} }
isP11 := len(*cf.p11url) > 0 if err := mustFlagString("out-key", cf.outKeyPath); err != nil {
return err
if !isP11 {
if err = mustFlagString("out-key", cf.outKeyPath); err != nil {
return err
}
} }
if err = mustFlagString("out-pub", cf.outPubPath); err != nil { if err := mustFlagString("out-pub", cf.outPubPath); err != nil {
return err return err
} }
var pub, rawPriv []byte pub, rawPriv := x25519Keypair()
var curve cert.Curve
if isP11 { err = ioutil.WriteFile(*cf.outKeyPath, cert.MarshalX25519PrivateKey(rawPriv), 0600)
switch *cf.curve { if err != nil {
case "P256": return fmt.Errorf("error while writing out-key: %s", err)
curve = cert.Curve_P256
default:
return fmt.Errorf("invalid curve for PKCS#11: %s", *cf.curve)
}
} else {
switch *cf.curve {
case "25519", "X25519", "Curve25519", "CURVE25519":
pub, rawPriv = x25519Keypair()
curve = cert.Curve_CURVE25519
case "P256":
pub, rawPriv = p256Keypair()
curve = cert.Curve_P256
default:
return fmt.Errorf("invalid curve: %s", *cf.curve)
}
} }
if isP11 { err = ioutil.WriteFile(*cf.outPubPath, cert.MarshalX25519PublicKey(pub), 0600)
p11Client, err := pkclient.FromUrl(*cf.p11url)
if err != nil {
return fmt.Errorf("error while creating PKCS#11 client: %w", err)
}
defer func(client *pkclient.PKClient) {
_ = client.Close()
}(p11Client)
pub, err = p11Client.GetPubKey()
if err != nil {
return fmt.Errorf("error while getting public key: %w", err)
}
} else {
err = os.WriteFile(*cf.outKeyPath, cert.MarshalPrivateKeyToPEM(curve, rawPriv), 0600)
if err != nil {
return fmt.Errorf("error while writing out-key: %s", err)
}
}
err = os.WriteFile(*cf.outPubPath, cert.MarshalPublicKeyToPEM(curve, pub), 0600)
if err != nil { if err != nil {
return fmt.Errorf("error while writing out-pub: %s", err) return fmt.Errorf("error while writing out-pub: %s", err)
} }
@@ -101,7 +59,7 @@ func keygenSummary() string {
func keygenHelp(out io.Writer) { func keygenHelp(out io.Writer) {
cf := newKeygenFlags() cf := newKeygenFlags()
_, _ = out.Write([]byte("Usage of " + os.Args[0] + " " + keygenSummary() + "\n")) out.Write([]byte("Usage of " + os.Args[0] + " " + keygenSummary() + "\n"))
cf.set.SetOutput(out) cf.set.SetOutput(out)
cf.set.PrintDefaults() cf.set.PrintDefaults()
} }
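
keygen on the master side selects a curve and calls x25519Keypair or p256Keypair, which are defined elsewhere in the tree and not shown in this diff. For orientation only, a generic sketch of one way to produce an X25519 pair of the expected 32-byte lengths with x/crypto/curve25519 — not nebula's actual helper:

package main

import (
	"crypto/rand"
	"fmt"

	"golang.org/x/crypto/curve25519"
)

// newX25519Keypair draws 32 random bytes as the private key and derives the
// public key by scalar multiplication with the curve base point.
func newX25519Keypair() (pub, priv []byte, err error) {
	priv = make([]byte, 32)
	if _, err = rand.Read(priv); err != nil {
		return nil, nil, err
	}
	pub, err = curve25519.X25519(priv, curve25519.Basepoint)
	return pub, priv, err
}

func main() {
	pub, priv, _ := newX25519Keypair()
	fmt.Println(len(pub), len(priv)) // 32 32
}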

View File

@@ -2,14 +2,16 @@ package main
import ( import (
"bytes" "bytes"
"io/ioutil"
"os" "os"
"testing" "testing"
"github.com/slackhq/nebula/cert" "github.com/slackhq/nebula/cert"
"github.com/stretchr/testify/assert" "github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
) )
//TODO: test file permissions
func Test_keygenSummary(t *testing.T) { func Test_keygenSummary(t *testing.T) {
assert.Equal(t, "keygen <flags>: create a public/private key pair. the public key can be passed to `nebula-cert sign`", keygenSummary()) assert.Equal(t, "keygen <flags>: create a public/private key pair. the public key can be passed to `nebula-cert sign`", keygenSummary())
} }
@@ -20,13 +22,10 @@ func Test_keygenHelp(t *testing.T) {
assert.Equal( assert.Equal(
t, t,
"Usage of "+os.Args[0]+" keygen <flags>: create a public/private key pair. the public key can be passed to `nebula-cert sign`\n"+ "Usage of "+os.Args[0]+" keygen <flags>: create a public/private key pair. the public key can be passed to `nebula-cert sign`\n"+
" -curve string\n"+
" \tECDH Curve (25519, P256) (default \"25519\")\n"+
" -out-key string\n"+ " -out-key string\n"+
" \tRequired: path to write the private key to\n"+ " \tRequired: path to write the private key to\n"+
" -out-pub string\n"+ " -out-pub string\n"+
" \tRequired: path to write the public key to\n"+ " \tRequired: path to write the public key to\n",
optionalPkcs11String(" -pkcs11 string\n \tOptional: PKCS#11 URI to an existing private key\n"),
ob.String(), ob.String(),
) )
} }
@@ -37,59 +36,57 @@ func Test_keygen(t *testing.T) {
// required args // required args
assertHelpError(t, keygen([]string{"-out-pub", "nope"}, ob, eb), "-out-key is required") assertHelpError(t, keygen([]string{"-out-pub", "nope"}, ob, eb), "-out-key is required")
assert.Empty(t, ob.String()) assert.Equal(t, "", ob.String())
assert.Empty(t, eb.String()) assert.Equal(t, "", eb.String())
assertHelpError(t, keygen([]string{"-out-key", "nope"}, ob, eb), "-out-pub is required") assertHelpError(t, keygen([]string{"-out-key", "nope"}, ob, eb), "-out-pub is required")
assert.Empty(t, ob.String()) assert.Equal(t, "", ob.String())
assert.Empty(t, eb.String()) assert.Equal(t, "", eb.String())
// failed key write // failed key write
ob.Reset() ob.Reset()
eb.Reset() eb.Reset()
args := []string{"-out-pub", "/do/not/write/pleasepub", "-out-key", "/do/not/write/pleasekey"} args := []string{"-out-pub", "/do/not/write/pleasepub", "-out-key", "/do/not/write/pleasekey"}
require.EqualError(t, keygen(args, ob, eb), "error while writing out-key: open /do/not/write/pleasekey: "+NoSuchDirError) assert.EqualError(t, keygen(args, ob, eb), "error while writing out-key: open /do/not/write/pleasekey: "+NoSuchDirError)
assert.Empty(t, ob.String()) assert.Equal(t, "", ob.String())
assert.Empty(t, eb.String()) assert.Equal(t, "", eb.String())
// create temp key file // create temp key file
keyF, err := os.CreateTemp("", "test.key") keyF, err := ioutil.TempFile("", "test.key")
require.NoError(t, err) assert.Nil(t, err)
defer os.Remove(keyF.Name()) defer os.Remove(keyF.Name())
// failed pub write // failed pub write
ob.Reset() ob.Reset()
eb.Reset() eb.Reset()
args = []string{"-out-pub", "/do/not/write/pleasepub", "-out-key", keyF.Name()} args = []string{"-out-pub", "/do/not/write/pleasepub", "-out-key", keyF.Name()}
require.EqualError(t, keygen(args, ob, eb), "error while writing out-pub: open /do/not/write/pleasepub: "+NoSuchDirError) assert.EqualError(t, keygen(args, ob, eb), "error while writing out-pub: open /do/not/write/pleasepub: "+NoSuchDirError)
assert.Empty(t, ob.String()) assert.Equal(t, "", ob.String())
assert.Empty(t, eb.String()) assert.Equal(t, "", eb.String())
// create temp pub file // create temp pub file
pubF, err := os.CreateTemp("", "test.pub") pubF, err := ioutil.TempFile("", "test.pub")
require.NoError(t, err) assert.Nil(t, err)
defer os.Remove(pubF.Name()) defer os.Remove(pubF.Name())
// test proper keygen // test proper keygen
ob.Reset() ob.Reset()
eb.Reset() eb.Reset()
args = []string{"-out-pub", pubF.Name(), "-out-key", keyF.Name()} args = []string{"-out-pub", pubF.Name(), "-out-key", keyF.Name()}
require.NoError(t, keygen(args, ob, eb)) assert.Nil(t, keygen(args, ob, eb))
assert.Empty(t, ob.String()) assert.Equal(t, "", ob.String())
assert.Empty(t, eb.String()) assert.Equal(t, "", eb.String())
// read cert and key files // read cert and key files
rb, _ := os.ReadFile(keyF.Name()) rb, _ := ioutil.ReadFile(keyF.Name())
lKey, b, curve, err := cert.UnmarshalPrivateKeyFromPEM(rb) lKey, b, err := cert.UnmarshalX25519PrivateKey(rb)
assert.Equal(t, cert.Curve_CURVE25519, curve) assert.Len(t, b, 0)
assert.Empty(t, b) assert.Nil(t, err)
require.NoError(t, err)
assert.Len(t, lKey, 32) assert.Len(t, lKey, 32)
rb, _ = os.ReadFile(pubF.Name()) rb, _ = ioutil.ReadFile(pubF.Name())
lPub, b, curve, err := cert.UnmarshalPublicKeyFromPEM(rb) lPub, b, err := cert.UnmarshalX25519PublicKey(rb)
assert.Equal(t, cert.Curve_CURVE25519, curve) assert.Len(t, b, 0)
assert.Empty(t, b) assert.Nil(t, err)
require.NoError(t, err)
assert.Len(t, lPub, 32) assert.Len(t, lPub, 32)
} }

View File

@@ -17,7 +17,7 @@ func (he *helpError) Error() string {
return he.s return he.s
} }
func newHelpErrorf(s string, v ...any) error { func newHelpErrorf(s string, v ...interface{}) error {
return &helpError{s: fmt.Sprintf(s, v...)} return &helpError{s: fmt.Sprintf(s, v...)}
} }
@@ -62,11 +62,11 @@ func main() {
switch args[0] { switch args[0] {
case "ca": case "ca":
err = ca(args[1:], os.Stdout, os.Stderr, StdinPasswordReader{}) err = ca(args[1:], os.Stdout, os.Stderr)
case "keygen": case "keygen":
err = keygen(args[1:], os.Stdout, os.Stderr) err = keygen(args[1:], os.Stdout, os.Stderr)
case "sign": case "sign":
err = signCert(args[1:], os.Stdout, os.Stderr, StdinPasswordReader{}) err = signCert(args[1:], os.Stdout, os.Stderr)
case "print": case "print":
err = printCert(args[1:], os.Stdout, os.Stderr) err = printCert(args[1:], os.Stdout, os.Stderr)
case "verify": case "verify":
@@ -127,8 +127,6 @@ func help(err string, out io.Writer) {
fmt.Fprintln(out, " "+signSummary()) fmt.Fprintln(out, " "+signSummary())
fmt.Fprintln(out, " "+printSummary()) fmt.Fprintln(out, " "+printSummary())
fmt.Fprintln(out, " "+verifySummary()) fmt.Fprintln(out, " "+verifySummary())
fmt.Fprintln(out, "")
fmt.Fprintf(out, " To see usage for a given mode, use %s <mode> -h\n", os.Args[0])
} }
func mustFlagString(name string, val *string) error { func mustFlagString(name string, val *string) error {

View File

@@ -3,15 +3,15 @@ package main
import ( import (
"bytes" "bytes"
"errors" "errors"
"fmt"
"io" "io"
"os" "os"
"testing" "testing"
"github.com/stretchr/testify/assert" "github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
) )
//TODO: all flag parsing continueOnError will print to stderr on its own currently
func Test_help(t *testing.T) { func Test_help(t *testing.T) {
expected := "Usage of " + os.Args[0] + " <global flags> <mode>:\n" + expected := "Usage of " + os.Args[0] + " <global flags> <mode>:\n" +
" Global flags:\n" + " Global flags:\n" +
@@ -22,9 +22,7 @@ func Test_help(t *testing.T) {
" " + keygenSummary() + "\n" + " " + keygenSummary() + "\n" +
" " + signSummary() + "\n" + " " + signSummary() + "\n" +
" " + printSummary() + "\n" + " " + printSummary() + "\n" +
" " + verifySummary() + "\n" + " " + verifySummary() + "\n"
"\n" +
" To see usage for a given mode, use " + os.Args[0] + " <mode> -h\n"
ob := &bytes.Buffer{} ob := &bytes.Buffer{}
@@ -77,16 +75,8 @@ func assertHelpError(t *testing.T, err error, msg string) {
case *helpError: case *helpError:
// good // good
default: default:
t.Fatal(fmt.Sprintf("err was not a helpError: %q, expected %q", err, msg)) t.Fatal("err was not a helpError")
} }
require.EqualError(t, err, msg) assert.EqualError(t, err, msg)
}
func optionalPkcs11String(msg string) string {
if p11Supported() {
return msg
} else {
return ""
}
} }

View File

@@ -1,15 +0,0 @@
//go:build cgo && pkcs11
package main
import (
"flag"
)
func p11Supported() bool {
return true
}
func p11Flag(set *flag.FlagSet) *string {
return set.String("pkcs11", "", "Optional: PKCS#11 URI to an existing private key")
}

View File

@@ -1,16 +0,0 @@
//go:build !cgo || !pkcs11
package main
import (
"flag"
)
func p11Supported() bool {
return false
}
func p11Flag(set *flag.FlagSet) *string {
var ret = ""
return &ret
}

View File

@@ -1,28 +0,0 @@
package main
import (
"errors"
"fmt"
"os"
"golang.org/x/term"
)
var ErrNoTerminal = errors.New("cannot read password from nonexistent terminal")
type PasswordReader interface {
ReadPassword() ([]byte, error)
}
type StdinPasswordReader struct{}
func (pr StdinPasswordReader) ReadPassword() ([]byte, error) {
if !term.IsTerminal(int(os.Stdin.Fd())) {
return nil, ErrNoTerminal
}
password, err := term.ReadPassword(int(os.Stdin.Fd()))
fmt.Println()
return password, err
}

View File

@@ -1,10 +0,0 @@
package main
type StubPasswordReader struct {
password []byte
err error
}
func (pr *StubPasswordReader) ReadPassword() ([]byte, error) {
return pr.password, pr.err
}

View File

@@ -5,6 +5,7 @@ import (
"flag" "flag"
"fmt" "fmt"
"io" "io"
"io/ioutil"
"os" "os"
"strings" "strings"
@@ -40,32 +41,33 @@ func printCert(args []string, out io.Writer, errOut io.Writer) error {
return err return err
} }
rawCert, err := os.ReadFile(*pf.path) rawCert, err := ioutil.ReadFile(*pf.path)
if err != nil { if err != nil {
return fmt.Errorf("unable to read cert; %s", err) return fmt.Errorf("unable to read cert; %s", err)
} }
var c cert.Certificate var c *cert.NebulaCertificate
var qrBytes []byte var qrBytes []byte
part := 0 part := 0
var jsonCerts []cert.Certificate
for { for {
c, rawCert, err = cert.UnmarshalCertificateFromPEM(rawCert) c, rawCert, err = cert.UnmarshalNebulaCertificateFromPEM(rawCert)
if err != nil { if err != nil {
return fmt.Errorf("error while unmarshaling cert: %s", err) return fmt.Errorf("error while unmarshaling cert: %s", err)
} }
if *pf.json { if *pf.json {
jsonCerts = append(jsonCerts, c) b, _ := json.Marshal(c)
out.Write(b)
out.Write([]byte("\n"))
} else { } else {
_, _ = out.Write([]byte(c.String())) out.Write([]byte(c.String()))
_, _ = out.Write([]byte("\n")) out.Write([]byte("\n"))
} }
if *pf.outQRPath != "" { if *pf.outQRPath != "" {
b, err := c.MarshalPEM() b, err := c.MarshalToPEM()
if err != nil { if err != nil {
return fmt.Errorf("error while marshalling cert to PEM: %s", err) return fmt.Errorf("error while marshalling cert to PEM: %s", err)
} }
@@ -79,19 +81,13 @@ func printCert(args []string, out io.Writer, errOut io.Writer) error {
part++ part++
} }
if *pf.json {
b, _ := json.Marshal(jsonCerts)
_, _ = out.Write(b)
_, _ = out.Write([]byte("\n"))
}
if *pf.outQRPath != "" { if *pf.outQRPath != "" {
b, err := qrcode.Encode(string(qrBytes), qrcode.Medium, -5) b, err := qrcode.Encode(string(qrBytes), qrcode.Medium, -5)
if err != nil { if err != nil {
return fmt.Errorf("error while generating qr code: %s", err) return fmt.Errorf("error while generating qr code: %s", err)
} }
err = os.WriteFile(*pf.outQRPath, b, 0600) err = ioutil.WriteFile(*pf.outQRPath, b, 0600)
if err != nil { if err != nil {
return fmt.Errorf("error while writing out-qr: %s", err) return fmt.Errorf("error while writing out-qr: %s", err)
} }

View File

@@ -2,17 +2,13 @@ package main
import ( import (
"bytes" "bytes"
"crypto/ed25519" "io/ioutil"
"crypto/rand"
"encoding/hex"
"net/netip"
"os" "os"
"testing" "testing"
"time" "time"
"github.com/slackhq/nebula/cert" "github.com/slackhq/nebula/cert"
"github.com/stretchr/testify/assert" "github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
) )
func Test_printSummary(t *testing.T) { func Test_printSummary(t *testing.T) {
@@ -43,203 +39,84 @@ func Test_printCert(t *testing.T) {
// no path // no path
err := printCert([]string{}, ob, eb) err := printCert([]string{}, ob, eb)
assert.Empty(t, ob.String()) assert.Equal(t, "", ob.String())
assert.Empty(t, eb.String()) assert.Equal(t, "", eb.String())
assertHelpError(t, err, "-path is required") assertHelpError(t, err, "-path is required")
// no cert at path // no cert at path
ob.Reset() ob.Reset()
eb.Reset() eb.Reset()
err = printCert([]string{"-path", "does_not_exist"}, ob, eb) err = printCert([]string{"-path", "does_not_exist"}, ob, eb)
assert.Empty(t, ob.String()) assert.Equal(t, "", ob.String())
assert.Empty(t, eb.String()) assert.Equal(t, "", eb.String())
require.EqualError(t, err, "unable to read cert; open does_not_exist: "+NoSuchFileError) assert.EqualError(t, err, "unable to read cert; open does_not_exist: "+NoSuchFileError)
// invalid cert at path // invalid cert at path
ob.Reset() ob.Reset()
eb.Reset() eb.Reset()
tf, err := os.CreateTemp("", "print-cert") tf, err := ioutil.TempFile("", "print-cert")
require.NoError(t, err) assert.Nil(t, err)
defer os.Remove(tf.Name()) defer os.Remove(tf.Name())
tf.WriteString("-----BEGIN NOPE-----") tf.WriteString("-----BEGIN NOPE-----")
err = printCert([]string{"-path", tf.Name()}, ob, eb) err = printCert([]string{"-path", tf.Name()}, ob, eb)
assert.Empty(t, ob.String()) assert.Equal(t, "", ob.String())
assert.Empty(t, eb.String()) assert.Equal(t, "", eb.String())
require.EqualError(t, err, "error while unmarshaling cert: input did not contain a valid PEM encoded block") assert.EqualError(t, err, "error while unmarshaling cert: input did not contain a valid PEM encoded block")
// test multiple certs // test multiple certs
ob.Reset() ob.Reset()
eb.Reset() eb.Reset()
tf.Truncate(0) tf.Truncate(0)
tf.Seek(0, 0) tf.Seek(0, 0)
ca, caKey := NewTestCaCert("test ca", nil, nil, time.Time{}, time.Time{}, nil, nil, nil) c := cert.NebulaCertificate{
c, _ := NewTestCert(ca, caKey, "test", time.Time{}, time.Time{}, []netip.Prefix{netip.MustParsePrefix("10.0.0.123/8")}, nil, []string{"hi"}) Details: cert.NebulaCertificateDetails{
Name: "test",
Groups: []string{"hi"},
PublicKey: []byte{1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 1, 2},
},
Signature: []byte{1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 1, 2},
}
p, _ := c.MarshalPEM() p, _ := c.MarshalToPEM()
tf.Write(p) tf.Write(p)
tf.Write(p) tf.Write(p)
tf.Write(p) tf.Write(p)
err = printCert([]string{"-path", tf.Name()}, ob, eb) err = printCert([]string{"-path", tf.Name()}, ob, eb)
fp, _ := c.Fingerprint() assert.Nil(t, err)
pk := hex.EncodeToString(c.PublicKey())
sig := hex.EncodeToString(c.Signature())
require.NoError(t, err)
assert.Equal( assert.Equal(
t, t,
//"NebulaCertificate {\n\tDetails {\n\t\tName: test\n\t\tIps: []\n\t\tSubnets: []\n\t\tGroups: [\n\t\t\t\"hi\"\n\t\t]\n\t\tNot before: 0001-01-01 00:00:00 +0000 UTC\n\t\tNot After: 0001-01-01 00:00:00 +0000 UTC\n\t\tIs CA: false\n\t\tIssuer: "+c.Issuer()+"\n\t\tPublic key: "+pk+"\n\t\tCurve: CURVE25519\n\t}\n\tFingerprint: "+fp+"\n\tSignature: "+sig+"\n}\nNebulaCertificate {\n\tDetails {\n\t\tName: test\n\t\tIps: []\n\t\tSubnets: []\n\t\tGroups: [\n\t\t\t\"hi\"\n\t\t]\n\t\tNot before: 0001-01-01 00:00:00 +0000 UTC\n\t\tNot After: 0001-01-01 00:00:00 +0000 UTC\n\t\tIs CA: false\n\t\tIssuer: "+c.Issuer()+"\n\t\tPublic key: "+pk+"\n\t\tCurve: CURVE25519\n\t}\n\tFingerprint: "+fp+"\n\tSignature: "+sig+"\n}\nNebulaCertificate {\n\tDetails {\n\t\tName: test\n\t\tIps: []\n\t\tSubnets: []\n\t\tGroups: [\n\t\t\t\"hi\"\n\t\t]\n\t\tNot before: 0001-01-01 00:00:00 +0000 UTC\n\t\tNot After: 0001-01-01 00:00:00 +0000 UTC\n\t\tIs CA: false\n\t\tIssuer: "+c.Issuer()+"\n\t\tPublic key: "+pk+"\n\t\tCurve: CURVE25519\n\t}\n\tFingerprint: "+fp+"\n\tSignature: "+sig+"\n}\n", "NebulaCertificate {\n\tDetails {\n\t\tName: test\n\t\tIps: []\n\t\tSubnets: []\n\t\tGroups: [\n\t\t\t\"hi\"\n\t\t]\n\t\tNot before: 0001-01-01 00:00:00 +0000 UTC\n\t\tNot After: 0001-01-01 00:00:00 +0000 UTC\n\t\tIs CA: false\n\t\tIssuer: \n\t\tPublic key: 0102030405060708090001020304050607080900010203040506070809000102\n\t}\n\tFingerprint: cc3492c0e9c48f17547f5987ea807462ebb3451e622590a10bb3763c344c82bd\n\tSignature: 0102030405060708090001020304050607080900010203040506070809000102\n}\nNebulaCertificate {\n\tDetails {\n\t\tName: test\n\t\tIps: []\n\t\tSubnets: []\n\t\tGroups: [\n\t\t\t\"hi\"\n\t\t]\n\t\tNot before: 0001-01-01 00:00:00 +0000 UTC\n\t\tNot After: 0001-01-01 00:00:00 +0000 UTC\n\t\tIs CA: false\n\t\tIssuer: \n\t\tPublic key: 0102030405060708090001020304050607080900010203040506070809000102\n\t}\n\tFingerprint: cc3492c0e9c48f17547f5987ea807462ebb3451e622590a10bb3763c344c82bd\n\tSignature: 0102030405060708090001020304050607080900010203040506070809000102\n}\nNebulaCertificate {\n\tDetails {\n\t\tName: test\n\t\tIps: []\n\t\tSubnets: []\n\t\tGroups: [\n\t\t\t\"hi\"\n\t\t]\n\t\tNot before: 0001-01-01 00:00:00 +0000 UTC\n\t\tNot After: 0001-01-01 00:00:00 +0000 UTC\n\t\tIs CA: false\n\t\tIssuer: \n\t\tPublic key: 0102030405060708090001020304050607080900010203040506070809000102\n\t}\n\tFingerprint: cc3492c0e9c48f17547f5987ea807462ebb3451e622590a10bb3763c344c82bd\n\tSignature: 0102030405060708090001020304050607080900010203040506070809000102\n}\n",
`{
"details": {
"curve": "CURVE25519",
"groups": [
"hi"
],
"isCa": false,
"issuer": "`+c.Issuer()+`",
"name": "test",
"networks": [
"10.0.0.123/8"
],
"notAfter": "0001-01-01T00:00:00Z",
"notBefore": "0001-01-01T00:00:00Z",
"publicKey": "`+pk+`",
"unsafeNetworks": []
},
"fingerprint": "`+fp+`",
"signature": "`+sig+`",
"version": 1
}
{
"details": {
"curve": "CURVE25519",
"groups": [
"hi"
],
"isCa": false,
"issuer": "`+c.Issuer()+`",
"name": "test",
"networks": [
"10.0.0.123/8"
],
"notAfter": "0001-01-01T00:00:00Z",
"notBefore": "0001-01-01T00:00:00Z",
"publicKey": "`+pk+`",
"unsafeNetworks": []
},
"fingerprint": "`+fp+`",
"signature": "`+sig+`",
"version": 1
}
{
"details": {
"curve": "CURVE25519",
"groups": [
"hi"
],
"isCa": false,
"issuer": "`+c.Issuer()+`",
"name": "test",
"networks": [
"10.0.0.123/8"
],
"notAfter": "0001-01-01T00:00:00Z",
"notBefore": "0001-01-01T00:00:00Z",
"publicKey": "`+pk+`",
"unsafeNetworks": []
},
"fingerprint": "`+fp+`",
"signature": "`+sig+`",
"version": 1
}
`,
ob.String(), ob.String(),
) )
assert.Empty(t, eb.String()) assert.Equal(t, "", eb.String())
// test json // test json
ob.Reset() ob.Reset()
eb.Reset() eb.Reset()
tf.Truncate(0) tf.Truncate(0)
tf.Seek(0, 0) tf.Seek(0, 0)
c = cert.NebulaCertificate{
Details: cert.NebulaCertificateDetails{
Name: "test",
Groups: []string{"hi"},
PublicKey: []byte{1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 1, 2},
},
Signature: []byte{1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 1, 2},
}
p, _ = c.MarshalToPEM()
tf.Write(p) tf.Write(p)
tf.Write(p) tf.Write(p)
tf.Write(p) tf.Write(p)
err = printCert([]string{"-json", "-path", tf.Name()}, ob, eb) err = printCert([]string{"-json", "-path", tf.Name()}, ob, eb)
fp, _ = c.Fingerprint() assert.Nil(t, err)
pk = hex.EncodeToString(c.PublicKey())
sig = hex.EncodeToString(c.Signature())
require.NoError(t, err)
assert.Equal( assert.Equal(
t, t,
`[{"details":{"curve":"CURVE25519","groups":["hi"],"isCa":false,"issuer":"`+c.Issuer()+`","name":"test","networks":["10.0.0.123/8"],"notAfter":"0001-01-01T00:00:00Z","notBefore":"0001-01-01T00:00:00Z","publicKey":"`+pk+`","unsafeNetworks":[]},"fingerprint":"`+fp+`","signature":"`+sig+`","version":1},{"details":{"curve":"CURVE25519","groups":["hi"],"isCa":false,"issuer":"`+c.Issuer()+`","name":"test","networks":["10.0.0.123/8"],"notAfter":"0001-01-01T00:00:00Z","notBefore":"0001-01-01T00:00:00Z","publicKey":"`+pk+`","unsafeNetworks":[]},"fingerprint":"`+fp+`","signature":"`+sig+`","version":1},{"details":{"curve":"CURVE25519","groups":["hi"],"isCa":false,"issuer":"`+c.Issuer()+`","name":"test","networks":["10.0.0.123/8"],"notAfter":"0001-01-01T00:00:00Z","notBefore":"0001-01-01T00:00:00Z","publicKey":"`+pk+`","unsafeNetworks":[]},"fingerprint":"`+fp+`","signature":"`+sig+`","version":1}] "{\"details\":{\"groups\":[\"hi\"],\"ips\":[],\"isCa\":false,\"issuer\":\"\",\"name\":\"test\",\"notAfter\":\"0001-01-01T00:00:00Z\",\"notBefore\":\"0001-01-01T00:00:00Z\",\"publicKey\":\"0102030405060708090001020304050607080900010203040506070809000102\",\"subnets\":[]},\"fingerprint\":\"cc3492c0e9c48f17547f5987ea807462ebb3451e622590a10bb3763c344c82bd\",\"signature\":\"0102030405060708090001020304050607080900010203040506070809000102\"}\n{\"details\":{\"groups\":[\"hi\"],\"ips\":[],\"isCa\":false,\"issuer\":\"\",\"name\":\"test\",\"notAfter\":\"0001-01-01T00:00:00Z\",\"notBefore\":\"0001-01-01T00:00:00Z\",\"publicKey\":\"0102030405060708090001020304050607080900010203040506070809000102\",\"subnets\":[]},\"fingerprint\":\"cc3492c0e9c48f17547f5987ea807462ebb3451e622590a10bb3763c344c82bd\",\"signature\":\"0102030405060708090001020304050607080900010203040506070809000102\"}\n{\"details\":{\"groups\":[\"hi\"],\"ips\":[],\"isCa\":false,\"issuer\":\"\",\"name\":\"test\",\"notAfter\":\"0001-01-01T00:00:00Z\",\"notBefore\":\"0001-01-01T00:00:00Z\",\"publicKey\":\"0102030405060708090001020304050607080900010203040506070809000102\",\"subnets\":[]},\"fingerprint\":\"cc3492c0e9c48f17547f5987ea807462ebb3451e622590a10bb3763c344c82bd\",\"signature\":\"0102030405060708090001020304050607080900010203040506070809000102\"}\n",
`,
ob.String(), ob.String(),
) )
assert.Empty(t, eb.String()) assert.Equal(t, "", eb.String())
}
// NewTestCaCert will generate a CA cert
func NewTestCaCert(name string, pubKey, privKey []byte, before, after time.Time, networks, unsafeNetworks []netip.Prefix, groups []string) (cert.Certificate, []byte) {
var err error
if pubKey == nil || privKey == nil {
pubKey, privKey, err = ed25519.GenerateKey(rand.Reader)
if err != nil {
panic(err)
}
}
t := &cert.TBSCertificate{
Version: cert.Version1,
Name: name,
NotBefore: time.Unix(before.Unix(), 0),
NotAfter: time.Unix(after.Unix(), 0),
PublicKey: pubKey,
Networks: networks,
UnsafeNetworks: unsafeNetworks,
Groups: groups,
IsCA: true,
}
c, err := t.Sign(nil, cert.Curve_CURVE25519, privKey)
if err != nil {
panic(err)
}
return c, privKey
}
func NewTestCert(ca cert.Certificate, signerKey []byte, name string, before, after time.Time, networks, unsafeNetworks []netip.Prefix, groups []string) (cert.Certificate, []byte) {
if before.IsZero() {
before = ca.NotBefore()
}
if after.IsZero() {
after = ca.NotAfter()
}
if len(networks) == 0 {
networks = []netip.Prefix{netip.MustParsePrefix("10.0.0.123/8")}
}
pub, rawPriv := x25519Keypair()
nc := &cert.TBSCertificate{
Version: cert.Version1,
Name: name,
Networks: networks,
UnsafeNetworks: unsafeNetworks,
Groups: groups,
NotBefore: time.Unix(before.Unix(), 0),
NotAfter: time.Unix(after.Unix(), 0),
PublicKey: pub,
IsCA: false,
}
c, err := nc.Sign(ca, ca.Curve(), signerKey)
if err != nil {
panic(err)
}
return c, rawPriv
} }
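For orientation, here is a minimal sketch of how the two test helpers above might be exercised. It is not part of the change; it assumes the helpers live in the same test package and that the certificate API exposes Name(), Networks() and Fingerprint() as shown elsewhere in this diff.

```go
// Sketch only: exercises NewTestCaCert/NewTestCert under the assumptions above.
func TestNewTestCertSketch(t *testing.T) {
	// nil pub/priv asks NewTestCaCert to generate an ed25519 key pair itself.
	ca, caKey := NewTestCaCert("ca", nil, nil, time.Now(), time.Now().Add(time.Minute*5), nil, nil, nil)

	// Zero times and an empty network list fall back to the CA validity
	// window and the default 10.0.0.123/8 network.
	crt, _ := NewTestCert(ca, caKey, "host1", time.Time{}, time.Time{}, nil, nil, []string{"hi"})

	fp, err := crt.Fingerprint()
	require.NoError(t, err)
	assert.Equal(t, "host1", crt.Name())
	assert.NotEmpty(t, fp)
}
```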


@@ -1,80 +1,63 @@
package main package main
import ( import (
"crypto/ecdh"
"crypto/rand" "crypto/rand"
"errors"
"flag" "flag"
"fmt" "fmt"
"io" "io"
"net/netip" "io/ioutil"
"net"
"os" "os"
"strings" "strings"
"time" "time"
"github.com/skip2/go-qrcode" "github.com/skip2/go-qrcode"
"github.com/slackhq/nebula/cert" "github.com/slackhq/nebula/cert"
"github.com/slackhq/nebula/pkclient"
"golang.org/x/crypto/curve25519" "golang.org/x/crypto/curve25519"
) )
type signFlags struct { type signFlags struct {
set *flag.FlagSet set *flag.FlagSet
version *uint caKeyPath *string
caKeyPath *string caCertPath *string
caCertPath *string name *string
name *string ip *string
networks *string duration *time.Duration
unsafeNetworks *string inPubPath *string
duration *time.Duration outKeyPath *string
inPubPath *string outCertPath *string
outKeyPath *string outQRPath *string
outCertPath *string groups *string
outQRPath *string subnets *string
groups *string
p11url *string
// Deprecated options
ip *string
subnets *string
} }
func newSignFlags() *signFlags { func newSignFlags() *signFlags {
sf := signFlags{set: flag.NewFlagSet("sign", flag.ContinueOnError)} sf := signFlags{set: flag.NewFlagSet("sign", flag.ContinueOnError)}
sf.set.Usage = func() {} sf.set.Usage = func() {}
sf.version = sf.set.Uint("version", 0, "Optional: version of the certificate format to use, the default is to create both v1 and v2 certificates.")
sf.caKeyPath = sf.set.String("ca-key", "ca.key", "Optional: path to the signing CA key") sf.caKeyPath = sf.set.String("ca-key", "ca.key", "Optional: path to the signing CA key")
sf.caCertPath = sf.set.String("ca-crt", "ca.crt", "Optional: path to the signing CA cert") sf.caCertPath = sf.set.String("ca-crt", "ca.crt", "Optional: path to the signing CA cert")
sf.name = sf.set.String("name", "", "Required: name of the cert, usually a hostname") sf.name = sf.set.String("name", "", "Required: name of the cert, usually a hostname")
sf.networks = sf.set.String("networks", "", "Required: comma separated list of ip address and network in CIDR notation to assign to this cert") sf.ip = sf.set.String("ip", "", "Required: ip and network in CIDR notation to assign the cert")
sf.unsafeNetworks = sf.set.String("unsafe-networks", "", "Optional: comma separated list of ip address and network in CIDR notation. Unsafe networks this cert can route for")
sf.duration = sf.set.Duration("duration", 0, "Optional: how long the cert should be valid for. The default is 1 second before the signing cert expires. Valid time units are seconds: \"s\", minutes: \"m\", hours: \"h\"") sf.duration = sf.set.Duration("duration", 0, "Optional: how long the cert should be valid for. The default is 1 second before the signing cert expires. Valid time units are seconds: \"s\", minutes: \"m\", hours: \"h\"")
sf.inPubPath = sf.set.String("in-pub", "", "Optional (if out-key not set): path to read a previously generated public key") sf.inPubPath = sf.set.String("in-pub", "", "Optional (if out-key not set): path to read a previously generated public key")
sf.outKeyPath = sf.set.String("out-key", "", "Optional (if in-pub not set): path to write the private key to") sf.outKeyPath = sf.set.String("out-key", "", "Optional (if in-pub not set): path to write the private key to")
sf.outCertPath = sf.set.String("out-crt", "", "Optional: path to write the certificate to") sf.outCertPath = sf.set.String("out-crt", "", "Optional: path to write the certificate to")
sf.outQRPath = sf.set.String("out-qr", "", "Optional: output a qr code image (png) of the certificate") sf.outQRPath = sf.set.String("out-qr", "", "Optional: output a qr code image (png) of the certificate")
sf.groups = sf.set.String("groups", "", "Optional: comma separated list of groups") sf.groups = sf.set.String("groups", "", "Optional: comma separated list of groups")
sf.p11url = p11Flag(sf.set) sf.subnets = sf.set.String("subnets", "", "Optional: comma separated list of subnet this cert can serve for")
sf.ip = sf.set.String("ip", "", "Deprecated, see -networks")
sf.subnets = sf.set.String("subnets", "", "Deprecated, see -unsafe-networks")
return &sf return &sf
} }
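As a quick illustration of the reworked flag surface, a hedged sketch of driving signCert the way Test_signCert does further down. The file paths are placeholders, StubPasswordReader is the test double defined in that test file, and -networks/-unsafe-networks replace the deprecated -ip/-subnets flags.

```go
// Sketch only: not part of the diff. Assumes os is imported and that the
// referenced PEM files exist on disk.
args := []string{
	"-ca-crt", "ca.crt", "-ca-key", "ca.key",
	"-name", "host1",
	"-networks", "192.168.100.1/24,fd12::1/64",
	"-unsafe-networks", "10.0.0.0/8",
	"-out-crt", "host1.crt", "-out-key", "host1.key",
	"-duration", "8760h",
}
err := signCert(args, os.Stdout, os.Stderr, &StubPasswordReader{})
```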
func signCert(args []string, out io.Writer, errOut io.Writer, pr PasswordReader) error { func signCert(args []string, out io.Writer, errOut io.Writer) error {
sf := newSignFlags() sf := newSignFlags()
err := sf.set.Parse(args) err := sf.set.Parse(args)
if err != nil { if err != nil {
return err return err
} }
isP11 := len(*sf.p11url) > 0 if err := mustFlagString("ca-key", sf.caKeyPath); err != nil {
return err
if !isP11 {
if err := mustFlagString("ca-key", sf.caKeyPath); err != nil {
return err
}
} }
if err := mustFlagString("ca-crt", sf.caCertPath); err != nil { if err := mustFlagString("ca-crt", sf.caCertPath); err != nil {
return err return err
@@ -82,83 +65,40 @@ func signCert(args []string, out io.Writer, errOut io.Writer, pr PasswordReader)
if err := mustFlagString("name", sf.name); err != nil { if err := mustFlagString("name", sf.name); err != nil {
return err return err
} }
if !isP11 && *sf.inPubPath != "" && *sf.outKeyPath != "" { if err := mustFlagString("ip", sf.ip); err != nil {
return err
}
if *sf.inPubPath != "" && *sf.outKeyPath != "" {
return newHelpErrorf("cannot set both -in-pub and -out-key") return newHelpErrorf("cannot set both -in-pub and -out-key")
} }
var v4Networks []netip.Prefix rawCAKey, err := ioutil.ReadFile(*sf.caKeyPath)
var v6Networks []netip.Prefix if err != nil {
if *sf.networks == "" && *sf.ip != "" { return fmt.Errorf("error while reading ca-key: %s", err)
// Pull up deprecated -ip flag if needed
*sf.networks = *sf.ip
} }
if len(*sf.networks) == 0 { caKey, _, err := cert.UnmarshalEd25519PrivateKey(rawCAKey)
return newHelpErrorf("-networks is required") if err != nil {
return fmt.Errorf("error while parsing ca-key: %s", err)
} }
version := cert.Version(*sf.version) rawCACert, err := ioutil.ReadFile(*sf.caCertPath)
if version != 0 && version != cert.Version1 && version != cert.Version2 {
return newHelpErrorf("-version must be either %v or %v", cert.Version1, cert.Version2)
}
var curve cert.Curve
var caKey []byte
if !isP11 {
var rawCAKey []byte
rawCAKey, err := os.ReadFile(*sf.caKeyPath)
if err != nil {
return fmt.Errorf("error while reading ca-key: %s", err)
}
// naively attempt to decode the private key as though it is not encrypted
caKey, _, curve, err = cert.UnmarshalSigningPrivateKeyFromPEM(rawCAKey)
if errors.Is(err, cert.ErrPrivateKeyEncrypted) {
// ask for a passphrase until we get one
var passphrase []byte
for i := 0; i < 5; i++ {
out.Write([]byte("Enter passphrase: "))
passphrase, err = pr.ReadPassword()
if errors.Is(err, ErrNoTerminal) {
return fmt.Errorf("ca-key is encrypted and must be decrypted interactively")
} else if err != nil {
return fmt.Errorf("error reading password: %s", err)
}
if len(passphrase) > 0 {
break
}
}
if len(passphrase) == 0 {
return fmt.Errorf("cannot open encrypted ca-key without passphrase")
}
curve, caKey, _, err = cert.DecryptAndUnmarshalSigningPrivateKey(passphrase, rawCAKey)
if err != nil {
return fmt.Errorf("error while parsing encrypted ca-key: %s", err)
}
} else if err != nil {
return fmt.Errorf("error while parsing ca-key: %s", err)
}
}
rawCACert, err := os.ReadFile(*sf.caCertPath)
if err != nil { if err != nil {
return fmt.Errorf("error while reading ca-crt: %s", err) return fmt.Errorf("error while reading ca-crt: %s", err)
} }
caCert, _, err := cert.UnmarshalCertificateFromPEM(rawCACert) caCert, _, err := cert.UnmarshalNebulaCertificateFromPEM(rawCACert)
if err != nil { if err != nil {
return fmt.Errorf("error while parsing ca-crt: %s", err) return fmt.Errorf("error while parsing ca-crt: %s", err)
} }
if !isP11 { if err := caCert.VerifyPrivateKey(caKey); err != nil {
if err := caCert.VerifyPrivateKey(curve, caKey); err != nil { return fmt.Errorf("refusing to sign, root certificate does not match private key")
return fmt.Errorf("refusing to sign, root certificate does not match private key") }
}
issuer, err := caCert.Sha256Sum()
if err != nil {
return fmt.Errorf("error while getting -ca-crt fingerprint: %s", err)
} }
if caCert.Expired(time.Now()) { if caCert.Expired(time.Now()) {
@@ -167,53 +107,16 @@ func signCert(args []string, out io.Writer, errOut io.Writer, pr PasswordReader)
// if no duration is given, expire one second before the root expires // if no duration is given, expire one second before the root expires
if *sf.duration <= 0 { if *sf.duration <= 0 {
*sf.duration = time.Until(caCert.NotAfter()) - time.Second*1 *sf.duration = time.Until(caCert.Details.NotAfter) - time.Second*1
} }
if *sf.networks != "" { ip, ipNet, err := net.ParseCIDR(*sf.ip)
for _, rs := range strings.Split(*sf.networks, ",") { if err != nil {
rs := strings.Trim(rs, " ") return newHelpErrorf("invalid ip definition: %s", err)
if rs != "" {
n, err := netip.ParsePrefix(rs)
if err != nil {
return newHelpErrorf("invalid -networks definition: %s", rs)
}
if n.Addr().Is4() {
v4Networks = append(v4Networks, n)
} else {
v6Networks = append(v6Networks, n)
}
}
}
} }
ipNet.IP = ip
var v4UnsafeNetworks []netip.Prefix groups := []string{}
var v6UnsafeNetworks []netip.Prefix
if *sf.unsafeNetworks == "" && *sf.subnets != "" {
// Pull up deprecated -subnets flag if needed
*sf.unsafeNetworks = *sf.subnets
}
if *sf.unsafeNetworks != "" {
for _, rs := range strings.Split(*sf.unsafeNetworks, ",") {
rs := strings.Trim(rs, " ")
if rs != "" {
n, err := netip.ParsePrefix(rs)
if err != nil {
return newHelpErrorf("invalid -unsafe-networks definition: %s", rs)
}
if n.Addr().Is4() {
v4UnsafeNetworks = append(v4UnsafeNetworks, n)
} else {
v6UnsafeNetworks = append(v6UnsafeNetworks, n)
}
}
}
}
var groups []string
if *sf.groups != "" { if *sf.groups != "" {
for _, rg := range strings.Split(*sf.groups, ",") { for _, rg := range strings.Split(*sf.groups, ",") {
g := strings.TrimSpace(rg) g := strings.TrimSpace(rg)
@@ -223,41 +126,50 @@ func signCert(args []string, out io.Writer, errOut io.Writer, pr PasswordReader)
} }
} }
var pub, rawPriv []byte subnets := []*net.IPNet{}
var p11Client *pkclient.PKClient if *sf.subnets != "" {
for _, rs := range strings.Split(*sf.subnets, ",") {
if isP11 { rs := strings.Trim(rs, " ")
curve = cert.Curve_P256 if rs != "" {
p11Client, err = pkclient.FromUrl(*sf.p11url) _, s, err := net.ParseCIDR(rs)
if err != nil { if err != nil {
return fmt.Errorf("error while creating PKCS#11 client: %w", err) return newHelpErrorf("invalid subnet definition: %s", err)
}
subnets = append(subnets, s)
}
} }
defer func(client *pkclient.PKClient) {
_ = client.Close()
}(p11Client)
} }
var pub, rawPriv []byte
if *sf.inPubPath != "" { if *sf.inPubPath != "" {
var pubCurve cert.Curve rawPub, err := ioutil.ReadFile(*sf.inPubPath)
rawPub, err := os.ReadFile(*sf.inPubPath)
if err != nil { if err != nil {
return fmt.Errorf("error while reading in-pub: %s", err) return fmt.Errorf("error while reading in-pub: %s", err)
} }
pub, _, err = cert.UnmarshalX25519PublicKey(rawPub)
pub, _, pubCurve, err = cert.UnmarshalPublicKeyFromPEM(rawPub)
if err != nil { if err != nil {
return fmt.Errorf("error while parsing in-pub: %s", err) return fmt.Errorf("error while parsing in-pub: %s", err)
} }
if pubCurve != curve {
return fmt.Errorf("curve of in-pub does not match ca")
}
} else if isP11 {
pub, err = p11Client.GetPubKey()
if err != nil {
return fmt.Errorf("error while getting public key with PKCS#11: %w", err)
}
} else { } else {
pub, rawPriv = newKeypair(curve) pub, rawPriv = x25519Keypair()
}
nc := cert.NebulaCertificate{
Details: cert.NebulaCertificateDetails{
Name: *sf.name,
Ips: []*net.IPNet{ipNet},
Groups: groups,
Subnets: subnets,
NotBefore: time.Now(),
NotAfter: time.Now().Add(*sf.duration),
PublicKey: pub,
IsCA: false,
Issuer: issuer,
},
}
if err := nc.CheckRootConstrains(caCert); err != nil {
return fmt.Errorf("refusing to sign, root certificate constraints violated: %s", err)
} }
if *sf.outKeyPath == "" { if *sf.outKeyPath == "" {
@@ -272,108 +184,28 @@ func signCert(args []string, out io.Writer, errOut io.Writer, pr PasswordReader)
return fmt.Errorf("refusing to overwrite existing cert: %s", *sf.outCertPath) return fmt.Errorf("refusing to overwrite existing cert: %s", *sf.outCertPath)
} }
var crts []cert.Certificate err = nc.Sign(caKey)
if err != nil {
notBefore := time.Now() return fmt.Errorf("error while signing: %s", err)
notAfter := notBefore.Add(*sf.duration)
if version == 0 || version == cert.Version1 {
// Make sure we at least have an ip
if len(v4Networks) != 1 {
return newHelpErrorf("invalid -networks definition: v1 certificates can only have a single ipv4 address")
}
if version == cert.Version1 {
// If we are asked to mint a v1 certificate only then we cant just ignore any v6 addresses
if len(v6Networks) > 0 {
return newHelpErrorf("invalid -networks definition: v1 certificates can only be ipv4")
}
if len(v6UnsafeNetworks) > 0 {
return newHelpErrorf("invalid -unsafe-networks definition: v1 certificates can only be ipv4")
}
}
t := &cert.TBSCertificate{
Version: cert.Version1,
Name: *sf.name,
Networks: []netip.Prefix{v4Networks[0]},
Groups: groups,
UnsafeNetworks: v4UnsafeNetworks,
NotBefore: notBefore,
NotAfter: notAfter,
PublicKey: pub,
IsCA: false,
Curve: curve,
}
var nc cert.Certificate
if p11Client == nil {
nc, err = t.Sign(caCert, curve, caKey)
if err != nil {
return fmt.Errorf("error while signing: %w", err)
}
} else {
nc, err = t.SignWith(caCert, curve, p11Client.SignASN1)
if err != nil {
return fmt.Errorf("error while signing with PKCS#11: %w", err)
}
}
crts = append(crts, nc)
} }
if version == 0 || version == cert.Version2 { if *sf.inPubPath == "" {
t := &cert.TBSCertificate{
Version: cert.Version2,
Name: *sf.name,
Networks: append(v4Networks, v6Networks...),
Groups: groups,
UnsafeNetworks: append(v4UnsafeNetworks, v6UnsafeNetworks...),
NotBefore: notBefore,
NotAfter: notAfter,
PublicKey: pub,
IsCA: false,
Curve: curve,
}
var nc cert.Certificate
if p11Client == nil {
nc, err = t.Sign(caCert, curve, caKey)
if err != nil {
return fmt.Errorf("error while signing: %w", err)
}
} else {
nc, err = t.SignWith(caCert, curve, p11Client.SignASN1)
if err != nil {
return fmt.Errorf("error while signing with PKCS#11: %w", err)
}
}
crts = append(crts, nc)
}
if !isP11 && *sf.inPubPath == "" {
if _, err := os.Stat(*sf.outKeyPath); err == nil { if _, err := os.Stat(*sf.outKeyPath); err == nil {
return fmt.Errorf("refusing to overwrite existing key: %s", *sf.outKeyPath) return fmt.Errorf("refusing to overwrite existing key: %s", *sf.outKeyPath)
} }
err = os.WriteFile(*sf.outKeyPath, cert.MarshalPrivateKeyToPEM(curve, rawPriv), 0600) err = ioutil.WriteFile(*sf.outKeyPath, cert.MarshalX25519PrivateKey(rawPriv), 0600)
if err != nil { if err != nil {
return fmt.Errorf("error while writing out-key: %s", err) return fmt.Errorf("error while writing out-key: %s", err)
} }
} }
var b []byte b, err := nc.MarshalToPEM()
for _, c := range crts { if err != nil {
sb, err := c.MarshalPEM() return fmt.Errorf("error while marshalling certificate: %s", err)
if err != nil {
return fmt.Errorf("error while marshalling certificate: %s", err)
}
b = append(b, sb...)
} }
err = os.WriteFile(*sf.outCertPath, b, 0600) err = ioutil.WriteFile(*sf.outCertPath, b, 0600)
if err != nil { if err != nil {
return fmt.Errorf("error while writing out-crt: %s", err) return fmt.Errorf("error while writing out-crt: %s", err)
} }
@@ -384,7 +216,7 @@ func signCert(args []string, out io.Writer, errOut io.Writer, pr PasswordReader)
return fmt.Errorf("error while generating qr code: %s", err) return fmt.Errorf("error while generating qr code: %s", err)
} }
err = os.WriteFile(*sf.outQRPath, b, 0600) err = ioutil.WriteFile(*sf.outQRPath, b, 0600)
if err != nil { if err != nil {
return fmt.Errorf("error while writing out-qr: %s", err) return fmt.Errorf("error while writing out-qr: %s", err)
} }
@@ -393,17 +225,6 @@ func signCert(args []string, out io.Writer, errOut io.Writer, pr PasswordReader)
return nil return nil
} }
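Since signCert can now decrypt a passphrase-protected CA key interactively, it may help to see how such a key is produced in the first place. A hedged sketch using the cert helpers this diff already exercises in its tests (NewArgon2Parameters, EncryptAndMarshalSigningPrivateKey); the KDF parameters and passphrase are illustrative only.

```go
// Sketch only: writes an encrypted ed25519 signing key that the passphrase
// prompt in signCert above can decrypt. Assumes caPriv holds the raw key and
// that os and the cert package are imported.
kdf := cert.NewArgon2Parameters(64*1024, 4, 3)
pem, err := cert.EncryptAndMarshalSigningPrivateKey(cert.Curve_CURVE25519, caPriv, []byte("example passphrase"), kdf)
if err != nil {
	panic(err)
}
if err := os.WriteFile("ca.key", pem, 0600); err != nil {
	panic(err)
}
```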
func newKeypair(curve cert.Curve) ([]byte, []byte) {
switch curve {
case cert.Curve_CURVE25519:
return x25519Keypair()
case cert.Curve_P256:
return p256Keypair()
default:
return nil, nil
}
}
func x25519Keypair() ([]byte, []byte) { func x25519Keypair() ([]byte, []byte) {
privkey := make([]byte, 32) privkey := make([]byte, 32)
if _, err := io.ReadFull(rand.Reader, privkey); err != nil { if _, err := io.ReadFull(rand.Reader, privkey); err != nil {
@@ -418,15 +239,6 @@ func x25519Keypair() ([]byte, []byte) {
return pubkey, privkey return pubkey, privkey
} }
func p256Keypair() ([]byte, []byte) {
privkey, err := ecdh.P256().GenerateKey(rand.Reader)
if err != nil {
panic(err)
}
pubkey := privkey.PublicKey()
return pubkey.Bytes(), privkey.Bytes()
}
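A short usage sketch of the curve-dispatching helper above; purely illustrative, and it assumes fmt is imported.

```go
// Sketch only: newKeypair returns (public, private) byte slices for the
// requested curve and nil slices for a curve it does not recognize.
pub, priv := newKeypair(cert.Curve_P256)
if pub == nil || priv == nil {
	panic("unsupported curve")
}
fmt.Printf("pub=%x priv=%x\n", pub, priv)
```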
func signSummary() string { func signSummary() string {
return "sign <flags>: create and sign a certificate" return "sign <flags>: create and sign a certificate"
} }


@@ -6,17 +6,18 @@ package main
import ( import (
"bytes" "bytes"
"crypto/rand" "crypto/rand"
"errors" "io/ioutil"
"os" "os"
"testing" "testing"
"time" "time"
"github.com/slackhq/nebula/cert" "github.com/slackhq/nebula/cert"
"github.com/stretchr/testify/assert" "github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"golang.org/x/crypto/ed25519" "golang.org/x/crypto/ed25519"
) )
//TODO: test file permissions
func Test_signSummary(t *testing.T) { func Test_signSummary(t *testing.T) {
assert.Equal(t, "sign <flags>: create and sign a certificate", signSummary()) assert.Equal(t, "sign <flags>: create and sign a certificate", signSummary())
} }
@@ -38,24 +39,17 @@ func Test_signHelp(t *testing.T) {
" -in-pub string\n"+ " -in-pub string\n"+
" \tOptional (if out-key not set): path to read a previously generated public key\n"+ " \tOptional (if out-key not set): path to read a previously generated public key\n"+
" -ip string\n"+ " -ip string\n"+
" \tDeprecated, see -networks\n"+ " \tRequired: ip and network in CIDR notation to assign the cert\n"+
" -name string\n"+ " -name string\n"+
" \tRequired: name of the cert, usually a hostname\n"+ " \tRequired: name of the cert, usually a hostname\n"+
" -networks string\n"+
" \tRequired: comma separated list of ip address and network in CIDR notation to assign to this cert\n"+
" -out-crt string\n"+ " -out-crt string\n"+
" \tOptional: path to write the certificate to\n"+ " \tOptional: path to write the certificate to\n"+
" -out-key string\n"+ " -out-key string\n"+
" \tOptional (if in-pub not set): path to write the private key to\n"+ " \tOptional (if in-pub not set): path to write the private key to\n"+
" -out-qr string\n"+ " -out-qr string\n"+
" \tOptional: output a qr code image (png) of the certificate\n"+ " \tOptional: output a qr code image (png) of the certificate\n"+
optionalPkcs11String(" -pkcs11 string\n \tOptional: PKCS#11 URI to an existing private key\n")+
" -subnets string\n"+ " -subnets string\n"+
" \tDeprecated, see -unsafe-networks\n"+ " \tOptional: comma separated list of subnet this cert can serve for\n",
" -unsafe-networks string\n"+
" \tOptional: comma separated list of ip address and network in CIDR notation. Unsafe networks this cert can route for\n"+
" -version uint\n"+
" \tOptional: version of the certificate format to use, the default is to create both v1 and v2 certificates.\n",
ob.String(), ob.String(),
) )
} }
@@ -64,57 +58,36 @@ func Test_signCert(t *testing.T) {
ob := &bytes.Buffer{} ob := &bytes.Buffer{}
eb := &bytes.Buffer{} eb := &bytes.Buffer{}
nopw := &StubPasswordReader{
password: []byte(""),
err: nil,
}
errpw := &StubPasswordReader{
password: []byte(""),
err: errors.New("stub error"),
}
passphrase := []byte("DO NOT USE THIS KEY")
testpw := &StubPasswordReader{
password: passphrase,
err: nil,
}
// required args // required args
assertHelpError(t, signCert(
[]string{"-version", "1", "-ca-crt", "./nope", "-ca-key", "./nope", "-ip", "1.1.1.1/24", "-out-key", "nope", "-out-crt", "nope"}, ob, eb, nopw, assertHelpError(t, signCert([]string{"-ca-crt", "./nope", "-ca-key", "./nope", "-ip", "1.1.1.1/24", "-out-key", "nope", "-out-crt", "nope"}, ob, eb), "-name is required")
), "-name is required")
assert.Empty(t, ob.String()) assert.Empty(t, ob.String())
assert.Empty(t, eb.String()) assert.Empty(t, eb.String())
assertHelpError(t, signCert( assertHelpError(t, signCert([]string{"-ca-crt", "./nope", "-ca-key", "./nope", "-name", "test", "-out-key", "nope", "-out-crt", "nope"}, ob, eb), "-ip is required")
[]string{"-version", "1", "-ca-crt", "./nope", "-ca-key", "./nope", "-name", "test", "-out-key", "nope", "-out-crt", "nope"}, ob, eb, nopw,
), "-networks is required")
assert.Empty(t, ob.String()) assert.Empty(t, ob.String())
assert.Empty(t, eb.String()) assert.Empty(t, eb.String())
// cannot set -in-pub and -out-key // cannot set -in-pub and -out-key
assertHelpError(t, signCert( assertHelpError(t, signCert([]string{"-ca-crt", "./nope", "-ca-key", "./nope", "-name", "test", "-in-pub", "nope", "-ip", "1.1.1.1/24", "-out-crt", "nope", "-out-key", "nope"}, ob, eb), "cannot set both -in-pub and -out-key")
[]string{"-version", "1", "-ca-crt", "./nope", "-ca-key", "./nope", "-name", "test", "-in-pub", "nope", "-ip", "1.1.1.1/24", "-out-crt", "nope", "-out-key", "nope"}, ob, eb, nopw,
), "cannot set both -in-pub and -out-key")
assert.Empty(t, ob.String()) assert.Empty(t, ob.String())
assert.Empty(t, eb.String()) assert.Empty(t, eb.String())
// failed to read key // failed to read key
ob.Reset() ob.Reset()
eb.Reset() eb.Reset()
args := []string{"-version", "1", "-ca-crt", "./nope", "-ca-key", "./nope", "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", "nope", "-out-key", "nope", "-duration", "100m"} args := []string{"-ca-crt", "./nope", "-ca-key", "./nope", "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", "nope", "-out-key", "nope", "-duration", "100m"}
require.EqualError(t, signCert(args, ob, eb, nopw), "error while reading ca-key: open ./nope: "+NoSuchFileError) assert.EqualError(t, signCert(args, ob, eb), "error while reading ca-key: open ./nope: "+NoSuchFileError)
// failed to unmarshal key // failed to unmarshal key
ob.Reset() ob.Reset()
eb.Reset() eb.Reset()
caKeyF, err := os.CreateTemp("", "sign-cert.key") caKeyF, err := ioutil.TempFile("", "sign-cert.key")
require.NoError(t, err) assert.Nil(t, err)
defer os.Remove(caKeyF.Name()) defer os.Remove(caKeyF.Name())
args = []string{"-version", "1", "-ca-crt", "./nope", "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", "nope", "-out-key", "nope", "-duration", "100m"} args = []string{"-ca-crt", "./nope", "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", "nope", "-out-key", "nope", "-duration", "100m"}
require.EqualError(t, signCert(args, ob, eb, nopw), "error while parsing ca-key: input did not contain a valid PEM encoded block") assert.EqualError(t, signCert(args, ob, eb), "error while parsing ca-key: input did not contain a valid PEM encoded block")
assert.Empty(t, ob.String()) assert.Empty(t, ob.String())
assert.Empty(t, eb.String()) assert.Empty(t, eb.String())
@@ -122,46 +95,54 @@ func Test_signCert(t *testing.T) {
ob.Reset() ob.Reset()
eb.Reset() eb.Reset()
caPub, caPriv, _ := ed25519.GenerateKey(rand.Reader) caPub, caPriv, _ := ed25519.GenerateKey(rand.Reader)
caKeyF.Write(cert.MarshalSigningPrivateKeyToPEM(cert.Curve_CURVE25519, caPriv)) caKeyF.Write(cert.MarshalEd25519PrivateKey(caPriv))
// failed to read cert // failed to read cert
args = []string{"-version", "1", "-ca-crt", "./nope", "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", "nope", "-out-key", "nope", "-duration", "100m"} args = []string{"-ca-crt", "./nope", "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", "nope", "-out-key", "nope", "-duration", "100m"}
require.EqualError(t, signCert(args, ob, eb, nopw), "error while reading ca-crt: open ./nope: "+NoSuchFileError) assert.EqualError(t, signCert(args, ob, eb), "error while reading ca-crt: open ./nope: "+NoSuchFileError)
assert.Empty(t, ob.String()) assert.Empty(t, ob.String())
assert.Empty(t, eb.String()) assert.Empty(t, eb.String())
// failed to unmarshal cert // failed to unmarshal cert
ob.Reset() ob.Reset()
eb.Reset() eb.Reset()
caCrtF, err := os.CreateTemp("", "sign-cert.crt") caCrtF, err := ioutil.TempFile("", "sign-cert.crt")
require.NoError(t, err) assert.Nil(t, err)
defer os.Remove(caCrtF.Name()) defer os.Remove(caCrtF.Name())
args = []string{"-version", "1", "-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", "nope", "-out-key", "nope", "-duration", "100m"} args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", "nope", "-out-key", "nope", "-duration", "100m"}
require.EqualError(t, signCert(args, ob, eb, nopw), "error while parsing ca-crt: input did not contain a valid PEM encoded block") assert.EqualError(t, signCert(args, ob, eb), "error while parsing ca-crt: input did not contain a valid PEM encoded block")
assert.Empty(t, ob.String()) assert.Empty(t, ob.String())
assert.Empty(t, eb.String()) assert.Empty(t, eb.String())
// write a proper ca cert for later // write a proper ca cert for later
ca, _ := NewTestCaCert("ca", caPub, caPriv, time.Now(), time.Now().Add(time.Minute*200), nil, nil, nil) ca := cert.NebulaCertificate{
b, _ := ca.MarshalPEM() Details: cert.NebulaCertificateDetails{
Name: "ca",
NotBefore: time.Now(),
NotAfter: time.Now().Add(time.Minute * 200),
PublicKey: caPub,
IsCA: true,
},
}
b, _ := ca.MarshalToPEM()
caCrtF.Write(b) caCrtF.Write(b)
// failed to read pub // failed to read pub
args = []string{"-version", "1", "-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", "nope", "-in-pub", "./nope", "-duration", "100m"} args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", "nope", "-in-pub", "./nope", "-duration", "100m"}
require.EqualError(t, signCert(args, ob, eb, nopw), "error while reading in-pub: open ./nope: "+NoSuchFileError) assert.EqualError(t, signCert(args, ob, eb), "error while reading in-pub: open ./nope: "+NoSuchFileError)
assert.Empty(t, ob.String()) assert.Empty(t, ob.String())
assert.Empty(t, eb.String()) assert.Empty(t, eb.String())
// failed to unmarshal pub // failed to unmarshal pub
ob.Reset() ob.Reset()
eb.Reset() eb.Reset()
inPubF, err := os.CreateTemp("", "in.pub") inPubF, err := ioutil.TempFile("", "in.pub")
require.NoError(t, err) assert.Nil(t, err)
defer os.Remove(inPubF.Name()) defer os.Remove(inPubF.Name())
args = []string{"-version", "1", "-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", "nope", "-in-pub", inPubF.Name(), "-duration", "100m"} args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", "nope", "-in-pub", inPubF.Name(), "-duration", "100m"}
require.EqualError(t, signCert(args, ob, eb, nopw), "error while parsing in-pub: input did not contain a valid PEM encoded block") assert.EqualError(t, signCert(args, ob, eb), "error while parsing in-pub: input did not contain a valid PEM encoded block")
assert.Empty(t, ob.String()) assert.Empty(t, ob.String())
assert.Empty(t, eb.String()) assert.Empty(t, eb.String())
@@ -169,124 +150,102 @@ func Test_signCert(t *testing.T) {
ob.Reset() ob.Reset()
eb.Reset() eb.Reset()
inPub, _ := x25519Keypair() inPub, _ := x25519Keypair()
inPubF.Write(cert.MarshalPublicKeyToPEM(cert.Curve_CURVE25519, inPub)) inPubF.Write(cert.MarshalX25519PublicKey(inPub))
// bad ip cidr // bad ip cidr
ob.Reset() ob.Reset()
eb.Reset() eb.Reset()
args = []string{"-version", "1", "-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "a1.1.1.1/24", "-out-crt", "nope", "-out-key", "nope", "-duration", "100m"} args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "a1.1.1.1/24", "-out-crt", "nope", "-out-key", "nope", "-duration", "100m"}
assertHelpError(t, signCert(args, ob, eb, nopw), "invalid -networks definition: a1.1.1.1/24") assertHelpError(t, signCert(args, ob, eb), "invalid ip definition: invalid CIDR address: a1.1.1.1/24")
assert.Empty(t, ob.String())
assert.Empty(t, eb.String())
ob.Reset()
eb.Reset()
args = []string{"-version", "1", "-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "100::100/100", "-out-crt", "nope", "-out-key", "nope", "-duration", "100m"}
assertHelpError(t, signCert(args, ob, eb, nopw), "invalid -networks definition: v1 certificates can only have a single ipv4 address")
assert.Empty(t, ob.String())
assert.Empty(t, eb.String())
ob.Reset()
eb.Reset()
args = []string{"-version", "1", "-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24,1.1.1.2/24", "-out-crt", "nope", "-out-key", "nope", "-duration", "100m"}
assertHelpError(t, signCert(args, ob, eb, nopw), "invalid -networks definition: v1 certificates can only have a single ipv4 address")
assert.Empty(t, ob.String()) assert.Empty(t, ob.String())
assert.Empty(t, eb.String()) assert.Empty(t, eb.String())
// bad subnet cidr // bad subnet cidr
ob.Reset() ob.Reset()
eb.Reset() eb.Reset()
args = []string{"-version", "1", "-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", "nope", "-out-key", "nope", "-duration", "100m", "-subnets", "a"} args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", "nope", "-out-key", "nope", "-duration", "100m", "-subnets", "a"}
assertHelpError(t, signCert(args, ob, eb, nopw), "invalid -unsafe-networks definition: a") assertHelpError(t, signCert(args, ob, eb), "invalid subnet definition: invalid CIDR address: a")
assert.Empty(t, ob.String())
assert.Empty(t, eb.String())
ob.Reset()
eb.Reset()
args = []string{"-version", "1", "-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", "nope", "-out-key", "nope", "-duration", "100m", "-subnets", "100::100/100"}
assertHelpError(t, signCert(args, ob, eb, nopw), "invalid -unsafe-networks definition: v1 certificates can only be ipv4")
assert.Empty(t, ob.String()) assert.Empty(t, ob.String())
assert.Empty(t, eb.String()) assert.Empty(t, eb.String())
// mismatched ca key // mismatched ca key
_, caPriv2, _ := ed25519.GenerateKey(rand.Reader) _, caPriv2, _ := ed25519.GenerateKey(rand.Reader)
caKeyF2, err := os.CreateTemp("", "sign-cert-2.key") caKeyF2, err := ioutil.TempFile("", "sign-cert-2.key")
require.NoError(t, err) assert.Nil(t, err)
defer os.Remove(caKeyF2.Name()) defer os.Remove(caKeyF2.Name())
caKeyF2.Write(cert.MarshalSigningPrivateKeyToPEM(cert.Curve_CURVE25519, caPriv2)) caKeyF2.Write(cert.MarshalEd25519PrivateKey(caPriv2))
ob.Reset() ob.Reset()
eb.Reset() eb.Reset()
args = []string{"-version", "1", "-ca-crt", caCrtF.Name(), "-ca-key", caKeyF2.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", "nope", "-out-key", "nope", "-duration", "100m", "-subnets", "a"} args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF2.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", "nope", "-out-key", "nope", "-duration", "100m", "-subnets", "a"}
require.EqualError(t, signCert(args, ob, eb, nopw), "refusing to sign, root certificate does not match private key") assert.EqualError(t, signCert(args, ob, eb), "refusing to sign, root certificate does not match private key")
assert.Empty(t, ob.String()) assert.Empty(t, ob.String())
assert.Empty(t, eb.String()) assert.Empty(t, eb.String())
// failed key write // failed key write
ob.Reset() ob.Reset()
eb.Reset() eb.Reset()
args = []string{"-version", "1", "-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", "/do/not/write/pleasecrt", "-out-key", "/do/not/write/pleasekey", "-duration", "100m", "-subnets", "10.1.1.1/32"} args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", "/do/not/write/pleasecrt", "-out-key", "/do/not/write/pleasekey", "-duration", "100m", "-subnets", "10.1.1.1/32"}
require.EqualError(t, signCert(args, ob, eb, nopw), "error while writing out-key: open /do/not/write/pleasekey: "+NoSuchDirError) assert.EqualError(t, signCert(args, ob, eb), "error while writing out-key: open /do/not/write/pleasekey: "+NoSuchDirError)
assert.Empty(t, ob.String()) assert.Empty(t, ob.String())
assert.Empty(t, eb.String()) assert.Empty(t, eb.String())
// create temp key file // create temp key file
keyF, err := os.CreateTemp("", "test.key") keyF, err := ioutil.TempFile("", "test.key")
require.NoError(t, err) assert.Nil(t, err)
os.Remove(keyF.Name()) os.Remove(keyF.Name())
// failed cert write // failed cert write
ob.Reset() ob.Reset()
eb.Reset() eb.Reset()
args = []string{"-version", "1", "-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", "/do/not/write/pleasecrt", "-out-key", keyF.Name(), "-duration", "100m", "-subnets", "10.1.1.1/32"} args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", "/do/not/write/pleasecrt", "-out-key", keyF.Name(), "-duration", "100m", "-subnets", "10.1.1.1/32"}
require.EqualError(t, signCert(args, ob, eb, nopw), "error while writing out-crt: open /do/not/write/pleasecrt: "+NoSuchDirError) assert.EqualError(t, signCert(args, ob, eb), "error while writing out-crt: open /do/not/write/pleasecrt: "+NoSuchDirError)
assert.Empty(t, ob.String()) assert.Empty(t, ob.String())
assert.Empty(t, eb.String()) assert.Empty(t, eb.String())
os.Remove(keyF.Name()) os.Remove(keyF.Name())
// create temp cert file // create temp cert file
crtF, err := os.CreateTemp("", "test.crt") crtF, err := ioutil.TempFile("", "test.crt")
require.NoError(t, err) assert.Nil(t, err)
os.Remove(crtF.Name()) os.Remove(crtF.Name())
// test proper cert with removed empty groups and subnets // test proper cert with removed empty groups and subnets
ob.Reset() ob.Reset()
eb.Reset() eb.Reset()
args = []string{"-version", "1", "-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", crtF.Name(), "-out-key", keyF.Name(), "-duration", "100m", "-subnets", "10.1.1.1/32, , 10.2.2.2/32 , , ,, 10.5.5.5/32", "-groups", "1,, 2 , ,,,3,4,5"} args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", crtF.Name(), "-out-key", keyF.Name(), "-duration", "100m", "-subnets", "10.1.1.1/32, , 10.2.2.2/32 , , ,, 10.5.5.5/32", "-groups", "1,, 2 , ,,,3,4,5"}
require.NoError(t, signCert(args, ob, eb, nopw)) assert.Nil(t, signCert(args, ob, eb))
assert.Empty(t, ob.String()) assert.Empty(t, ob.String())
assert.Empty(t, eb.String()) assert.Empty(t, eb.String())
// read cert and key files // read cert and key files
rb, _ := os.ReadFile(keyF.Name()) rb, _ := ioutil.ReadFile(keyF.Name())
lKey, b, curve, err := cert.UnmarshalPrivateKeyFromPEM(rb) lKey, b, err := cert.UnmarshalX25519PrivateKey(rb)
assert.Equal(t, cert.Curve_CURVE25519, curve) assert.Len(t, b, 0)
assert.Empty(t, b) assert.Nil(t, err)
require.NoError(t, err)
assert.Len(t, lKey, 32) assert.Len(t, lKey, 32)
rb, _ = os.ReadFile(crtF.Name()) rb, _ = ioutil.ReadFile(crtF.Name())
lCrt, b, err := cert.UnmarshalCertificateFromPEM(rb) lCrt, b, err := cert.UnmarshalNebulaCertificateFromPEM(rb)
assert.Empty(t, b) assert.Len(t, b, 0)
require.NoError(t, err) assert.Nil(t, err)
assert.Equal(t, "test", lCrt.Name()) assert.Equal(t, "test", lCrt.Details.Name)
assert.Equal(t, "1.1.1.1/24", lCrt.Networks()[0].String()) assert.Equal(t, "1.1.1.1/24", lCrt.Details.Ips[0].String())
assert.Len(t, lCrt.Networks(), 1) assert.Len(t, lCrt.Details.Ips, 1)
assert.False(t, lCrt.IsCA()) assert.False(t, lCrt.Details.IsCA)
assert.Equal(t, []string{"1", "2", "3", "4", "5"}, lCrt.Groups()) assert.Equal(t, []string{"1", "2", "3", "4", "5"}, lCrt.Details.Groups)
assert.Len(t, lCrt.UnsafeNetworks(), 3) assert.Len(t, lCrt.Details.Subnets, 3)
assert.Len(t, lCrt.PublicKey(), 32) assert.Len(t, lCrt.Details.PublicKey, 32)
assert.Equal(t, time.Duration(time.Minute*100), lCrt.NotAfter().Sub(lCrt.NotBefore())) assert.Equal(t, time.Duration(time.Minute*100), lCrt.Details.NotAfter.Sub(lCrt.Details.NotBefore))
sns := []string{} sns := []string{}
for _, sn := range lCrt.UnsafeNetworks() { for _, sn := range lCrt.Details.Subnets {
sns = append(sns, sn.String()) sns = append(sns, sn.String())
} }
assert.Equal(t, []string{"10.1.1.1/32", "10.2.2.2/32", "10.5.5.5/32"}, sns) assert.Equal(t, []string{"10.1.1.1/32", "10.2.2.2/32", "10.5.5.5/32"}, sns)
issuer, _ := ca.Fingerprint() issuer, _ := ca.Sha256Sum()
assert.Equal(t, issuer, lCrt.Issuer()) assert.Equal(t, issuer, lCrt.Details.Issuer)
assert.True(t, lCrt.CheckSignature(caPub)) assert.True(t, lCrt.CheckSignature(caPub))
@@ -295,116 +254,53 @@ func Test_signCert(t *testing.T) {
os.Remove(crtF.Name()) os.Remove(crtF.Name())
ob.Reset() ob.Reset()
eb.Reset() eb.Reset()
args = []string{"-version", "1", "-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", crtF.Name(), "-in-pub", inPubF.Name(), "-duration", "100m", "-groups", "1"} args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", crtF.Name(), "-in-pub", inPubF.Name(), "-duration", "100m", "-groups", "1"}
require.NoError(t, signCert(args, ob, eb, nopw)) assert.Nil(t, signCert(args, ob, eb))
assert.Empty(t, ob.String()) assert.Empty(t, ob.String())
assert.Empty(t, eb.String()) assert.Empty(t, eb.String())
// read cert file and check pub key matches in-pub // read cert file and check pub key matches in-pub
rb, _ = os.ReadFile(crtF.Name()) rb, _ = ioutil.ReadFile(crtF.Name())
lCrt, b, err = cert.UnmarshalCertificateFromPEM(rb) lCrt, b, err = cert.UnmarshalNebulaCertificateFromPEM(rb)
assert.Empty(t, b) assert.Len(t, b, 0)
require.NoError(t, err) assert.Nil(t, err)
assert.Equal(t, lCrt.PublicKey(), inPub) assert.Equal(t, lCrt.Details.PublicKey, inPub)
// test refuse to sign cert with duration beyond root // test refuse to sign cert with duration beyond root
ob.Reset() ob.Reset()
eb.Reset() eb.Reset()
os.Remove(keyF.Name()) args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", crtF.Name(), "-out-key", keyF.Name(), "-duration", "1000m", "-subnets", "10.1.1.1/32, , 10.2.2.2/32 , , ,, 10.5.5.5/32", "-groups", "1,, 2 , ,,,3,4,5"}
os.Remove(crtF.Name()) assert.EqualError(t, signCert(args, ob, eb), "refusing to sign, root certificate constraints violated: certificate expires after signing certificate")
args = []string{"-version", "1", "-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", crtF.Name(), "-out-key", keyF.Name(), "-duration", "1000m", "-subnets", "10.1.1.1/32, , 10.2.2.2/32 , , ,, 10.5.5.5/32", "-groups", "1,, 2 , ,,,3,4,5"}
require.EqualError(t, signCert(args, ob, eb, nopw), "error while signing: certificate expires after signing certificate")
assert.Empty(t, ob.String()) assert.Empty(t, ob.String())
assert.Empty(t, eb.String()) assert.Empty(t, eb.String())
// create valid cert/key for overwrite tests // create valid cert/key for overwrite tests
os.Remove(keyF.Name()) os.Remove(keyF.Name())
os.Remove(crtF.Name()) os.Remove(crtF.Name())
args = []string{"-version", "1", "-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", crtF.Name(), "-out-key", keyF.Name(), "-duration", "100m", "-subnets", "10.1.1.1/32, , 10.2.2.2/32 , , ,, 10.5.5.5/32", "-groups", "1,, 2 , ,,,3,4,5"} args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", crtF.Name(), "-out-key", keyF.Name(), "-duration", "100m", "-subnets", "10.1.1.1/32, , 10.2.2.2/32 , , ,, 10.5.5.5/32", "-groups", "1,, 2 , ,,,3,4,5"}
require.NoError(t, signCert(args, ob, eb, nopw)) assert.Nil(t, signCert(args, ob, eb))
// test that we won't overwrite existing key file // test that we won't overwrite existing key file
os.Remove(crtF.Name()) os.Remove(crtF.Name())
ob.Reset() ob.Reset()
eb.Reset() eb.Reset()
args = []string{"-version", "1", "-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", crtF.Name(), "-out-key", keyF.Name(), "-duration", "100m", "-subnets", "10.1.1.1/32, , 10.2.2.2/32 , , ,, 10.5.5.5/32", "-groups", "1,, 2 , ,,,3,4,5"} args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", crtF.Name(), "-out-key", keyF.Name(), "-duration", "100m", "-subnets", "10.1.1.1/32, , 10.2.2.2/32 , , ,, 10.5.5.5/32", "-groups", "1,, 2 , ,,,3,4,5"}
require.EqualError(t, signCert(args, ob, eb, nopw), "refusing to overwrite existing key: "+keyF.Name()) assert.EqualError(t, signCert(args, ob, eb), "refusing to overwrite existing key: "+keyF.Name())
assert.Empty(t, ob.String()) assert.Empty(t, ob.String())
assert.Empty(t, eb.String()) assert.Empty(t, eb.String())
// create valid cert/key for overwrite tests // create valid cert/key for overwrite tests
os.Remove(keyF.Name()) os.Remove(keyF.Name())
os.Remove(crtF.Name()) os.Remove(crtF.Name())
args = []string{"-version", "1", "-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", crtF.Name(), "-out-key", keyF.Name(), "-duration", "100m", "-subnets", "10.1.1.1/32, , 10.2.2.2/32 , , ,, 10.5.5.5/32", "-groups", "1,, 2 , ,,,3,4,5"} args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", crtF.Name(), "-out-key", keyF.Name(), "-duration", "100m", "-subnets", "10.1.1.1/32, , 10.2.2.2/32 , , ,, 10.5.5.5/32", "-groups", "1,, 2 , ,,,3,4,5"}
require.NoError(t, signCert(args, ob, eb, nopw)) assert.Nil(t, signCert(args, ob, eb))
// test that we won't overwrite existing certificate file // test that we won't overwrite existing certificate file
os.Remove(keyF.Name()) os.Remove(keyF.Name())
ob.Reset() ob.Reset()
eb.Reset() eb.Reset()
args = []string{"-version", "1", "-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", crtF.Name(), "-out-key", keyF.Name(), "-duration", "100m", "-subnets", "10.1.1.1/32, , 10.2.2.2/32 , , ,, 10.5.5.5/32", "-groups", "1,, 2 , ,,,3,4,5"} args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", crtF.Name(), "-out-key", keyF.Name(), "-duration", "100m", "-subnets", "10.1.1.1/32, , 10.2.2.2/32 , , ,, 10.5.5.5/32", "-groups", "1,, 2 , ,,,3,4,5"}
require.EqualError(t, signCert(args, ob, eb, nopw), "refusing to overwrite existing cert: "+crtF.Name()) assert.EqualError(t, signCert(args, ob, eb), "refusing to overwrite existing cert: "+crtF.Name())
assert.Empty(t, ob.String()) assert.Empty(t, ob.String())
assert.Empty(t, eb.String()) assert.Empty(t, eb.String())
// create valid cert/key using encrypted CA key
os.Remove(caKeyF.Name())
os.Remove(caCrtF.Name())
os.Remove(keyF.Name())
os.Remove(crtF.Name())
ob.Reset()
eb.Reset()
caKeyF, err = os.CreateTemp("", "sign-cert.key")
require.NoError(t, err)
defer os.Remove(caKeyF.Name())
caCrtF, err = os.CreateTemp("", "sign-cert.crt")
require.NoError(t, err)
defer os.Remove(caCrtF.Name())
// generate the encrypted key
caPub, caPriv, _ = ed25519.GenerateKey(rand.Reader)
kdfParams := cert.NewArgon2Parameters(64*1024, 4, 3)
b, _ = cert.EncryptAndMarshalSigningPrivateKey(cert.Curve_CURVE25519, caPriv, passphrase, kdfParams)
caKeyF.Write(b)
ca, _ = NewTestCaCert("ca", caPub, caPriv, time.Now(), time.Now().Add(time.Minute*200), nil, nil, nil)
b, _ = ca.MarshalPEM()
caCrtF.Write(b)
// test with the proper password
args = []string{"-version", "1", "-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", crtF.Name(), "-out-key", keyF.Name(), "-duration", "100m", "-subnets", "10.1.1.1/32, , 10.2.2.2/32 , , ,, 10.5.5.5/32", "-groups", "1,, 2 , ,,,3,4,5"}
require.NoError(t, signCert(args, ob, eb, testpw))
assert.Equal(t, "Enter passphrase: ", ob.String())
assert.Empty(t, eb.String())
// test with the wrong password
ob.Reset()
eb.Reset()
testpw.password = []byte("invalid password")
args = []string{"-version", "1", "-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", crtF.Name(), "-out-key", keyF.Name(), "-duration", "100m", "-subnets", "10.1.1.1/32, , 10.2.2.2/32 , , ,, 10.5.5.5/32", "-groups", "1,, 2 , ,,,3,4,5"}
require.Error(t, signCert(args, ob, eb, testpw))
assert.Equal(t, "Enter passphrase: ", ob.String())
assert.Empty(t, eb.String())
// test with the user not entering a password
ob.Reset()
eb.Reset()
args = []string{"-version", "1", "-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", crtF.Name(), "-out-key", keyF.Name(), "-duration", "100m", "-subnets", "10.1.1.1/32, , 10.2.2.2/32 , , ,, 10.5.5.5/32", "-groups", "1,, 2 , ,,,3,4,5"}
require.Error(t, signCert(args, ob, eb, nopw))
// normally the user hitting enter on the prompt would add newlines between these
assert.Equal(t, "Enter passphrase: Enter passphrase: Enter passphrase: Enter passphrase: Enter passphrase: ", ob.String())
assert.Empty(t, eb.String())
// test an error condition
ob.Reset()
eb.Reset()
args = []string{"-version", "1", "-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", crtF.Name(), "-out-key", keyF.Name(), "-duration", "100m", "-subnets", "10.1.1.1/32, , 10.2.2.2/32 , , ,, 10.5.5.5/32", "-groups", "1,, 2 , ,,,3,4,5"}
require.Error(t, signCert(args, ob, eb, errpw))
assert.Equal(t, "Enter passphrase: ", ob.String())
assert.Empty(t, eb.String())
} }


@@ -1,10 +1,10 @@
package main package main
import ( import (
"errors"
"flag" "flag"
"fmt" "fmt"
"io" "io"
"io/ioutil"
"os" "os"
"strings" "strings"
"time" "time"
@@ -40,16 +40,16 @@ func verify(args []string, out io.Writer, errOut io.Writer) error {
return err return err
} }
rawCACert, err := os.ReadFile(*vf.caPath) rawCACert, err := ioutil.ReadFile(*vf.caPath)
if err != nil { if err != nil {
return fmt.Errorf("error while reading ca: %w", err) return fmt.Errorf("error while reading ca: %s", err)
} }
caPool := cert.NewCAPool() caPool := cert.NewCAPool()
for { for {
rawCACert, err = caPool.AddCAFromPEM(rawCACert) rawCACert, err = caPool.AddCACertificate(rawCACert)
if err != nil { if err != nil {
return fmt.Errorf("error while adding ca cert to pool: %w", err) return fmt.Errorf("error while adding ca cert to pool: %s", err)
} }
if rawCACert == nil || len(rawCACert) == 0 || strings.TrimSpace(string(rawCACert)) == "" { if rawCACert == nil || len(rawCACert) == 0 || strings.TrimSpace(string(rawCACert)) == "" {
@@ -57,32 +57,22 @@ func verify(args []string, out io.Writer, errOut io.Writer) error {
} }
} }
rawCert, err := os.ReadFile(*vf.certPath) rawCert, err := ioutil.ReadFile(*vf.certPath)
if err != nil { if err != nil {
return fmt.Errorf("unable to read crt: %w", err) return fmt.Errorf("unable to read crt; %s", err)
}
var errs []error
for {
if len(rawCert) == 0 {
break
}
c, extra, err := cert.UnmarshalCertificateFromPEM(rawCert)
if err != nil {
return fmt.Errorf("error while parsing crt: %w", err)
}
rawCert = extra
_, err = caPool.VerifyCertificate(time.Now(), c)
if err != nil {
switch {
case errors.Is(err, cert.ErrCaNotFound):
errs = append(errs, fmt.Errorf("error while verifying certificate v%d %s with issuer %s: %w", c.Version(), c.Name(), c.Issuer(), err))
default:
errs = append(errs, fmt.Errorf("error while verifying certificate %+v: %w", c, err))
}
}
} }
return errors.Join(errs...) c, _, err := cert.UnmarshalNebulaCertificateFromPEM(rawCert)
if err != nil {
return fmt.Errorf("error while parsing crt: %s", err)
}
good, err := c.Verify(time.Now(), caPool)
if !good {
return err
}
return nil
} }
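For reference, a hedged sketch of calling the reworked verify entry point, which now walks every certificate in the supplied PEM bundle and joins the individual verification errors. Paths are placeholders and the snippet assumes bytes and fmt are imported.

```go
// Sketch only: drives verify() the way Test_verify below does.
ob := &bytes.Buffer{}
eb := &bytes.Buffer{}
if err := verify([]string{"-ca", "ca.crt", "-crt", "hosts.crt"}, ob, eb); err != nil {
	// err aggregates one entry per certificate that failed verification.
	fmt.Println(err)
}
```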
func verifySummary() string { func verifySummary() string {
@@ -91,7 +81,7 @@ func verifySummary() string {
func verifyHelp(out io.Writer) { func verifyHelp(out io.Writer) {
vf := newVerifyFlags() vf := newVerifyFlags()
_, _ = out.Write([]byte("Usage of " + os.Args[0] + " " + verifySummary() + "\n")) out.Write([]byte("Usage of " + os.Args[0] + " " + verifySummary() + "\n"))
vf.set.SetOutput(out) vf.set.SetOutput(out)
vf.set.PrintDefaults() vf.set.PrintDefaults()
} }


@@ -3,13 +3,13 @@ package main
import ( import (
"bytes" "bytes"
"crypto/rand" "crypto/rand"
"io/ioutil"
"os" "os"
"testing" "testing"
"time" "time"
"github.com/slackhq/nebula/cert" "github.com/slackhq/nebula/cert"
"github.com/stretchr/testify/assert" "github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"golang.org/x/crypto/ed25519" "golang.org/x/crypto/ed25519"
) )
@@ -38,87 +38,105 @@ func Test_verify(t *testing.T) {
// required args // required args
assertHelpError(t, verify([]string{"-ca", "derp"}, ob, eb), "-crt is required") assertHelpError(t, verify([]string{"-ca", "derp"}, ob, eb), "-crt is required")
assert.Empty(t, ob.String()) assert.Equal(t, "", ob.String())
assert.Empty(t, eb.String()) assert.Equal(t, "", eb.String())
assertHelpError(t, verify([]string{"-crt", "derp"}, ob, eb), "-ca is required") assertHelpError(t, verify([]string{"-crt", "derp"}, ob, eb), "-ca is required")
assert.Empty(t, ob.String()) assert.Equal(t, "", ob.String())
assert.Empty(t, eb.String()) assert.Equal(t, "", eb.String())
// no ca at path // no ca at path
ob.Reset() ob.Reset()
eb.Reset() eb.Reset()
err := verify([]string{"-ca", "does_not_exist", "-crt", "does_not_exist"}, ob, eb) err := verify([]string{"-ca", "does_not_exist", "-crt", "does_not_exist"}, ob, eb)
assert.Empty(t, ob.String()) assert.Equal(t, "", ob.String())
assert.Empty(t, eb.String()) assert.Equal(t, "", eb.String())
require.EqualError(t, err, "error while reading ca: open does_not_exist: "+NoSuchFileError) assert.EqualError(t, err, "error while reading ca: open does_not_exist: "+NoSuchFileError)
// invalid ca at path // invalid ca at path
ob.Reset() ob.Reset()
eb.Reset() eb.Reset()
caFile, err := os.CreateTemp("", "verify-ca") caFile, err := ioutil.TempFile("", "verify-ca")
require.NoError(t, err) assert.Nil(t, err)
defer os.Remove(caFile.Name()) defer os.Remove(caFile.Name())
caFile.WriteString("-----BEGIN NOPE-----") caFile.WriteString("-----BEGIN NOPE-----")
err = verify([]string{"-ca", caFile.Name(), "-crt", "does_not_exist"}, ob, eb) err = verify([]string{"-ca", caFile.Name(), "-crt", "does_not_exist"}, ob, eb)
assert.Empty(t, ob.String()) assert.Equal(t, "", ob.String())
assert.Empty(t, eb.String()) assert.Equal(t, "", eb.String())
require.EqualError(t, err, "error while adding ca cert to pool: input did not contain a valid PEM encoded block") assert.EqualError(t, err, "error while adding ca cert to pool: input did not contain a valid PEM encoded block")
// make a ca for later // make a ca for later
caPub, caPriv, _ := ed25519.GenerateKey(rand.Reader) caPub, caPriv, _ := ed25519.GenerateKey(rand.Reader)
ca, _ := NewTestCaCert("test-ca", caPub, caPriv, time.Now().Add(time.Hour*-1), time.Now().Add(time.Hour*2), nil, nil, nil) ca := cert.NebulaCertificate{
b, _ := ca.MarshalPEM() Details: cert.NebulaCertificateDetails{
Name: "test-ca",
NotBefore: time.Now().Add(time.Hour * -1),
NotAfter: time.Now().Add(time.Hour),
PublicKey: caPub,
IsCA: true,
},
}
ca.Sign(caPriv)
b, _ := ca.MarshalToPEM()
caFile.Truncate(0) caFile.Truncate(0)
caFile.Seek(0, 0) caFile.Seek(0, 0)
caFile.Write(b) caFile.Write(b)
// no crt at path // no crt at path
err = verify([]string{"-ca", caFile.Name(), "-crt", "does_not_exist"}, ob, eb) err = verify([]string{"-ca", caFile.Name(), "-crt", "does_not_exist"}, ob, eb)
assert.Empty(t, ob.String()) assert.Equal(t, "", ob.String())
assert.Empty(t, eb.String()) assert.Equal(t, "", eb.String())
require.EqualError(t, err, "unable to read crt: open does_not_exist: "+NoSuchFileError) assert.EqualError(t, err, "unable to read crt; open does_not_exist: "+NoSuchFileError)
// invalid crt at path // invalid crt at path
ob.Reset() ob.Reset()
eb.Reset() eb.Reset()
certFile, err := os.CreateTemp("", "verify-cert") certFile, err := ioutil.TempFile("", "verify-cert")
require.NoError(t, err) assert.Nil(t, err)
defer os.Remove(certFile.Name()) defer os.Remove(certFile.Name())
certFile.WriteString("-----BEGIN NOPE-----") certFile.WriteString("-----BEGIN NOPE-----")
err = verify([]string{"-ca", caFile.Name(), "-crt", certFile.Name()}, ob, eb) err = verify([]string{"-ca", caFile.Name(), "-crt", certFile.Name()}, ob, eb)
assert.Empty(t, ob.String()) assert.Equal(t, "", ob.String())
assert.Empty(t, eb.String()) assert.Equal(t, "", eb.String())
require.EqualError(t, err, "error while parsing crt: input did not contain a valid PEM encoded block") assert.EqualError(t, err, "error while parsing crt: input did not contain a valid PEM encoded block")
// unverifiable cert at path // unverifiable cert at path
crt, _ := NewTestCert(ca, caPriv, "test-cert", time.Now().Add(time.Hour*-1), time.Now().Add(time.Hour), nil, nil, nil) _, badPriv, _ := ed25519.GenerateKey(rand.Reader)
// Slightly evil hack to modify the certificate after it was sealed to generate an invalid signature certPub, _ := x25519Keypair()
pub := crt.PublicKey() signer, _ := ca.Sha256Sum()
for i, _ := range pub { crt := cert.NebulaCertificate{
pub[i] = 0 Details: cert.NebulaCertificateDetails{
Name: "test-cert",
NotBefore: time.Now().Add(time.Hour * -1),
NotAfter: time.Now().Add(time.Hour),
PublicKey: certPub,
IsCA: false,
Issuer: signer,
},
} }
b, _ = crt.MarshalPEM()
crt.Sign(badPriv)
b, _ = crt.MarshalToPEM()
certFile.Truncate(0) certFile.Truncate(0)
certFile.Seek(0, 0) certFile.Seek(0, 0)
certFile.Write(b) certFile.Write(b)
err = verify([]string{"-ca", caFile.Name(), "-crt", certFile.Name()}, ob, eb) err = verify([]string{"-ca", caFile.Name(), "-crt", certFile.Name()}, ob, eb)
assert.Empty(t, ob.String()) assert.Equal(t, "", ob.String())
assert.Empty(t, eb.String()) assert.Equal(t, "", eb.String())
require.ErrorIs(t, err, cert.ErrSignatureMismatch) assert.EqualError(t, err, "certificate signature did not match")
// verified cert at path // verified cert at path
crt, _ = NewTestCert(ca, caPriv, "test-cert", time.Now().Add(time.Hour*-1), time.Now().Add(time.Hour), nil, nil, nil) crt.Sign(caPriv)
b, _ = crt.MarshalPEM() b, _ = crt.MarshalToPEM()
certFile.Truncate(0) certFile.Truncate(0)
certFile.Seek(0, 0) certFile.Seek(0, 0)
certFile.Write(b) certFile.Write(b)
err = verify([]string{"-ca", caFile.Name(), "-crt", certFile.Name()}, ob, eb) err = verify([]string{"-ca", caFile.Name(), "-crt", certFile.Name()}, ob, eb)
assert.Empty(t, ob.String()) assert.Equal(t, "", ob.String())
assert.Empty(t, eb.String()) assert.Equal(t, "", eb.String())
require.NoError(t, err) assert.Nil(t, err)
} }
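One side of this hunk builds the test CA inline from cert.NebulaCertificate, while the other relies on NewTestCaCert / NewTestCert helpers whose definitions are not part of this diff. Below is a minimal sketch of what such a CA helper could look like, written only against the v1 cert API visible in the hunk (Details, Sign, MarshalToPEM); the helper name and signature are assumptions for illustration, not the actual helpers.

```go
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"fmt"
	"time"

	"github.com/slackhq/nebula/cert"
)

// newTestCA is a hypothetical stand-in for NewTestCaCert: it builds and
// self-signs a CA certificate valid for the given window and returns it
// alongside its PEM encoding.
func newTestCA(name string, pub ed25519.PublicKey, priv ed25519.PrivateKey, before, after time.Time) (*cert.NebulaCertificate, []byte, error) {
	ca := &cert.NebulaCertificate{
		Details: cert.NebulaCertificateDetails{
			Name:      name,
			NotBefore: before,
			NotAfter:  after,
			PublicKey: pub,
			IsCA:      true,
		},
	}
	if err := ca.Sign(priv); err != nil {
		return nil, nil, err
	}
	pem, err := ca.MarshalToPEM()
	if err != nil {
		return nil, nil, err
	}
	return ca, pem, nil
}

func main() {
	pub, priv, _ := ed25519.GenerateKey(rand.Reader)
	_, pem, err := newTestCA("test-ca", pub, priv, time.Now().Add(-time.Hour), time.Now().Add(2*time.Hour))
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem))
}
```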


@@ -8,12 +8,11 @@ import (
"github.com/sirupsen/logrus" "github.com/sirupsen/logrus"
"github.com/slackhq/nebula" "github.com/slackhq/nebula"
"github.com/slackhq/nebula/config" "github.com/slackhq/nebula/config"
"github.com/slackhq/nebula/util"
) )
// A version string that can be set with // A version string that can be set with
// //
// -ldflags "-X main.Build=SOMEVERSION" // -ldflags "-X main.Build=SOMEVERSION"
// //
// at compile-time. // at compile-time.
var Build string var Build string
@@ -59,22 +58,19 @@ func main() {
} }
ctrl, err := nebula.Main(c, *configTest, Build, l, nil) ctrl, err := nebula.Main(c, *configTest, Build, l, nil)
if err != nil {
util.LogWithContextIfNeeded("Failed to start", err, l) switch v := err.(type) {
case nebula.ContextualError:
v.Log(l)
os.Exit(1)
case error:
l.WithError(err).Error("Failed to start")
os.Exit(1) os.Exit(1)
} }
if !*configTest { if !*configTest {
wait, err := ctrl.Start() ctrl.Start()
if err != nil { ctrl.ShutdownBlock()
util.LogWithContextIfNeeded("Error while running", err, l)
os.Exit(1)
}
go ctrl.ShutdownBlock()
wait()
l.Info("Goodbye")
} }
os.Exit(0) os.Exit(0)


@@ -49,14 +49,6 @@ func (p *program) Stop(s service.Service) error {
return nil return nil
} }
func fileExists(filename string) bool {
_, err := os.Stat(filename)
if os.IsNotExist(err) {
return false
}
return true
}
func doService(configPath *string, configTest *bool, build string, serviceFlag *string) { func doService(configPath *string, configTest *bool, build string, serviceFlag *string) {
if *configPath == "" { if *configPath == "" {
ex, err := os.Executable() ex, err := os.Executable()
@@ -64,9 +56,6 @@ func doService(configPath *string, configTest *bool, build string, serviceFlag *
panic(err) panic(err)
} }
*configPath = filepath.Dir(ex) + "/config.yaml" *configPath = filepath.Dir(ex) + "/config.yaml"
if !fileExists(*configPath) {
*configPath = filepath.Dir(ex) + "/config.yml"
}
} }
svcConfig := &service.Config{ svcConfig := &service.Config{


@@ -3,20 +3,16 @@ package main
import ( import (
"flag" "flag"
"fmt" "fmt"
"log"
"net/http"
_ "net/http/pprof"
"os" "os"
"github.com/sirupsen/logrus" "github.com/sirupsen/logrus"
"github.com/slackhq/nebula" "github.com/slackhq/nebula"
"github.com/slackhq/nebula/config" "github.com/slackhq/nebula/config"
"github.com/slackhq/nebula/util"
) )
// A version string that can be set with // A version string that can be set with
// //
// -ldflags "-X main.Build=SOMEVERSION" // -ldflags "-X main.Build=SOMEVERSION"
// //
// at compile-time. // at compile-time.
var Build string var Build string
@@ -56,27 +52,19 @@ func main() {
} }
ctrl, err := nebula.Main(c, *configTest, Build, l, nil) ctrl, err := nebula.Main(c, *configTest, Build, l, nil)
if err != nil {
util.LogWithContextIfNeeded("Failed to start", err, l) switch v := err.(type) {
case nebula.ContextualError:
v.Log(l)
os.Exit(1)
case error:
l.WithError(err).Error("Failed to start")
os.Exit(1) os.Exit(1)
} }
go func() {
log.Println(http.ListenAndServe("0.0.0.0:6060", nil))
}()
if !*configTest { if !*configTest {
wait, err := ctrl.Start() ctrl.Start()
if err != nil { ctrl.ShutdownBlock()
util.LogWithContextIfNeeded("Error while running", err, l)
os.Exit(1)
}
go ctrl.ShutdownBlock()
notifyReady(l)
wait()
l.Info("Goodbye")
} }
os.Exit(0) os.Exit(0)


@@ -1,42 +0,0 @@
package main
import (
"net"
"os"
"time"
"github.com/sirupsen/logrus"
)
// SdNotifyReady tells systemd the service is ready and dependent services can now be started
// https://www.freedesktop.org/software/systemd/man/sd_notify.html
// https://www.freedesktop.org/software/systemd/man/systemd.service.html
const SdNotifyReady = "READY=1"
func notifyReady(l *logrus.Logger) {
sockName := os.Getenv("NOTIFY_SOCKET")
if sockName == "" {
l.Debugln("NOTIFY_SOCKET systemd env var not set, not sending ready signal")
return
}
conn, err := net.DialTimeout("unixgram", sockName, time.Second)
if err != nil {
l.WithError(err).Error("failed to connect to systemd notification socket")
return
}
defer conn.Close()
err = conn.SetWriteDeadline(time.Now().Add(time.Second))
if err != nil {
l.WithError(err).Error("failed to set the write deadline for the systemd notification socket")
return
}
if _, err = conn.Write([]byte(SdNotifyReady)); err != nil {
l.WithError(err).Error("failed to signal the systemd notification socket")
return
}
l.Debugln("notified systemd the service is ready")
}
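notifyReady speaks the sd_notify protocol: it writes READY=1 to the datagram socket named by the NOTIFY_SOCKET environment variable. A quick way to exercise it without systemd is to stand in for the init system with a unixgram listener. This is a hedged sketch that assumes notifyReady is in scope (same package as the code above); the socket path is arbitrary.

```go
package main

import (
	"net"
	"os"
	"path/filepath"

	"github.com/sirupsen/logrus"
)

func main() {
	l := logrus.New()

	// Pretend to be systemd: listen on a unixgram socket and point
	// NOTIFY_SOCKET at it before calling notifyReady.
	sock := filepath.Join(os.TempDir(), "fake-notify.sock")
	defer os.Remove(sock)

	conn, err := net.ListenUnixgram("unixgram", &net.UnixAddr{Name: sock, Net: "unixgram"})
	if err != nil {
		l.Fatal(err)
	}
	defer conn.Close()

	os.Setenv("NOTIFY_SOCKET", sock)
	notifyReady(l) // the notifyReady shown above

	buf := make([]byte, 64)
	n, _, err := conn.ReadFromUnix(buf)
	if err != nil {
		l.Fatal(err)
	}
	l.Infof("received: %q", buf[:n]) // expect "READY=1"
}
```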


@@ -1,10 +0,0 @@
//go:build !linux
// +build !linux
package main
import "github.com/sirupsen/logrus"
func notifyReady(_ *logrus.Logger) {
// No init service to notify
}


@@ -4,35 +4,33 @@ import (
"context" "context"
"errors" "errors"
"fmt" "fmt"
"math" "io/ioutil"
"os" "os"
"os/signal" "os/signal"
"path/filepath" "path/filepath"
"sort" "sort"
"strconv" "strconv"
"strings" "strings"
"sync"
"syscall" "syscall"
"time" "time"
"dario.cat/mergo" "github.com/imdario/mergo"
"github.com/sirupsen/logrus" "github.com/sirupsen/logrus"
"gopkg.in/yaml.v3" "gopkg.in/yaml.v2"
) )
type C struct { type C struct {
path string path string
files []string files []string
Settings map[string]any Settings map[interface{}]interface{}
oldSettings map[string]any oldSettings map[interface{}]interface{}
callbacks []func(*C) callbacks []func(*C)
l *logrus.Logger l *logrus.Logger
reloadLock sync.Mutex
} }
func NewC(l *logrus.Logger) *C { func NewC(l *logrus.Logger) *C {
return &C{ return &C{
Settings: make(map[string]any), Settings: make(map[interface{}]interface{}),
l: l, l: l,
} }
} }
@@ -76,11 +74,6 @@ func (c *C) RegisterReloadCallback(f func(*C)) {
c.callbacks = append(c.callbacks, f) c.callbacks = append(c.callbacks, f)
} }
// InitialLoad returns true if this is the first load of the config, and ReloadConfig has not been called yet.
func (c *C) InitialLoad() bool {
return c.oldSettings == nil
}
// HasChanged checks if the underlying structure of the provided key has changed after a config reload. The value of // HasChanged checks if the underlying structure of the provided key has changed after a config reload. The value of
// k in both the old and new settings will be serialized, the result of the string comparison is returned. // k in both the old and new settings will be serialized, the result of the string comparison is returned.
// If k is an empty string the entire config is tested. // If k is an empty string the entire config is tested.
@@ -92,8 +85,8 @@ func (c *C) HasChanged(k string) bool {
} }
var ( var (
nv any nv interface{}
ov any ov interface{}
) )
if k == "" { if k == "" {
@@ -121,10 +114,6 @@ func (c *C) HasChanged(k string) bool {
// CatchHUP will listen for the HUP signal in a go routine and reload all configs found in the // CatchHUP will listen for the HUP signal in a go routine and reload all configs found in the
// original path provided to Load. The old settings are shallow copied for change detection after the reload. // original path provided to Load. The old settings are shallow copied for change detection after the reload.
func (c *C) CatchHUP(ctx context.Context) { func (c *C) CatchHUP(ctx context.Context) {
if c.path == "" {
return
}
ch := make(chan os.Signal, 1) ch := make(chan os.Signal, 1)
signal.Notify(ch, syscall.SIGHUP) signal.Notify(ch, syscall.SIGHUP)
@@ -144,10 +133,7 @@ func (c *C) CatchHUP(ctx context.Context) {
} }
func (c *C) ReloadConfig() { func (c *C) ReloadConfig() {
c.reloadLock.Lock() c.oldSettings = make(map[interface{}]interface{})
defer c.reloadLock.Unlock()
c.oldSettings = make(map[string]any)
for k, v := range c.Settings { for k, v := range c.Settings {
c.oldSettings[k] = v c.oldSettings[k] = v
} }
@@ -163,27 +149,6 @@ func (c *C) ReloadConfig() {
} }
} }
func (c *C) ReloadConfigString(raw string) error {
c.reloadLock.Lock()
defer c.reloadLock.Unlock()
c.oldSettings = make(map[string]any)
for k, v := range c.Settings {
c.oldSettings[k] = v
}
err := c.LoadString(raw)
if err != nil {
return err
}
for _, v := range c.callbacks {
v(c)
}
return nil
}
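The version of config.C that includes ReloadConfigString re-parses a raw YAML string and fires the registered reload callbacks, which makes it possible to exercise reload behaviour without touching the filesystem or sending SIGHUP. A hedged usage sketch, assuming LoadString accepts the same YAML the file loader does:

```go
package main

import (
	"fmt"

	"github.com/sirupsen/logrus"
	"github.com/slackhq/nebula/config"
)

func main() {
	l := logrus.New()
	c := config.NewC(l)

	// Initial load from a string instead of a directory.
	if err := c.LoadString("listen:\n  port: 4242"); err != nil {
		l.Fatal(err)
	}

	// HasChanged compares the new settings against the shallow copy
	// taken just before the reload was applied.
	c.RegisterReloadCallback(func(c *config.C) {
		if c.HasChanged("listen.port") {
			fmt.Println("listen.port is now", c.GetInt("listen.port", 0))
		}
	})

	if err := c.ReloadConfigString("listen:\n  port: 4343"); err != nil {
		l.Fatal(err)
	}
}
```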
// GetString will get the string for k or return the default d if not found or invalid // GetString will get the string for k or return the default d if not found or invalid
func (c *C) GetString(k, d string) string { func (c *C) GetString(k, d string) string {
r := c.Get(k) r := c.Get(k)
@@ -201,7 +166,7 @@ func (c *C) GetStringSlice(k string, d []string) []string {
return d return d
} }
rv, ok := r.([]any) rv, ok := r.([]interface{})
if !ok { if !ok {
return d return d
} }
@@ -215,13 +180,13 @@ func (c *C) GetStringSlice(k string, d []string) []string {
} }
// GetMap will get the map for k or return the default d if not found or invalid // GetMap will get the map for k or return the default d if not found or invalid
func (c *C) GetMap(k string, d map[string]any) map[string]any { func (c *C) GetMap(k string, d map[interface{}]interface{}) map[interface{}]interface{} {
r := c.Get(k) r := c.Get(k)
if r == nil { if r == nil {
return d return d
} }
v, ok := r.(map[string]any) v, ok := r.(map[interface{}]interface{})
if !ok { if !ok {
return d return d
} }
@@ -240,15 +205,6 @@ func (c *C) GetInt(k string, d int) int {
return v return v
} }
// GetUint32 will get the uint32 for k or return the default d if not found or invalid
func (c *C) GetUint32(k string, d uint32) uint32 {
r := c.GetInt(k, int(d))
if r < 0 || uint64(r) > uint64(math.MaxUint32) {
return d
}
return uint32(r)
}
// GetBool will get the bool for k or return the default d if not found or invalid // GetBool will get the bool for k or return the default d if not found or invalid
func (c *C) GetBool(k string, d bool) bool { func (c *C) GetBool(k string, d bool) bool {
r := strings.ToLower(c.GetString(k, fmt.Sprintf("%v", d))) r := strings.ToLower(c.GetString(k, fmt.Sprintf("%v", d)))
@@ -266,22 +222,6 @@ func (c *C) GetBool(k string, d bool) bool {
return v return v
} }
func AsBool(v any) (value bool, ok bool) {
switch x := v.(type) {
case bool:
return x, true
case string:
switch x {
case "y", "yes":
return true, true
case "n", "no":
return false, true
}
}
return false, false
}
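AsBool normalizes values that YAML can hand back either as a real bool or as the strings y/yes/n/no, mirroring the string handling in GetBool above. A small hedged example of calling it directly (the sample values are illustrative):

```go
package main

import (
	"fmt"

	"github.com/slackhq/nebula/config"
)

func main() {
	for _, raw := range []any{true, "yes", "no", 42} {
		if v, ok := config.AsBool(raw); ok {
			fmt.Println(raw, "->", v)
		} else {
			fmt.Println(raw, "-> not a recognized bool")
		}
	}
}
```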
// GetDuration will get the duration for k or return the default d if not found or invalid // GetDuration will get the duration for k or return the default d if not found or invalid
func (c *C) GetDuration(k string, d time.Duration) time.Duration { func (c *C) GetDuration(k string, d time.Duration) time.Duration {
r := c.GetString(k, "") r := c.GetString(k, "")
@@ -292,7 +232,7 @@ func (c *C) GetDuration(k string, d time.Duration) time.Duration {
return v return v
} }
func (c *C) Get(k string) any { func (c *C) Get(k string) interface{} {
return c.get(k, c.Settings) return c.get(k, c.Settings)
} }
@@ -300,10 +240,10 @@ func (c *C) IsSet(k string) bool {
return c.get(k, c.Settings) != nil return c.get(k, c.Settings) != nil
} }
func (c *C) get(k string, v any) any { func (c *C) get(k string, v interface{}) interface{} {
parts := strings.Split(k, ".") parts := strings.Split(k, ".")
for _, p := range parts { for _, p := range parts {
m, ok := v.(map[string]any) m, ok := v.(map[interface{}]interface{})
if !ok { if !ok {
return nil return nil
} }
@@ -362,7 +302,7 @@ func (c *C) addFile(path string, direct bool) error {
} }
func (c *C) parseRaw(b []byte) error { func (c *C) parseRaw(b []byte) error {
var m map[string]any var m map[interface{}]interface{}
err := yaml.Unmarshal(b, &m) err := yaml.Unmarshal(b, &m)
if err != nil { if err != nil {
@@ -374,15 +314,15 @@ func (c *C) parseRaw(b []byte) error {
} }
func (c *C) parse() error { func (c *C) parse() error {
var m map[string]any var m map[interface{}]interface{}
for _, path := range c.files { for _, path := range c.files {
b, err := os.ReadFile(path) b, err := ioutil.ReadFile(path)
if err != nil { if err != nil {
return err return err
} }
var nm map[string]any var nm map[interface{}]interface{}
err = yaml.Unmarshal(b, &nm) err = yaml.Unmarshal(b, &nm)
if err != nil { if err != nil {
return err return err


@@ -1,55 +1,56 @@
package config package config
import ( import (
"io/ioutil"
"os" "os"
"path/filepath" "path/filepath"
"testing" "testing"
"time" "time"
"dario.cat/mergo" "github.com/slackhq/nebula/util"
"github.com/slackhq/nebula/test"
"github.com/stretchr/testify/assert" "github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"gopkg.in/yaml.v3"
) )
func TestConfig_Load(t *testing.T) { func TestConfig_Load(t *testing.T) {
l := test.NewLogger() l := util.NewTestLogger()
dir, err := os.MkdirTemp("", "config-test") dir, err := ioutil.TempDir("", "config-test")
// invalid yaml // invalid yaml
c := NewC(l) c := NewC(l)
os.WriteFile(filepath.Join(dir, "01.yaml"), []byte(" invalid yaml"), 0644) ioutil.WriteFile(filepath.Join(dir, "01.yaml"), []byte(" invalid yaml"), 0644)
require.EqualError(t, c.Load(dir), "yaml: unmarshal errors:\n line 1: cannot unmarshal !!str `invalid...` into map[string]interface {}") assert.EqualError(t, c.Load(dir), "yaml: unmarshal errors:\n line 1: cannot unmarshal !!str `invalid...` into map[interface {}]interface {}")
// simple multi config merge // simple multi config merge
c = NewC(l) c = NewC(l)
os.RemoveAll(dir) os.RemoveAll(dir)
os.Mkdir(dir, 0755) os.Mkdir(dir, 0755)
require.NoError(t, err) assert.Nil(t, err)
os.WriteFile(filepath.Join(dir, "01.yaml"), []byte("outer:\n inner: hi"), 0644) ioutil.WriteFile(filepath.Join(dir, "01.yaml"), []byte("outer:\n inner: hi"), 0644)
os.WriteFile(filepath.Join(dir, "02.yml"), []byte("outer:\n inner: override\nnew: hi"), 0644) ioutil.WriteFile(filepath.Join(dir, "02.yml"), []byte("outer:\n inner: override\nnew: hi"), 0644)
require.NoError(t, c.Load(dir)) assert.Nil(t, c.Load(dir))
expected := map[string]any{ expected := map[interface{}]interface{}{
"outer": map[string]any{ "outer": map[interface{}]interface{}{
"inner": "override", "inner": "override",
}, },
"new": "hi", "new": "hi",
} }
assert.Equal(t, expected, c.Settings) assert.Equal(t, expected, c.Settings)
//TODO: test symlinked file
//TODO: test symlinked directory
} }
func TestConfig_Get(t *testing.T) { func TestConfig_Get(t *testing.T) {
l := test.NewLogger() l := util.NewTestLogger()
// test simple type // test simple type
c := NewC(l) c := NewC(l)
c.Settings["firewall"] = map[string]any{"outbound": "hi"} c.Settings["firewall"] = map[interface{}]interface{}{"outbound": "hi"}
assert.Equal(t, "hi", c.Get("firewall.outbound")) assert.Equal(t, "hi", c.Get("firewall.outbound"))
// test complex type // test complex type
inner := []map[string]any{{"port": "1", "code": "2"}} inner := []map[interface{}]interface{}{{"port": "1", "code": "2"}}
c.Settings["firewall"] = map[string]any{"outbound": inner} c.Settings["firewall"] = map[interface{}]interface{}{"outbound": inner}
assert.EqualValues(t, inner, c.Get("firewall.outbound")) assert.EqualValues(t, inner, c.Get("firewall.outbound"))
// test missing // test missing
@@ -57,42 +58,42 @@ func TestConfig_Get(t *testing.T) {
} }
func TestConfig_GetStringSlice(t *testing.T) { func TestConfig_GetStringSlice(t *testing.T) {
l := test.NewLogger() l := util.NewTestLogger()
c := NewC(l) c := NewC(l)
c.Settings["slice"] = []any{"one", "two"} c.Settings["slice"] = []interface{}{"one", "two"}
assert.Equal(t, []string{"one", "two"}, c.GetStringSlice("slice", []string{})) assert.Equal(t, []string{"one", "two"}, c.GetStringSlice("slice", []string{}))
} }
func TestConfig_GetBool(t *testing.T) { func TestConfig_GetBool(t *testing.T) {
l := test.NewLogger() l := util.NewTestLogger()
c := NewC(l) c := NewC(l)
c.Settings["bool"] = true c.Settings["bool"] = true
assert.True(t, c.GetBool("bool", false)) assert.Equal(t, true, c.GetBool("bool", false))
c.Settings["bool"] = "true" c.Settings["bool"] = "true"
assert.True(t, c.GetBool("bool", false)) assert.Equal(t, true, c.GetBool("bool", false))
c.Settings["bool"] = false c.Settings["bool"] = false
assert.False(t, c.GetBool("bool", true)) assert.Equal(t, false, c.GetBool("bool", true))
c.Settings["bool"] = "false" c.Settings["bool"] = "false"
assert.False(t, c.GetBool("bool", true)) assert.Equal(t, false, c.GetBool("bool", true))
c.Settings["bool"] = "Y" c.Settings["bool"] = "Y"
assert.True(t, c.GetBool("bool", false)) assert.Equal(t, true, c.GetBool("bool", false))
c.Settings["bool"] = "yEs" c.Settings["bool"] = "yEs"
assert.True(t, c.GetBool("bool", false)) assert.Equal(t, true, c.GetBool("bool", false))
c.Settings["bool"] = "N" c.Settings["bool"] = "N"
assert.False(t, c.GetBool("bool", true)) assert.Equal(t, false, c.GetBool("bool", true))
c.Settings["bool"] = "nO" c.Settings["bool"] = "nO"
assert.False(t, c.GetBool("bool", true)) assert.Equal(t, false, c.GetBool("bool", true))
} }
func TestConfig_HasChanged(t *testing.T) { func TestConfig_HasChanged(t *testing.T) {
l := test.NewLogger() l := util.NewTestLogger()
// No reload has occurred, return false // No reload has occurred, return false
c := NewC(l) c := NewC(l)
c.Settings["test"] = "hi" c.Settings["test"] = "hi"
@@ -101,33 +102,33 @@ func TestConfig_HasChanged(t *testing.T) {
// Test key change // Test key change
c = NewC(l) c = NewC(l)
c.Settings["test"] = "hi" c.Settings["test"] = "hi"
c.oldSettings = map[string]any{"test": "no"} c.oldSettings = map[interface{}]interface{}{"test": "no"}
assert.True(t, c.HasChanged("test")) assert.True(t, c.HasChanged("test"))
assert.True(t, c.HasChanged("")) assert.True(t, c.HasChanged(""))
// No key change // No key change
c = NewC(l) c = NewC(l)
c.Settings["test"] = "hi" c.Settings["test"] = "hi"
c.oldSettings = map[string]any{"test": "hi"} c.oldSettings = map[interface{}]interface{}{"test": "hi"}
assert.False(t, c.HasChanged("test")) assert.False(t, c.HasChanged("test"))
assert.False(t, c.HasChanged("")) assert.False(t, c.HasChanged(""))
} }
func TestConfig_ReloadConfig(t *testing.T) { func TestConfig_ReloadConfig(t *testing.T) {
l := test.NewLogger() l := util.NewTestLogger()
done := make(chan bool, 1) done := make(chan bool, 1)
dir, err := os.MkdirTemp("", "config-test") dir, err := ioutil.TempDir("", "config-test")
require.NoError(t, err) assert.Nil(t, err)
os.WriteFile(filepath.Join(dir, "01.yaml"), []byte("outer:\n inner: hi"), 0644) ioutil.WriteFile(filepath.Join(dir, "01.yaml"), []byte("outer:\n inner: hi"), 0644)
c := NewC(l) c := NewC(l)
require.NoError(t, c.Load(dir)) assert.Nil(t, c.Load(dir))
assert.False(t, c.HasChanged("outer.inner")) assert.False(t, c.HasChanged("outer.inner"))
assert.False(t, c.HasChanged("outer")) assert.False(t, c.HasChanged("outer"))
assert.False(t, c.HasChanged("")) assert.False(t, c.HasChanged(""))
os.WriteFile(filepath.Join(dir, "01.yaml"), []byte("outer:\n inner: ho"), 0644) ioutil.WriteFile(filepath.Join(dir, "01.yaml"), []byte("outer:\n inner: ho"), 0644)
c.RegisterReloadCallback(func(c *C) { c.RegisterReloadCallback(func(c *C) {
done <- true done <- true
@@ -146,77 +147,3 @@ func TestConfig_ReloadConfig(t *testing.T) {
} }
} }
// Ensure mergo merges are done the way we expect.
// This is needed to test for potential regressions, like:
// - https://github.com/imdario/mergo/issues/187
func TestConfig_MergoMerge(t *testing.T) {
configs := [][]byte{
[]byte(`
listen:
port: 1234
`),
[]byte(`
firewall:
inbound:
- port: 443
proto: tcp
groups:
- server
- port: 443
proto: tcp
groups:
- webapp
`),
[]byte(`
listen:
host: 0.0.0.0
port: 4242
firewall:
outbound:
- port: any
proto: any
host: any
inbound:
- port: any
proto: icmp
host: any
`),
}
var m map[string]any
// merge the same way config.parse() merges
for _, b := range configs {
var nm map[string]any
err := yaml.Unmarshal(b, &nm)
require.NoError(t, err)
// We need to use WithAppendSlice so that firewall rules in separate
// files are appended together
err = mergo.Merge(&nm, m, mergo.WithAppendSlice)
m = nm
require.NoError(t, err)
}
t.Logf("Merged Config: %#v", m)
mYaml, err := yaml.Marshal(m)
require.NoError(t, err)
t.Logf("Merged Config as YAML:\n%s", mYaml)
// If a bug is present, some items might be replaced instead of merged like we expect
expected := map[string]any{
"firewall": map[string]any{
"inbound": []any{
map[string]any{"host": "any", "port": "any", "proto": "icmp"},
map[string]any{"groups": []any{"server"}, "port": 443, "proto": "tcp"},
map[string]any{"groups": []any{"webapp"}, "port": 443, "proto": "tcp"}},
"outbound": []any{
map[string]any{"host": "any", "port": "any", "proto": "any"}}},
"listen": map[string]any{
"host": "0.0.0.0",
"port": 4242,
},
}
assert.Equal(t, expected, m)
}


@@ -1,153 +1,151 @@
package nebula package nebula
import ( import (
"bytes"
"context" "context"
"encoding/binary"
"fmt"
"net/netip"
"sync" "sync"
"sync/atomic"
"time" "time"
"github.com/rcrowley/go-metrics"
"github.com/sirupsen/logrus" "github.com/sirupsen/logrus"
"github.com/slackhq/nebula/cert"
"github.com/slackhq/nebula/config"
"github.com/slackhq/nebula/header" "github.com/slackhq/nebula/header"
"github.com/slackhq/nebula/iputil"
) )
type trafficDecision int // TODO: incount and outcount are intended as a shortcut to locking the mutexes for every single packet
// and something like every 10 packets we could lock, send 10, then unlock for a moment
const (
doNothing trafficDecision = 0
deleteTunnel trafficDecision = 1 // delete the hostinfo on our side, do not notify the remote
closeTunnel trafficDecision = 2 // delete the hostinfo and notify the remote
swapPrimary trafficDecision = 3
migrateRelays trafficDecision = 4
tryRehandshake trafficDecision = 5
sendTestPacket trafficDecision = 6
)
type connectionManager struct { type connectionManager struct {
// relayUsed holds which relay localIndexs are in use
relayUsed map[uint32]struct{}
relayUsedLock *sync.RWMutex
hostMap *HostMap hostMap *HostMap
trafficTimer *LockingTimerWheel[uint32] in map[iputil.VpnIp]struct{}
inLock *sync.RWMutex
inCount int
out map[iputil.VpnIp]struct{}
outLock *sync.RWMutex
outCount int
TrafficTimer *SystemTimerWheel
intf *Interface intf *Interface
punchy *Punchy
// Configuration settings pendingDeletion map[iputil.VpnIp]int
checkInterval time.Duration pendingDeletionLock *sync.RWMutex
pendingDeletionInterval time.Duration pendingDeletionTimer *SystemTimerWheel
inactivityTimeout atomic.Int64
dropInactive atomic.Bool
metricsTxPunchy metrics.Counter checkInterval int
pendingDeletionInterval int
l *logrus.Logger l *logrus.Logger
// I wanted to call one matLock
} }
func newConnectionManagerFromConfig(l *logrus.Logger, c *config.C, hm *HostMap, p *Punchy) *connectionManager { func newConnectionManager(ctx context.Context, l *logrus.Logger, intf *Interface, checkInterval, pendingDeletionInterval int) *connectionManager {
cm := &connectionManager{ nc := &connectionManager{
hostMap: hm, hostMap: intf.hostMap,
l: l, in: make(map[iputil.VpnIp]struct{}),
punchy: p, inLock: &sync.RWMutex{},
relayUsed: make(map[uint32]struct{}), inCount: 0,
relayUsedLock: &sync.RWMutex{}, out: make(map[iputil.VpnIp]struct{}),
metricsTxPunchy: metrics.GetOrRegisterCounter("messages.tx.punchy", nil), outLock: &sync.RWMutex{},
outCount: 0,
TrafficTimer: NewSystemTimerWheel(time.Millisecond*500, time.Second*60),
intf: intf,
pendingDeletion: make(map[iputil.VpnIp]int),
pendingDeletionLock: &sync.RWMutex{},
pendingDeletionTimer: NewSystemTimerWheel(time.Millisecond*500, time.Second*60),
checkInterval: checkInterval,
pendingDeletionInterval: pendingDeletionInterval,
l: l,
} }
nc.Start(ctx)
cm.reload(c, true) return nc
c.RegisterReloadCallback(func(c *config.C) {
cm.reload(c, false)
})
return cm
} }
func (cm *connectionManager) reload(c *config.C, initial bool) { func (n *connectionManager) In(ip iputil.VpnIp) {
if initial { n.inLock.RLock()
cm.checkInterval = time.Duration(c.GetInt("timers.connection_alive_interval", 5)) * time.Second
cm.pendingDeletionInterval = time.Duration(c.GetInt("timers.pending_deletion_interval", 10)) * time.Second
// We want at least a minimum resolution of 500ms per tick so that we can hit these intervals
// pretty close to their configured duration.
// The inactivity duration is checked each time a hostinfo ticks through so we don't need the wheel to contain it.
minDuration := min(time.Millisecond*500, cm.checkInterval, cm.pendingDeletionInterval)
maxDuration := max(cm.checkInterval, cm.pendingDeletionInterval)
cm.trafficTimer = NewLockingTimerWheel[uint32](minDuration, maxDuration)
}
if initial || c.HasChanged("tunnels.inactivity_timeout") {
old := cm.getInactivityTimeout()
cm.inactivityTimeout.Store((int64)(c.GetDuration("tunnels.inactivity_timeout", 10*time.Minute)))
if !initial {
cm.l.WithField("oldDuration", old).
WithField("newDuration", cm.getInactivityTimeout()).
Info("Inactivity timeout has changed")
}
}
if initial || c.HasChanged("tunnels.drop_inactive") {
old := cm.dropInactive.Load()
cm.dropInactive.Store(c.GetBool("tunnels.drop_inactive", false))
if !initial {
cm.l.WithField("oldBool", old).
WithField("newBool", cm.dropInactive.Load()).
Info("Drop inactive setting has changed")
}
}
}
func (cm *connectionManager) getInactivityTimeout() time.Duration {
return (time.Duration)(cm.inactivityTimeout.Load())
}
func (cm *connectionManager) In(h *HostInfo) {
h.in.Store(true)
}
func (cm *connectionManager) Out(h *HostInfo) {
h.out.Store(true)
}
func (cm *connectionManager) RelayUsed(localIndex uint32) {
cm.relayUsedLock.RLock()
// If this already exists, return // If this already exists, return
if _, ok := cm.relayUsed[localIndex]; ok { if _, ok := n.in[ip]; ok {
cm.relayUsedLock.RUnlock() n.inLock.RUnlock()
return return
} }
cm.relayUsedLock.RUnlock() n.inLock.RUnlock()
cm.relayUsedLock.Lock() n.inLock.Lock()
cm.relayUsed[localIndex] = struct{}{} n.in[ip] = struct{}{}
cm.relayUsedLock.Unlock() n.inLock.Unlock()
} }
// getAndResetTrafficCheck returns if there was any inbound or outbound traffic within the last tick and func (n *connectionManager) Out(ip iputil.VpnIp) {
// resets the state for this local index n.outLock.RLock()
func (cm *connectionManager) getAndResetTrafficCheck(h *HostInfo, now time.Time) (bool, bool) { // If this already exists, return
in := h.in.Swap(false) if _, ok := n.out[ip]; ok {
out := h.out.Swap(false) n.outLock.RUnlock()
if in || out { return
h.lastUsed = now
} }
return in, out n.outLock.RUnlock()
} n.outLock.Lock()
// double check since we dropped the lock temporarily
// AddTrafficWatch must be called for every new HostInfo. if _, ok := n.out[ip]; ok {
// We will continue to monitor the HostInfo until the tunnel is dropped. n.outLock.Unlock()
func (cm *connectionManager) AddTrafficWatch(h *HostInfo) { return
if h.out.Swap(true) == false {
cm.trafficTimer.Add(h.localIndexId, cm.checkInterval)
} }
n.out[ip] = struct{}{}
n.AddTrafficWatch(ip, n.checkInterval)
n.outLock.Unlock()
} }
func (cm *connectionManager) Start(ctx context.Context) { func (n *connectionManager) CheckIn(vpnIp iputil.VpnIp) bool {
clockSource := time.NewTicker(cm.trafficTimer.t.tickDuration) n.inLock.RLock()
if _, ok := n.in[vpnIp]; ok {
n.inLock.RUnlock()
return true
}
n.inLock.RUnlock()
return false
}
func (n *connectionManager) ClearIP(ip iputil.VpnIp) {
n.inLock.Lock()
n.outLock.Lock()
delete(n.in, ip)
delete(n.out, ip)
n.inLock.Unlock()
n.outLock.Unlock()
}
func (n *connectionManager) ClearPendingDeletion(ip iputil.VpnIp) {
n.pendingDeletionLock.Lock()
delete(n.pendingDeletion, ip)
n.pendingDeletionLock.Unlock()
}
func (n *connectionManager) AddPendingDeletion(ip iputil.VpnIp) {
n.pendingDeletionLock.Lock()
if _, ok := n.pendingDeletion[ip]; ok {
n.pendingDeletion[ip] += 1
} else {
n.pendingDeletion[ip] = 0
}
n.pendingDeletionTimer.Add(ip, time.Second*time.Duration(n.pendingDeletionInterval))
n.pendingDeletionLock.Unlock()
}
func (n *connectionManager) checkPendingDeletion(ip iputil.VpnIp) bool {
n.pendingDeletionLock.RLock()
if _, ok := n.pendingDeletion[ip]; ok {
n.pendingDeletionLock.RUnlock()
return true
}
n.pendingDeletionLock.RUnlock()
return false
}
func (n *connectionManager) AddTrafficWatch(vpnIp iputil.VpnIp, seconds int) {
n.TrafficTimer.Add(vpnIp, time.Second*time.Duration(seconds))
}
func (n *connectionManager) Start(ctx context.Context) {
go n.Run(ctx)
}
func (n *connectionManager) Run(ctx context.Context) {
clockSource := time.NewTicker(500 * time.Millisecond)
defer clockSource.Stop() defer clockSource.Stop()
p := []byte("") p := []byte("")
@@ -158,387 +156,160 @@ func (cm *connectionManager) Start(ctx context.Context) {
select { select {
case <-ctx.Done(): case <-ctx.Done():
return return
case now := <-clockSource.C: case now := <-clockSource.C:
cm.trafficTimer.Advance(now) n.HandleMonitorTick(now, p, nb, out)
for { n.HandleDeletionTick(now)
localIndex, has := cm.trafficTimer.Purge()
if !has {
break
}
cm.doTrafficCheck(localIndex, p, nb, out, now)
}
} }
} }
} }
func (cm *connectionManager) doTrafficCheck(localIndex uint32, p, nb, out []byte, now time.Time) { func (n *connectionManager) HandleMonitorTick(now time.Time, p, nb, out []byte) {
decision, hostinfo, primary := cm.makeTrafficDecision(localIndex, now) n.TrafficTimer.advance(now)
for {
switch decision { ep := n.TrafficTimer.Purge()
case deleteTunnel: if ep == nil {
if cm.hostMap.DeleteHostInfo(hostinfo) { break
// Only clearing the lighthouse cache if this is the last hostinfo for this vpn ip in the hostmap
cm.intf.lightHouse.DeleteVpnAddrs(hostinfo.vpnAddrs)
} }
case closeTunnel: vpnIp := ep.(iputil.VpnIp)
cm.intf.sendCloseTunnel(hostinfo)
cm.intf.closeTunnel(hostinfo)
case swapPrimary: // Check for traffic coming back in from this host.
cm.swapPrimary(hostinfo, primary) traf := n.CheckIn(vpnIp)
case migrateRelays: hostinfo, err := n.hostMap.QueryVpnIp(vpnIp)
cm.migrateRelayUsed(hostinfo, primary) if err != nil {
n.l.Debugf("Not found in hostmap: %s", vpnIp)
case tryRehandshake: if !n.intf.disconnectInvalid {
cm.tryRehandshake(hostinfo) n.ClearIP(vpnIp)
n.ClearPendingDeletion(vpnIp)
case sendTestPacket:
cm.intf.SendMessageToHostInfo(header.Test, header.TestRequest, hostinfo, p, nb, out)
}
cm.resetRelayTrafficCheck(hostinfo)
}
func (cm *connectionManager) resetRelayTrafficCheck(hostinfo *HostInfo) {
if hostinfo != nil {
cm.relayUsedLock.Lock()
defer cm.relayUsedLock.Unlock()
// No need to migrate any relays, delete usage info now.
for _, idx := range hostinfo.relayState.CopyRelayForIdxs() {
delete(cm.relayUsed, idx)
}
}
}
func (cm *connectionManager) migrateRelayUsed(oldhostinfo, newhostinfo *HostInfo) {
relayFor := oldhostinfo.relayState.CopyAllRelayFor()
for _, r := range relayFor {
existing, ok := newhostinfo.relayState.QueryRelayForByIp(r.PeerAddr)
var index uint32
var relayFrom netip.Addr
var relayTo netip.Addr
switch {
case ok:
switch existing.State {
case Established, PeerRequested, Disestablished:
// This relay already exists in newhostinfo, then do nothing.
continue continue
case Requested:
// The relay exists in a Requested state; re-send the request
index = existing.LocalIndex
switch r.Type {
case TerminalType:
relayFrom = cm.intf.myVpnAddrs[0]
relayTo = existing.PeerAddr
case ForwardingType:
relayFrom = existing.PeerAddr
relayTo = newhostinfo.vpnAddrs[0]
default:
// should never happen
panic(fmt.Sprintf("Migrating unknown relay type: %v", r.Type))
}
}
case !ok:
cm.relayUsedLock.RLock()
if _, relayUsed := cm.relayUsed[r.LocalIndex]; !relayUsed {
// The relay hasn't been used; don't migrate it.
cm.relayUsedLock.RUnlock()
continue
}
cm.relayUsedLock.RUnlock()
// The relay doesn't exist at all; create some relay state and send the request.
var err error
index, err = AddRelay(cm.l, newhostinfo, cm.hostMap, r.PeerAddr, nil, r.Type, Requested)
if err != nil {
cm.l.WithError(err).Error("failed to migrate relay to new hostinfo")
continue
}
switch r.Type {
case TerminalType:
relayFrom = cm.intf.myVpnAddrs[0]
relayTo = r.PeerAddr
case ForwardingType:
relayFrom = r.PeerAddr
relayTo = newhostinfo.vpnAddrs[0]
default:
// should never happen
panic(fmt.Sprintf("Migrating unknown relay type: %v", r.Type))
} }
} }
// Send a CreateRelayRequest to the peer. if n.handleInvalidCertificate(now, vpnIp, hostinfo) {
req := NebulaControl{
Type: NebulaControl_CreateRelayRequest,
InitiatorRelayIndex: index,
}
switch newhostinfo.GetCert().Certificate.Version() {
case cert.Version1:
if !relayFrom.Is4() {
cm.l.Error("can not migrate v1 relay with a v6 network because the relay is not running a current nebula version")
continue
}
if !relayTo.Is4() {
cm.l.Error("can not migrate v1 relay with a v6 remote network because the relay is not running a current nebula version")
continue
}
b := relayFrom.As4()
req.OldRelayFromAddr = binary.BigEndian.Uint32(b[:])
b = relayTo.As4()
req.OldRelayToAddr = binary.BigEndian.Uint32(b[:])
case cert.Version2:
req.RelayFromAddr = netAddrToProtoAddr(relayFrom)
req.RelayToAddr = netAddrToProtoAddr(relayTo)
default:
newhostinfo.logger(cm.l).Error("Unknown certificate version found while attempting to migrate relay")
continue continue
} }
msg, err := req.Marshal() // If we saw incoming packets from this ip and peer's certificate is not
// expired, just ignore.
if traf {
if n.l.Level >= logrus.DebugLevel {
n.l.WithField("vpnIp", vpnIp).
WithField("tunnelCheck", m{"state": "alive", "method": "passive"}).
Debug("Tunnel status")
}
n.ClearIP(vpnIp)
n.ClearPendingDeletion(vpnIp)
continue
}
hostinfo.logger(n.l).
WithField("tunnelCheck", m{"state": "testing", "method": "active"}).
Debug("Tunnel status")
if hostinfo != nil && hostinfo.ConnectionState != nil {
// Send a test packet to trigger an authenticated tunnel test, this should suss out any lingering tunnel issues
n.intf.SendMessageToVpnIp(header.Test, header.TestRequest, vpnIp, p, nb, out)
} else {
hostinfo.logger(n.l).Debugf("Hostinfo sadness: %s", vpnIp)
}
n.AddPendingDeletion(vpnIp)
}
}
func (n *connectionManager) HandleDeletionTick(now time.Time) {
n.pendingDeletionTimer.advance(now)
for {
ep := n.pendingDeletionTimer.Purge()
if ep == nil {
break
}
vpnIp := ep.(iputil.VpnIp)
hostinfo, err := n.hostMap.QueryVpnIp(vpnIp)
if err != nil { if err != nil {
cm.l.WithError(err).Error("failed to marshal Control message to migrate relay") n.l.Debugf("Not found in hostmap: %s", vpnIp)
} else {
cm.intf.SendMessageToHostInfo(header.Control, 0, newhostinfo, msg, make([]byte, 12), make([]byte, mtu))
cm.l.WithFields(logrus.Fields{
"relayFrom": req.RelayFromAddr,
"relayTo": req.RelayToAddr,
"initiatorRelayIndex": req.InitiatorRelayIndex,
"responderRelayIndex": req.ResponderRelayIndex,
"vpnAddrs": newhostinfo.vpnAddrs}).
Info("send CreateRelayRequest")
}
}
}
func (cm *connectionManager) makeTrafficDecision(localIndex uint32, now time.Time) (trafficDecision, *HostInfo, *HostInfo) { if !n.intf.disconnectInvalid {
// Read lock the main hostmap to order decisions based on tunnels being the primary tunnel n.ClearIP(vpnIp)
cm.hostMap.RLock() n.ClearPendingDeletion(vpnIp)
defer cm.hostMap.RUnlock() continue
hostinfo := cm.hostMap.Indexes[localIndex]
if hostinfo == nil {
cm.l.WithField("localIndex", localIndex).Debugln("Not found in hostmap")
return doNothing, nil, nil
}
if cm.isInvalidCertificate(now, hostinfo) {
return closeTunnel, hostinfo, nil
}
primary := cm.hostMap.Hosts[hostinfo.vpnAddrs[0]]
mainHostInfo := true
if primary != nil && primary != hostinfo {
mainHostInfo = false
}
// Check for traffic on this hostinfo
inTraffic, outTraffic := cm.getAndResetTrafficCheck(hostinfo, now)
// A hostinfo is determined alive if there is incoming traffic
if inTraffic {
decision := doNothing
if cm.l.Level >= logrus.DebugLevel {
hostinfo.logger(cm.l).
WithField("tunnelCheck", m{"state": "alive", "method": "passive"}).
Debug("Tunnel status")
}
hostinfo.pendingDeletion.Store(false)
if mainHostInfo {
decision = tryRehandshake
} else {
if cm.shouldSwapPrimary(hostinfo) {
decision = swapPrimary
} else {
// migrate the relays to the primary, if in use.
decision = migrateRelays
} }
} }
cm.trafficTimer.Add(hostinfo.localIndexId, cm.checkInterval) if n.handleInvalidCertificate(now, vpnIp, hostinfo) {
continue
if !outTraffic {
// Send a punch packet to keep the NAT state alive
cm.sendPunch(hostinfo)
} }
return decision, hostinfo, primary // If we saw incoming packets from this ip and peer's certificate is not
} // expired, just ignore.
traf := n.CheckIn(vpnIp)
if hostinfo.pendingDeletion.Load() { if traf {
// We have already sent a test packet and nothing was returned, this hostinfo is dead n.l.WithField("vpnIp", vpnIp).
hostinfo.logger(cm.l). WithField("tunnelCheck", m{"state": "alive", "method": "active"}).
WithField("tunnelCheck", m{"state": "dead", "method": "active"}).
Info("Tunnel status")
return deleteTunnel, hostinfo, nil
}
decision := doNothing
if hostinfo != nil && hostinfo.ConnectionState != nil && mainHostInfo {
if !outTraffic {
inactiveFor, isInactive := cm.isInactive(hostinfo, now)
if isInactive {
// Tunnel is inactive, tear it down
hostinfo.logger(cm.l).
WithField("inactiveDuration", inactiveFor).
WithField("primary", mainHostInfo).
Info("Dropping tunnel due to inactivity")
return closeTunnel, hostinfo, primary
}
// If we aren't sending or receiving traffic then it's an unused tunnel and we don't need to test the tunnel.
// Just maintain NAT state if configured to do so.
cm.sendPunch(hostinfo)
cm.trafficTimer.Add(hostinfo.localIndexId, cm.checkInterval)
return doNothing, nil, nil
}
if cm.punchy.GetTargetEverything() {
// This is similar to the old punchy behavior with a slight optimization.
// We aren't receiving traffic but we are sending it, punch on all known
// ips in case we need to re-prime NAT state
cm.sendPunch(hostinfo)
}
if cm.l.Level >= logrus.DebugLevel {
hostinfo.logger(cm.l).
WithField("tunnelCheck", m{"state": "testing", "method": "active"}).
Debug("Tunnel status") Debug("Tunnel status")
n.ClearIP(vpnIp)
n.ClearPendingDeletion(vpnIp)
continue
} }
// Send a test packet to trigger an authenticated tunnel test, this should suss out any lingering tunnel issues // If it comes around on deletion wheel and hasn't resolved itself, delete
decision = sendTestPacket if n.checkPendingDeletion(vpnIp) {
cn := ""
if hostinfo.ConnectionState != nil && hostinfo.ConnectionState.peerCert != nil {
cn = hostinfo.ConnectionState.peerCert.Details.Name
}
hostinfo.logger(n.l).
WithField("tunnelCheck", m{"state": "dead", "method": "active"}).
WithField("certName", cn).
Info("Tunnel status")
} else { n.ClearIP(vpnIp)
if cm.l.Level >= logrus.DebugLevel { n.ClearPendingDeletion(vpnIp)
hostinfo.logger(cm.l).Debugf("Hostinfo sadness") // TODO: This is only here to let tests work. Should do proper mocking
if n.intf.lightHouse != nil {
n.intf.lightHouse.DeleteVpnIp(vpnIp)
}
n.hostMap.DeleteHostInfo(hostinfo)
} else {
n.ClearIP(vpnIp)
n.ClearPendingDeletion(vpnIp)
} }
} }
hostinfo.pendingDeletion.Store(true)
cm.trafficTimer.Add(hostinfo.localIndexId, cm.pendingDeletionInterval)
return decision, hostinfo, nil
} }
func (cm *connectionManager) isInactive(hostinfo *HostInfo, now time.Time) (time.Duration, bool) { // handleInvalidCertificates will destroy a tunnel if pki.disconnect_invalid is true and the certificate is no longer valid
if cm.dropInactive.Load() == false { func (n *connectionManager) handleInvalidCertificate(now time.Time, vpnIp iputil.VpnIp, hostinfo *HostInfo) bool {
// We aren't configured to drop inactive tunnels if !n.intf.disconnectInvalid {
return 0, false
}
inactiveDuration := now.Sub(hostinfo.lastUsed)
if inactiveDuration < cm.getInactivityTimeout() {
// It's not considered inactive
return inactiveDuration, false
}
// The tunnel is inactive
return inactiveDuration, true
}
func (cm *connectionManager) shouldSwapPrimary(current *HostInfo) bool {
// The primary tunnel is the most recent handshake to complete locally and should work entirely fine.
// If we are here then we have multiple tunnels for a host pair and neither side believes the same tunnel is primary.
// Let's sort this out.
// Only one side should swap because if both swap then we may never resolve to a single tunnel.
// vpn addr is static across all tunnels for this host pair so lets
// use that to determine if we should consider swapping.
if current.vpnAddrs[0].Compare(cm.intf.myVpnAddrs[0]) < 0 {
// Their primary vpn addr is less than mine. Do not swap.
return false return false
} }
crt := cm.intf.pki.getCertState().getCertificate(current.ConnectionState.myCert.Version())
// If this tunnel is using the latest certificate then we should swap it to primary for a bit and see if things
// settle down.
return bytes.Equal(current.ConnectionState.myCert.Signature(), crt.Signature())
}
func (cm *connectionManager) swapPrimary(current, primary *HostInfo) {
cm.hostMap.Lock()
// Make sure the primary is still the same after the write lock. This avoids a race with a rehandshake.
if cm.hostMap.Hosts[current.vpnAddrs[0]] == primary {
cm.hostMap.unlockedMakePrimary(current)
}
cm.hostMap.Unlock()
}
// isInvalidCertificate will check if we should destroy a tunnel if pki.disconnect_invalid is true and
// the certificate is no longer valid. Block listed certificates will skip the pki.disconnect_invalid
// check and return true.
func (cm *connectionManager) isInvalidCertificate(now time.Time, hostinfo *HostInfo) bool {
remoteCert := hostinfo.GetCert() remoteCert := hostinfo.GetCert()
if remoteCert == nil { if remoteCert == nil {
return false return false
} }
caPool := cm.intf.pki.GetCAPool() valid, err := remoteCert.Verify(now, n.intf.caPool)
err := caPool.VerifyCachedCertificate(now, remoteCert) if valid {
if err == nil {
return false return false
} }
if !cm.intf.disconnectInvalid.Load() && err != cert.ErrBlockListed { fingerprint, _ := remoteCert.Sha256Sum()
// Block listed certificates should always be disconnected n.l.WithField("vpnIp", vpnIp).WithError(err).
return false WithField("certName", remoteCert.Details.Name).
} WithField("fingerprint", fingerprint).
hostinfo.logger(cm.l).WithError(err).
WithField("fingerprint", remoteCert.Fingerprint).
Info("Remote certificate is no longer valid, tearing down the tunnel") Info("Remote certificate is no longer valid, tearing down the tunnel")
// Inform the remote and close the tunnel locally
n.intf.sendCloseTunnel(hostinfo)
n.intf.closeTunnel(hostinfo, false)
n.ClearIP(vpnIp)
n.ClearPendingDeletion(vpnIp)
return true return true
} }
func (cm *connectionManager) sendPunch(hostinfo *HostInfo) {
if !cm.punchy.GetPunch() {
// Punching is disabled
return
}
if cm.intf.lightHouse.IsAnyLighthouseAddr(hostinfo.vpnAddrs) {
// Do not punch to lighthouses, we assume our lighthouse update interval is good enough.
// In the event the update interval is not sufficient to maintain NAT state then a publicly available lighthouse
// would lose the ability to notify us and punchy.respond would become unreliable.
return
}
if cm.punchy.GetTargetEverything() {
hostinfo.remotes.ForEach(cm.hostMap.GetPreferredRanges(), func(addr netip.AddrPort, preferred bool) {
cm.metricsTxPunchy.Inc(1)
cm.intf.outside.WriteTo([]byte{1}, addr)
})
} else if hostinfo.remote.IsValid() {
cm.metricsTxPunchy.Inc(1)
cm.intf.outside.WriteTo([]byte{1}, hostinfo.remote)
}
}
func (cm *connectionManager) tryRehandshake(hostinfo *HostInfo) {
cs := cm.intf.pki.getCertState()
curCrt := hostinfo.ConnectionState.myCert
myCrt := cs.getCertificate(curCrt.Version())
if curCrt.Version() >= cs.initiatingVersion && bytes.Equal(curCrt.Signature(), myCrt.Signature()) == true {
// The current tunnel is using the latest certificate and version, no need to rehandshake.
return
}
cm.l.WithField("vpnAddrs", hostinfo.vpnAddrs).
WithField("reason", "local certificate is not current").
Info("Re-handshaking with remote")
cm.intf.handshakeManager.StartHandshake(hostinfo.vpnAddrs[0], nil)
}
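Relays, punching, rehandshakes and certificate checks aside, the reworked manager boils each tick down to a per-tunnel decision driven by three atomics: in, out and pendingDeletion. The sketch below is a simplified, standalone illustration of that state machine, not the actual types.

```go
package main

import (
	"fmt"
	"sync/atomic"
	"time"
)

type decision int

const (
	doNothing decision = iota
	sendTestPacket
	deleteTunnel
)

// tunnel stands in for HostInfo: the datapath flips in/out, and the
// traffic check swaps them back to false on every tick.
type tunnel struct {
	in, out, pendingDeletion atomic.Bool
	lastUsed                 time.Time
}

func check(t *tunnel, now time.Time) decision {
	in, out := t.in.Swap(false), t.out.Swap(false)
	if in || out {
		t.lastUsed = now
	}

	if in {
		// Inbound traffic proves the tunnel is alive.
		t.pendingDeletion.Store(false)
		return doNothing
	}
	if t.pendingDeletion.Load() {
		// We already probed on a previous tick and heard nothing back.
		return deleteTunnel
	}
	// Probe the peer and check again after the pending-deletion interval.
	t.pendingDeletion.Store(true)
	return sendTestPacket
}

func main() {
	tun := &tunnel{}
	tun.out.Store(true) // we sent traffic but nothing came back

	fmt.Println(check(tun, time.Now())) // 1 (sendTestPacket)
	fmt.Println(check(tun, time.Now())) // 2 (deleteTunnel)
}
```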


@@ -1,297 +1,162 @@
package nebula package nebula
import ( import (
"context"
"crypto/ed25519" "crypto/ed25519"
"crypto/rand" "crypto/rand"
"net/netip" "net"
"testing" "testing"
"time" "time"
"github.com/flynn/noise" "github.com/flynn/noise"
"github.com/slackhq/nebula/cert" "github.com/slackhq/nebula/cert"
"github.com/slackhq/nebula/config" "github.com/slackhq/nebula/iputil"
"github.com/slackhq/nebula/test"
"github.com/slackhq/nebula/udp" "github.com/slackhq/nebula/udp"
"github.com/slackhq/nebula/util"
"github.com/stretchr/testify/assert" "github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
) )
func newTestLighthouse() *LightHouse { var vpnIp iputil.VpnIp
lh := &LightHouse{
l: test.NewLogger(),
addrMap: map[netip.Addr]*RemoteList{},
queryChan: make(chan netip.Addr, 10),
}
lighthouses := []netip.Addr{}
staticList := map[netip.Addr]struct{}{}
lh.lighthouses.Store(&lighthouses)
lh.staticList.Store(&staticList)
return lh
}
func Test_NewConnectionManagerTest(t *testing.T) { func Test_NewConnectionManagerTest(t *testing.T) {
l := test.NewLogger() l := util.NewTestLogger()
//_, tuncidr, _ := net.ParseCIDR("1.1.1.1/24") //_, tuncidr, _ := net.ParseCIDR("1.1.1.1/24")
localrange := netip.MustParsePrefix("10.1.1.1/24") _, vpncidr, _ := net.ParseCIDR("172.1.1.1/24")
vpnIp := netip.MustParseAddr("172.1.1.2") _, localrange, _ := net.ParseCIDR("10.1.1.1/24")
preferredRanges := []netip.Prefix{localrange} vpnIp = iputil.Ip2VpnIp(net.ParseIP("172.1.1.2"))
preferredRanges := []*net.IPNet{localrange}
// Very incomplete mock objects // Very incomplete mock objects
hostMap := newHostMap(l) hostMap := NewHostMap(l, "test", vpncidr, preferredRanges)
hostMap.preferredRanges.Store(&preferredRanges)
cs := &CertState{ cs := &CertState{
initiatingVersion: cert.Version1, rawCertificate: []byte{},
privateKey: []byte{}, privateKey: []byte{},
v1Cert: &dummyCert{version: cert.Version1}, certificate: &cert.NebulaCertificate{},
v1HandshakeBytes: []byte{}, rawCertificateNoKey: []byte{},
} }
lh := newTestLighthouse() lh := NewLightHouse(l, false, &net.IPNet{IP: net.IP{0, 0, 0, 0}, Mask: net.IPMask{0, 0, 0, 0}}, []iputil.VpnIp{}, 1000, 0, &udp.Conn{}, false, 1, false)
ifce := &Interface{ ifce := &Interface{
hostMap: hostMap, hostMap: hostMap,
inside: &test.NoopTun{}, inside: &Tun{},
outside: &udp.NoopConn{}, outside: &udp.Conn{},
certState: cs,
firewall: &Firewall{}, firewall: &Firewall{},
lightHouse: lh, lightHouse: lh,
pki: &PKI{}, handshakeManager: NewHandshakeManager(l, vpncidr, preferredRanges, hostMap, lh, &udp.Conn{}, defaultHandshakeConfig),
handshakeManager: NewHandshakeManager(l, hostMap, lh, &udp.NoopConn{}, defaultHandshakeConfig),
l: l, l: l,
} }
ifce.pki.cs.Store(cs) now := time.Now()
// Create manager // Create manager
conf := config.NewC(l) ctx, cancel := context.WithCancel(context.Background())
punchy := NewPunchyFromConfig(l, conf) defer cancel()
nc := newConnectionManagerFromConfig(l, conf, hostMap, punchy) nc := newConnectionManager(ctx, l, ifce, 5, 10)
nc.intf = ifce
p := []byte("") p := []byte("")
nb := make([]byte, 12, 12) nb := make([]byte, 12, 12)
out := make([]byte, mtu) out := make([]byte, mtu)
nc.HandleMonitorTick(now, p, nb, out)
// Add an ip we have established a connection w/ to hostmap // Add an ip we have established a connection w/ to hostmap
hostinfo := &HostInfo{ hostinfo := nc.hostMap.AddVpnIp(vpnIp)
vpnAddrs: []netip.Addr{vpnIp},
localIndexId: 1099,
remoteIndexId: 9901,
}
hostinfo.ConnectionState = &ConnectionState{ hostinfo.ConnectionState = &ConnectionState{
myCert: &dummyCert{version: cert.Version1}, certState: cs,
H: &noise.HandshakeState{}, H: &noise.HandshakeState{},
} }
nc.hostMap.unlockedAddHostInfo(hostinfo, ifce)
// We saw traffic out to vpnIp // We saw traffic out to vpnIp
nc.Out(hostinfo) nc.Out(vpnIp)
nc.In(hostinfo) assert.NotContains(t, nc.pendingDeletion, vpnIp)
assert.False(t, hostinfo.pendingDeletion.Load()) assert.Contains(t, nc.hostMap.Hosts, vpnIp)
assert.Contains(t, nc.hostMap.Hosts, hostinfo.vpnAddrs[0]) // Move ahead 5s. Nothing should happen
assert.Contains(t, nc.hostMap.Indexes, hostinfo.localIndexId) next_tick := now.Add(5 * time.Second)
assert.True(t, hostinfo.out.Load()) nc.HandleMonitorTick(next_tick, p, nb, out)
assert.True(t, hostinfo.in.Load()) nc.HandleDeletionTick(next_tick)
// Move ahead 6s. We haven't heard back
next_tick = now.Add(6 * time.Second)
nc.HandleMonitorTick(next_tick, p, nb, out)
nc.HandleDeletionTick(next_tick)
// This host should now be up for deletion
assert.Contains(t, nc.pendingDeletion, vpnIp)
assert.Contains(t, nc.hostMap.Hosts, vpnIp)
// Move ahead some more
next_tick = now.Add(45 * time.Second)
nc.HandleMonitorTick(next_tick, p, nb, out)
nc.HandleDeletionTick(next_tick)
// The host should be evicted
assert.NotContains(t, nc.pendingDeletion, vpnIp)
assert.NotContains(t, nc.hostMap.Hosts, vpnIp)
// Do a traffic check tick, should not be pending deletion but should not have any in/out packets recorded
nc.doTrafficCheck(hostinfo.localIndexId, p, nb, out, time.Now())
assert.False(t, hostinfo.pendingDeletion.Load())
assert.False(t, hostinfo.out.Load())
assert.False(t, hostinfo.in.Load())
// Do another traffic check tick, this host should be pending deletion now
nc.Out(hostinfo)
assert.True(t, hostinfo.out.Load())
nc.doTrafficCheck(hostinfo.localIndexId, p, nb, out, time.Now())
assert.True(t, hostinfo.pendingDeletion.Load())
assert.False(t, hostinfo.out.Load())
assert.False(t, hostinfo.in.Load())
assert.Contains(t, nc.hostMap.Indexes, hostinfo.localIndexId)
assert.Contains(t, nc.hostMap.Hosts, hostinfo.vpnAddrs[0])
// Do a final traffic check tick, the host should now be removed
nc.doTrafficCheck(hostinfo.localIndexId, p, nb, out, time.Now())
assert.NotContains(t, nc.hostMap.Hosts, hostinfo.vpnAddrs)
assert.NotContains(t, nc.hostMap.Indexes, hostinfo.localIndexId)
} }
func Test_NewConnectionManagerTest2(t *testing.T) { func Test_NewConnectionManagerTest2(t *testing.T) {
l := test.NewLogger() l := util.NewTestLogger()
//_, tuncidr, _ := net.ParseCIDR("1.1.1.1/24") //_, tuncidr, _ := net.ParseCIDR("1.1.1.1/24")
localrange := netip.MustParsePrefix("10.1.1.1/24") _, vpncidr, _ := net.ParseCIDR("172.1.1.1/24")
vpnIp := netip.MustParseAddr("172.1.1.2") _, localrange, _ := net.ParseCIDR("10.1.1.1/24")
preferredRanges := []netip.Prefix{localrange} preferredRanges := []*net.IPNet{localrange}
// Very incomplete mock objects // Very incomplete mock objects
hostMap := newHostMap(l) hostMap := NewHostMap(l, "test", vpncidr, preferredRanges)
hostMap.preferredRanges.Store(&preferredRanges)
cs := &CertState{ cs := &CertState{
initiatingVersion: cert.Version1, rawCertificate: []byte{},
privateKey: []byte{}, privateKey: []byte{},
v1Cert: &dummyCert{version: cert.Version1}, certificate: &cert.NebulaCertificate{},
v1HandshakeBytes: []byte{}, rawCertificateNoKey: []byte{},
} }
lh := newTestLighthouse() lh := NewLightHouse(l, false, &net.IPNet{IP: net.IP{0, 0, 0, 0}, Mask: net.IPMask{0, 0, 0, 0}}, []iputil.VpnIp{}, 1000, 0, &udp.Conn{}, false, 1, false)
ifce := &Interface{ ifce := &Interface{
hostMap: hostMap, hostMap: hostMap,
inside: &test.NoopTun{}, inside: &Tun{},
outside: &udp.NoopConn{}, outside: &udp.Conn{},
certState: cs,
firewall: &Firewall{}, firewall: &Firewall{},
lightHouse: lh, lightHouse: lh,
pki: &PKI{}, handshakeManager: NewHandshakeManager(l, vpncidr, preferredRanges, hostMap, lh, &udp.Conn{}, defaultHandshakeConfig),
handshakeManager: NewHandshakeManager(l, hostMap, lh, &udp.NoopConn{}, defaultHandshakeConfig),
l: l, l: l,
} }
ifce.pki.cs.Store(cs) now := time.Now()
// Create manager // Create manager
conf := config.NewC(l) ctx, cancel := context.WithCancel(context.Background())
punchy := NewPunchyFromConfig(l, conf) defer cancel()
nc := newConnectionManagerFromConfig(l, conf, hostMap, punchy) nc := newConnectionManager(ctx, l, ifce, 5, 10)
nc.intf = ifce
p := []byte("") p := []byte("")
nb := make([]byte, 12, 12) nb := make([]byte, 12, 12)
out := make([]byte, mtu) out := make([]byte, mtu)
nc.HandleMonitorTick(now, p, nb, out)
// Add an ip we have established a connection w/ to hostmap // Add an ip we have established a connection w/ to hostmap
hostinfo := &HostInfo{ hostinfo := nc.hostMap.AddVpnIp(vpnIp)
vpnAddrs: []netip.Addr{vpnIp},
localIndexId: 1099,
remoteIndexId: 9901,
}
hostinfo.ConnectionState = &ConnectionState{ hostinfo.ConnectionState = &ConnectionState{
myCert: &dummyCert{version: cert.Version1}, certState: cs,
H: &noise.HandshakeState{}, H: &noise.HandshakeState{},
} }
nc.hostMap.unlockedAddHostInfo(hostinfo, ifce)
// We saw traffic out to vpnIp // We saw traffic out to vpnIp
nc.Out(hostinfo) nc.Out(vpnIp)
nc.In(hostinfo) assert.NotContains(t, nc.pendingDeletion, vpnIp)
assert.True(t, hostinfo.in.Load()) assert.Contains(t, nc.hostMap.Hosts, vpnIp)
assert.True(t, hostinfo.out.Load()) // Move ahead 5s. Nothing should happen
assert.False(t, hostinfo.pendingDeletion.Load()) next_tick := now.Add(5 * time.Second)
assert.Contains(t, nc.hostMap.Hosts, hostinfo.vpnAddrs[0]) nc.HandleMonitorTick(next_tick, p, nb, out)
assert.Contains(t, nc.hostMap.Indexes, hostinfo.localIndexId) nc.HandleDeletionTick(next_tick)
// Move ahead 6s. We haven't heard back
next_tick = now.Add(6 * time.Second)
nc.HandleMonitorTick(next_tick, p, nb, out)
nc.HandleDeletionTick(next_tick)
// This host should now be up for deletion
assert.Contains(t, nc.pendingDeletion, vpnIp)
assert.Contains(t, nc.hostMap.Hosts, vpnIp)
// We heard back this time
nc.In(vpnIp)
// Move ahead some more
next_tick = now.Add(45 * time.Second)
nc.HandleMonitorTick(next_tick, p, nb, out)
nc.HandleDeletionTick(next_tick)
// The host should be evicted
assert.NotContains(t, nc.pendingDeletion, vpnIp)
assert.Contains(t, nc.hostMap.Hosts, vpnIp)
// Do a traffic check tick, should not be pending deletion but should not have any in/out packets recorded
nc.doTrafficCheck(hostinfo.localIndexId, p, nb, out, time.Now())
assert.False(t, hostinfo.pendingDeletion.Load())
assert.False(t, hostinfo.out.Load())
assert.False(t, hostinfo.in.Load())
// Do another traffic check tick, this host should be pending deletion now
nc.Out(hostinfo)
nc.doTrafficCheck(hostinfo.localIndexId, p, nb, out, time.Now())
assert.True(t, hostinfo.pendingDeletion.Load())
assert.False(t, hostinfo.out.Load())
assert.False(t, hostinfo.in.Load())
assert.Contains(t, nc.hostMap.Indexes, hostinfo.localIndexId)
assert.Contains(t, nc.hostMap.Hosts, hostinfo.vpnAddrs[0])
// We saw traffic, should no longer be pending deletion
nc.In(hostinfo)
nc.doTrafficCheck(hostinfo.localIndexId, p, nb, out, time.Now())
assert.False(t, hostinfo.pendingDeletion.Load())
assert.False(t, hostinfo.out.Load())
assert.False(t, hostinfo.in.Load())
assert.Contains(t, nc.hostMap.Indexes, hostinfo.localIndexId)
assert.Contains(t, nc.hostMap.Hosts, hostinfo.vpnAddrs[0])
}
func Test_NewConnectionManager_DisconnectInactive(t *testing.T) {
l := test.NewLogger()
localrange := netip.MustParsePrefix("10.1.1.1/24")
vpnAddrs := []netip.Addr{netip.MustParseAddr("172.1.1.2")}
preferredRanges := []netip.Prefix{localrange}
// Very incomplete mock objects
hostMap := newHostMap(l)
hostMap.preferredRanges.Store(&preferredRanges)
cs := &CertState{
initiatingVersion: cert.Version1,
privateKey: []byte{},
v1Cert: &dummyCert{version: cert.Version1},
v1HandshakeBytes: []byte{},
}
lh := newTestLighthouse()
ifce := &Interface{
hostMap: hostMap,
inside: &test.NoopTun{},
outside: &udp.NoopConn{},
firewall: &Firewall{},
lightHouse: lh,
pki: &PKI{},
handshakeManager: NewHandshakeManager(l, hostMap, lh, &udp.NoopConn{}, defaultHandshakeConfig),
l: l,
}
ifce.pki.cs.Store(cs)
// Create manager
conf := config.NewC(l)
conf.Settings["tunnels"] = map[string]any{
"drop_inactive": true,
}
punchy := NewPunchyFromConfig(l, conf)
nc := newConnectionManagerFromConfig(l, conf, hostMap, punchy)
assert.True(t, nc.dropInactive.Load())
nc.intf = ifce
// Add an ip we have established a connection w/ to hostmap
hostinfo := &HostInfo{
vpnAddrs: vpnAddrs,
localIndexId: 1099,
remoteIndexId: 9901,
}
hostinfo.ConnectionState = &ConnectionState{
myCert: &dummyCert{version: cert.Version1},
H: &noise.HandshakeState{},
}
nc.hostMap.unlockedAddHostInfo(hostinfo, ifce)
// Do a traffic check tick, in and out should be cleared but should not be pending deletion
nc.Out(hostinfo)
nc.In(hostinfo)
assert.True(t, hostinfo.out.Load())
assert.True(t, hostinfo.in.Load())
now := time.Now()
decision, _, _ := nc.makeTrafficDecision(hostinfo.localIndexId, now)
assert.Equal(t, tryRehandshake, decision)
assert.Equal(t, now, hostinfo.lastUsed)
assert.False(t, hostinfo.pendingDeletion.Load())
assert.False(t, hostinfo.out.Load())
assert.False(t, hostinfo.in.Load())
decision, _, _ = nc.makeTrafficDecision(hostinfo.localIndexId, now.Add(time.Second*5))
assert.Equal(t, doNothing, decision)
assert.Equal(t, now, hostinfo.lastUsed)
assert.False(t, hostinfo.pendingDeletion.Load())
assert.False(t, hostinfo.out.Load())
assert.False(t, hostinfo.in.Load())
// Do another traffic check tick, should still not be pending deletion
decision, _, _ = nc.makeTrafficDecision(hostinfo.localIndexId, now.Add(time.Second*10))
assert.Equal(t, doNothing, decision)
assert.Equal(t, now, hostinfo.lastUsed)
assert.False(t, hostinfo.pendingDeletion.Load())
assert.False(t, hostinfo.out.Load())
assert.False(t, hostinfo.in.Load())
assert.Contains(t, nc.hostMap.Indexes, hostinfo.localIndexId)
assert.Contains(t, nc.hostMap.Hosts, hostinfo.vpnAddrs[0])
// Finally advance beyond the inactivity timeout
decision, _, _ = nc.makeTrafficDecision(hostinfo.localIndexId, now.Add(time.Minute*10))
assert.Equal(t, closeTunnel, decision)
assert.Equal(t, now, hostinfo.lastUsed)
assert.False(t, hostinfo.pendingDeletion.Load())
assert.False(t, hostinfo.out.Load())
assert.False(t, hostinfo.in.Load())
assert.Contains(t, nc.hostMap.Indexes, hostinfo.localIndexId)
assert.Contains(t, nc.hostMap.Hosts, hostinfo.vpnAddrs[0])
} }
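The assertions above exercise three outcomes of makeTrafficDecision (tryRehandshake, doNothing, closeTunnel). As a reading aid, here is a small, hedged summary of that contract; the constant names come from the assertions in the test, but the type and the mapping below are illustrative only, not the project's implementation.

```go
package example

// trafficDecision mirrors the values asserted in the test above.
type trafficDecision int

const (
	doNothing trafficDecision = iota
	tryRehandshake
	closeTunnel
)

// describe is an illustrative mapping of decision -> effect, as observed in
// the assertions: rehandshake when traffic was seen, nothing while the tunnel
// is idle but still inside the inactivity window, close once drop_inactive's
// timeout has elapsed.
func describe(d trafficDecision) string {
	switch d {
	case tryRehandshake:
		return "traffic seen recently: refresh the tunnel"
	case doNothing:
		return "idle but within the inactivity timeout: keep waiting"
	case closeTunnel:
		return "idle past the inactivity timeout: tear the tunnel down"
	default:
		return "unknown"
	}
}
```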
// Check if we can disconnect the peer.
@@ -299,209 +164,92 @@ func Test_NewConnectionManager_DisconnectInactive(t *testing.T) {
// Disconnect only if disconnectInvalid: true is set.
func Test_NewConnectionManagerTest_DisconnectInvalid(t *testing.T) {
now := time.Now()
l := test.NewLogger() l := util.NewTestLogger()
ipNet := net.IPNet{
vpncidr := netip.MustParsePrefix("172.1.1.1/24") IP: net.IPv4(172, 1, 1, 2),
localrange := netip.MustParsePrefix("10.1.1.1/24") Mask: net.IPMask{255, 255, 255, 0},
vpnIp := netip.MustParseAddr("172.1.1.2") }
preferredRanges := []netip.Prefix{localrange} _, vpncidr, _ := net.ParseCIDR("172.1.1.1/24")
hostMap := newHostMap(l) _, localrange, _ := net.ParseCIDR("10.1.1.1/24")
hostMap.preferredRanges.Store(&preferredRanges) preferredRanges := []*net.IPNet{localrange}
hostMap := NewHostMap(l, "test", vpncidr, preferredRanges)
// Generate keys for CA and peer's cert.
pubCA, privCA, _ := ed25519.GenerateKey(rand.Reader)
tbs := &cert.TBSCertificate{ caCert := cert.NebulaCertificate{
Version: 1, Details: cert.NebulaCertificateDetails{
Name: "ca", Name: "ca",
IsCA: true, NotBefore: now,
NotBefore: now, NotAfter: now.Add(1 * time.Hour),
NotAfter: now.Add(1 * time.Hour), IsCA: true,
PublicKey: pubCA, PublicKey: pubCA,
}
caCert, err := tbs.Sign(nil, cert.Curve_CURVE25519, privCA)
require.NoError(t, err)
ncp := cert.NewCAPool()
require.NoError(t, ncp.AddCA(caCert))
pubCrt, _, _ := ed25519.GenerateKey(rand.Reader)
tbs = &cert.TBSCertificate{
Version: 1,
Name: "host",
Networks: []netip.Prefix{vpncidr},
NotBefore: now,
NotAfter: now.Add(60 * time.Second),
PublicKey: pubCrt,
}
peerCert, err := tbs.Sign(caCert, cert.Curve_CURVE25519, privCA)
require.NoError(t, err)
cachedPeerCert, err := ncp.VerifyCertificate(now.Add(time.Second), peerCert)
cs := &CertState{
privateKey: []byte{},
v1Cert: &dummyCert{},
v1HandshakeBytes: []byte{},
}
lh := newTestLighthouse()
ifce := &Interface{
hostMap: hostMap,
inside: &test.NoopTun{},
outside: &udp.NoopConn{},
firewall: &Firewall{},
lightHouse: lh,
handshakeManager: NewHandshakeManager(l, hostMap, lh, &udp.NoopConn{}, defaultHandshakeConfig),
l: l,
pki: &PKI{},
}
ifce.pki.cs.Store(cs)
ifce.pki.caPool.Store(ncp)
ifce.disconnectInvalid.Store(true)
// Create manager
conf := config.NewC(l)
punchy := NewPunchyFromConfig(l, conf)
nc := newConnectionManagerFromConfig(l, conf, hostMap, punchy)
nc.intf = ifce
ifce.connectionManager = nc
hostinfo := &HostInfo{
vpnAddrs: []netip.Addr{vpnIp},
ConnectionState: &ConnectionState{
myCert: &dummyCert{},
peerCert: cachedPeerCert,
H: &noise.HandshakeState{},
}, },
} }
nc.hostMap.unlockedAddHostInfo(hostinfo, ifce) caCert.Sign(privCA)
ncp := &cert.NebulaCAPool{
CAs: cert.NewCAPool().CAs,
}
ncp.CAs["ca"] = &caCert
pubCrt, _, _ := ed25519.GenerateKey(rand.Reader)
peerCert := cert.NebulaCertificate{
Details: cert.NebulaCertificateDetails{
Name: "host",
Ips: []*net.IPNet{&ipNet},
Subnets: []*net.IPNet{},
NotBefore: now,
NotAfter: now.Add(60 * time.Second),
PublicKey: pubCrt,
IsCA: false,
Issuer: "ca",
},
}
peerCert.Sign(privCA)
cs := &CertState{
rawCertificate: []byte{},
privateKey: []byte{},
certificate: &cert.NebulaCertificate{},
rawCertificateNoKey: []byte{},
}
lh := NewLightHouse(l, false, &net.IPNet{IP: net.IP{0, 0, 0, 0}, Mask: net.IPMask{0, 0, 0, 0}}, []iputil.VpnIp{}, 1000, 0, &udp.Conn{}, false, 1, false)
ifce := &Interface{
hostMap: hostMap,
inside: &Tun{},
outside: &udp.Conn{},
certState: cs,
firewall: &Firewall{},
lightHouse: lh,
handshakeManager: NewHandshakeManager(l, vpncidr, preferredRanges, hostMap, lh, &udp.Conn{}, defaultHandshakeConfig),
l: l,
disconnectInvalid: true,
caPool: ncp,
}
// Create manager
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
nc := newConnectionManager(ctx, l, ifce, 5, 10)
ifce.connectionManager = nc
hostinfo := nc.hostMap.AddVpnIp(vpnIp)
hostinfo.ConnectionState = &ConnectionState{
certState: cs,
peerCert: &peerCert,
H: &noise.HandshakeState{},
}
// Move ahead 45s.
// Check if to disconnect with invalid certificate.
// Should be alive.
nextTick := now.Add(45 * time.Second)
invalid := nc.isInvalidCertificate(nextTick, hostinfo) destroyed := nc.handleInvalidCertificate(nextTick, vpnIp, hostinfo)
assert.False(t, invalid) assert.False(t, destroyed)
// Move ahead 61s.
// Check if to disconnect with invalid certificate.
// Should be disconnected.
nextTick = now.Add(61 * time.Second)
invalid = nc.isInvalidCertificate(nextTick, hostinfo) destroyed = nc.handleInvalidCertificate(nextTick, vpnIp, hostinfo)
assert.True(t, invalid) assert.True(t, destroyed)
}
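For clarity on the two ticks above: the peer certificate is signed with NotAfter set to now + 60 seconds, so the check at now + 45s falls inside the validity window and the check at now + 61s falls outside it. A minimal, hedged illustration of that comparison (standalone example code, not the project's implementation):

```go
package example

import "time"

// expiredAt mirrors the comparison the test above relies on: a certificate
// valid until notAfter is treated as invalid strictly after that instant.
func expiredAt(notAfter, at time.Time) bool {
	return at.After(notAfter)
}
```

With notAfter = now.Add(60 * time.Second), expiredAt(notAfter, now.Add(45*time.Second)) is false while expiredAt(notAfter, now.Add(61*time.Second)) is true, matching the two assertions.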
type dummyCert struct {
version cert.Version
curve cert.Curve
groups []string
isCa bool
issuer string
name string
networks []netip.Prefix
notAfter time.Time
notBefore time.Time
publicKey []byte
signature []byte
unsafeNetworks []netip.Prefix
}
func (d *dummyCert) Version() cert.Version {
return d.version
}
func (d *dummyCert) Curve() cert.Curve {
return d.curve
}
func (d *dummyCert) Groups() []string {
return d.groups
}
func (d *dummyCert) IsCA() bool {
return d.isCa
}
func (d *dummyCert) Issuer() string {
return d.issuer
}
func (d *dummyCert) Name() string {
return d.name
}
func (d *dummyCert) Networks() []netip.Prefix {
return d.networks
}
func (d *dummyCert) NotAfter() time.Time {
return d.notAfter
}
func (d *dummyCert) NotBefore() time.Time {
return d.notBefore
}
func (d *dummyCert) PublicKey() []byte {
return d.publicKey
}
func (d *dummyCert) MarshalPublicKeyPEM() []byte {
return cert.MarshalPublicKeyToPEM(d.curve, d.publicKey)
}
func (d *dummyCert) Signature() []byte {
return d.signature
}
func (d *dummyCert) UnsafeNetworks() []netip.Prefix {
return d.unsafeNetworks
}
func (d *dummyCert) MarshalForHandshakes() ([]byte, error) {
return nil, nil
}
func (d *dummyCert) Sign(curve cert.Curve, key []byte) error {
return nil
}
func (d *dummyCert) CheckSignature(key []byte) bool {
return true
}
func (d *dummyCert) Expired(t time.Time) bool {
return false
}
func (d *dummyCert) CheckRootConstraints(signer cert.Certificate) error {
return nil
}
func (d *dummyCert) VerifyPrivateKey(curve cert.Curve, key []byte) error {
return nil
}
func (d *dummyCert) String() string {
return ""
}
func (d *dummyCert) Marshal() ([]byte, error) {
return nil, nil
}
func (d *dummyCert) MarshalPEM() ([]byte, error) {
return nil, nil
}
func (d *dummyCert) Fingerprint() (string, error) {
return "", nil
}
func (d *dummyCert) MarshalJSON() ([]byte, error) {
return nil, nil
}
func (d *dummyCert) Copy() cert.Certificate {
return d
} }


@@ -3,72 +3,55 @@ package nebula
import ( import (
"crypto/rand" "crypto/rand"
"encoding/json" "encoding/json"
"fmt"
"sync" "sync"
"sync/atomic" "sync/atomic"
"github.com/flynn/noise" "github.com/flynn/noise"
"github.com/sirupsen/logrus" "github.com/sirupsen/logrus"
"github.com/slackhq/nebula/cert" "github.com/slackhq/nebula/cert"
"github.com/slackhq/nebula/noiseutil"
) )
// TODO: In a 5Gbps test, 1024 is not sufficient. With a 1400 MTU this is about 1.4Gbps of window, assuming full packets.
// 4092 should be sufficient for 5Gbps
const ReplayWindow = 8192 const ReplayWindow = 1024
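A quick, hedged sanity check on the sizing comment above: assuming full 1400-byte packets and ignoring header overhead, the snippet below converts a line rate into packets per second and shows how much time each replay window covers. The figures are illustrative only, not a statement about the project's benchmarks.

```go
package main

import "fmt"

func main() {
	const packetBytes = 1400.0 // assumed full-size packets, per the comment above
	const lineRate = 5e9       // the 5 Gbps figure mentioned above, in bits/s

	pps := lineRate / (packetBytes * 8) // ≈ 446k packets per second

	for _, window := range []float64{1024, 4092, 8192} {
		// How much time each replay window covers before old counters fall out of it.
		fmt.Printf("window %4.0f packets ≈ %.1f ms at %.0f pps\n", window, window/pps*1000, pps)
	}
}
```

At 5 Gbps this works out to roughly 2.3 ms of cover for 1024 packets, 9.2 ms for 4092, and 18.3 ms for 8192.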
type ConnectionState struct { type ConnectionState struct {
eKey *NebulaCipherState eKey *NebulaCipherState
dKey *NebulaCipherState dKey *NebulaCipherState
H *noise.HandshakeState H *noise.HandshakeState
myCert cert.Certificate certState *CertState
peerCert *cert.CachedCertificate peerCert *cert.NebulaCertificate
initiator bool initiator bool
messageCounter atomic.Uint64 atomicMessageCounter uint64
window *Bits window *Bits
writeLock sync.Mutex queueLock sync.Mutex
writeLock sync.Mutex
ready bool
} }
func NewConnectionState(l *logrus.Logger, cs *CertState, crt cert.Certificate, initiator bool, pattern noise.HandshakePattern) (*ConnectionState, error) { func (f *Interface) newConnectionState(l *logrus.Logger, initiator bool, p []byte) (*ConnectionState, error) {
var dhFunc noise.DHFunc cs := noise.NewCipherSuite(noise.DH25519, noise.CipherAESGCM, noise.HashSHA256)
switch crt.Curve() { if f.cipher == "chachapoly" {
case cert.Curve_CURVE25519: cs = noise.NewCipherSuite(noise.DH25519, noise.CipherChaChaPoly, noise.HashSHA256)
dhFunc = noise.DH25519
case cert.Curve_P256:
if cs.pkcs11Backed {
dhFunc = noiseutil.DHP256PKCS11
} else {
dhFunc = noiseutil.DHP256
}
default:
return nil, fmt.Errorf("invalid curve: %s", crt.Curve())
} }
var ncs noise.CipherSuite curCertState := f.certState
if cs.cipher == "chachapoly" { static := noise.DHKey{Private: curCertState.privateKey, Public: curCertState.publicKey}
ncs = noise.NewCipherSuite(dhFunc, noise.CipherChaChaPoly, noise.HashSHA256)
} else {
ncs = noise.NewCipherSuite(dhFunc, noiseutil.CipherAESGCM, noise.HashSHA256)
}
static := noise.DHKey{Private: cs.privateKey, Public: crt.PublicKey()}
b := NewBits(ReplayWindow) b := NewBits(ReplayWindow)
// Clear out bit 0, we never transmit it and we don't want it showing as packet loss
b.Update(l, 0) b.Update(l, 0)
hs, err := noise.NewHandshakeState(noise.Config{ hs, err := noise.NewHandshakeState(noise.Config{
CipherSuite: ncs, CipherSuite: cs,
Random: rand.Reader, Random: rand.Reader,
Pattern: pattern, Pattern: noise.HandshakeIX,
Initiator: initiator, Initiator: initiator,
StaticKeypair: static, StaticKeypair: static,
//NOTE: These should come from CertState (pki.go) when we finally implement it PresharedKey: p,
PresharedKey: []byte{},
PresharedKeyPlacement: 0, PresharedKeyPlacement: 0,
}) })
if err != nil { if err != nil {
return nil, fmt.Errorf("NewConnectionState: %s", err) return nil, err
} }
// The queue and ready params prevent a counter race that would happen when
@@ -77,10 +60,9 @@ func NewConnectionState(l *logrus.Logger, cs *CertState, crt cert.Certificate, i
H: hs, H: hs,
initiator: initiator, initiator: initiator,
window: b, window: b,
myCert: crt, ready: false,
certState: curCertState,
} }
// always start the counter from 2, as packet 1 and packet 2 are handshake packets.
ci.messageCounter.Add(2)
return ci, nil return ci, nil
} }
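One difference worth calling out above: the branch side threads a pre-shared key (p) into the Noise config, while the other side passes an empty slice and leaves a note that the PSK should eventually come from CertState. A hedged sketch of just that wiring, using only the flynn/noise calls already visible above (buildHandshakeState is a hypothetical helper, not the project's API):

```go
package example

import (
	"crypto/rand"

	"github.com/flynn/noise"
)

// buildHandshakeState isolates the PSK-related fields of the Noise config used
// above. Passing an empty psk disables the pre-shared key; when one is used,
// the Noise spec expects it to be 32 bytes, mixed in at placement 0 here.
func buildHandshakeState(static noise.DHKey, psk []byte, initiator bool) (*noise.HandshakeState, error) {
	cs := noise.NewCipherSuite(noise.DH25519, noise.CipherChaChaPoly, noise.HashSHA256)
	return noise.NewHandshakeState(noise.Config{
		CipherSuite:           cs,
		Random:                rand.Reader,
		Pattern:               noise.HandshakeIX,
		Initiator:             initiator,
		StaticKeypair:         static,
		PresharedKey:          psk,
		PresharedKeyPlacement: 0,
	})
}
```

Nothing here goes beyond what NewConnectionState already shows; it simply isolates the fields the PSK change touches.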
@@ -89,10 +71,7 @@ func (cs *ConnectionState) MarshalJSON() ([]byte, error) {
return json.Marshal(m{ return json.Marshal(m{
"certificate": cs.peerCert, "certificate": cs.peerCert,
"initiator": cs.initiator, "initiator": cs.initiator,
"message_counter": cs.messageCounter.Load(), "message_counter": atomic.LoadUint64(&cs.atomicMessageCounter),
"ready": cs.ready,
}) })
} }
func (cs *ConnectionState) Curve() cert.Curve {
return cs.myCert.Curve()
}


@@ -2,83 +2,46 @@ package nebula
import ( import (
"context" "context"
"errors" "net"
"net/netip"
"os" "os"
"os/signal" "os/signal"
"sync" "sync/atomic"
"syscall" "syscall"
"github.com/sirupsen/logrus" "github.com/sirupsen/logrus"
"github.com/slackhq/nebula/cert" "github.com/slackhq/nebula/cert"
"github.com/slackhq/nebula/header" "github.com/slackhq/nebula/header"
"github.com/slackhq/nebula/overlay" "github.com/slackhq/nebula/iputil"
"github.com/slackhq/nebula/udp"
) )
type RunState int
const (
Stopped RunState = 0 // The control has yet to be started
Started RunState = 1 // The control has been started
Stopping RunState = 2 // The control is stopping
)
var ErrAlreadyStarted = errors.New("nebula is already started")
// Every interaction here needs to take extra care to copy memory and not return or use arguments "as is" when touching
// core. This means copying IP objects, slices, de-referencing pointers and taking the actual value, etc
type controlEach func(h *HostInfo)
type controlHostLister interface {
QueryVpnAddr(vpnAddr netip.Addr) *HostInfo
ForEachIndex(each controlEach)
ForEachVpnAddr(each controlEach)
GetPreferredRanges() []netip.Prefix
}
type Control struct { type Control struct {
stateLock sync.Mutex f *Interface
state RunState l *logrus.Logger
cancel context.CancelFunc
f *Interface sshStart func()
l *logrus.Logger statsStart func()
ctx context.Context dnsStart func()
cancel context.CancelFunc
sshStart func()
statsStart func()
dnsStart func()
lighthouseStart func()
connectionManagerStart func(context.Context)
} }
type ControlHostInfo struct { type ControlHostInfo struct {
VpnAddrs []netip.Addr `json:"vpnAddrs"` VpnIp net.IP `json:"vpnIp"`
LocalIndex uint32 `json:"localIndex"` LocalIndex uint32 `json:"localIndex"`
RemoteIndex uint32 `json:"remoteIndex"` RemoteIndex uint32 `json:"remoteIndex"`
RemoteAddrs []netip.AddrPort `json:"remoteAddrs"` RemoteAddrs []*udp.Addr `json:"remoteAddrs"`
Cert cert.Certificate `json:"cert"` CachedPackets int `json:"cachedPackets"`
MessageCounter uint64 `json:"messageCounter"` Cert *cert.NebulaCertificate `json:"cert"`
CurrentRemote netip.AddrPort `json:"currentRemote"` MessageCounter uint64 `json:"messageCounter"`
CurrentRelaysToMe []netip.Addr `json:"currentRelaysToMe"` CurrentRemote *udp.Addr `json:"currentRemote"`
CurrentRelaysThroughMe []netip.Addr `json:"currentRelaysThroughMe"`
} }
// Start actually runs nebula, this is a nonblocking call. To block use Control.ShutdownBlock()
// The returned function can be used to wait for nebula to fully stop.
func (c *Control) Start() {
func (c *Control) Start() (func(), error) {
c.stateLock.Lock()
if c.state != Stopped {
c.stateLock.Unlock()
return nil, ErrAlreadyStarted
}
// Activate the interface
err := c.f.activate() c.f.activate()
if err != nil {
c.stateLock.Unlock()
return nil, err
}
// Call all the delayed funcs that waited patiently for the interface to be created.
if c.sshStart != nil { if c.sshStart != nil {
@@ -90,55 +53,22 @@ func (c *Control) Start() (func(), error) {
if c.dnsStart != nil { if c.dnsStart != nil {
go c.dnsStart() go c.dnsStart()
} }
if c.connectionManagerStart != nil {
go c.connectionManagerStart(c.ctx)
}
if c.lighthouseStart != nil {
c.lighthouseStart()
}
// Start reading packets.
c.state = Started c.f.run()
c.stateLock.Unlock()
return c.f.run(c.ctx)
} }
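A hedged sketch of the lifecycle the two Start signatures above imply when nebula is embedded as a library. Obtaining the *Control is assumed to happen elsewhere (for example via the library's setup entry point, which this diff does not show); the sketch uses the newer signature that returns a wait function and an error.

```go
package example

import (
	"log"

	"github.com/slackhq/nebula"
)

// runNebula is illustrative only: it assumes ctrl was built elsewhere and
// simply drives the Control lifecycle methods shown above.
func runNebula(ctrl *nebula.Control) {
	wait, err := ctrl.Start()
	if err != nil {
		log.Fatalf("failed to start nebula: %v", err)
	}

	// Blocks until SIGTERM/SIGINT, at which point Stop is invoked for us.
	ctrl.ShutdownBlock()

	// Wait for the packet-reading goroutines to fully wind down.
	wait()
}
```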
func (c *Control) State() RunState { // Stop signals nebula to shutdown, returns after the shutdown is complete
c.stateLock.Lock()
defer c.stateLock.Unlock()
return c.state
}
func (c *Control) Context() context.Context {
return c.ctx
}
// Stop is a non-blocking call that signals nebula to close all tunnels and shut down
func (c *Control) Stop() { func (c *Control) Stop() {
c.stateLock.Lock() //TODO: stop tun and udp routines, the lock on hostMap effectively does that though
if c.state != Started {
c.stateLock.Unlock()
// We are stopping or stopped already
return
}
c.state = Stopping
c.stateLock.Unlock()
// Stop the handshakeManager (and other services), to prevent new tunnels from
// being created while we're shutting them all down.
c.cancel()
c.CloseAllTunnels(false) c.CloseAllTunnels(false)
if err := c.f.Close(); err != nil { c.cancel()
c.l.WithError(err).Error("Close interface failed") c.l.Info("Goodbye")
}
c.state = Stopped
} }
// ShutdownBlock will listen for and block on term and interrupt signals, calling Control.Stop() once signalled
func (c *Control) ShutdownBlock() {
sigChan := make(chan os.Signal, 1) sigChan := make(chan os.Signal)
signal.Notify(sigChan, syscall.SIGTERM) signal.Notify(sigChan, syscall.SIGTERM)
signal.Notify(sigChan, syscall.SIGINT) signal.Notify(sigChan, syscall.SIGINT)
@@ -153,105 +83,55 @@ func (c *Control) RebindUDPServer() {
_ = c.f.outside.Rebind() _ = c.f.outside.Rebind()
// Trigger a lighthouse update, useful for mobile clients that should have an update interval of 0
c.f.lightHouse.SendUpdate() c.f.lightHouse.SendUpdate(c.f)
// Let the main interface know that we rebound so that underlying tunnels know to trigger punches from their remotes
c.f.rebindCount++ c.f.rebindCount++
} }
// ListHostmapHosts returns details about the actual or pending (handshaking) hostmap by vpn ip // ListHostmap returns details about the actual or pending (handshaking) hostmap
func (c *Control) ListHostmapHosts(pendingMap bool) []ControlHostInfo { func (c *Control) ListHostmap(pendingMap bool) []ControlHostInfo {
if pendingMap { if pendingMap {
return listHostMapHosts(c.f.handshakeManager) return listHostMap(c.f.handshakeManager.pendingHostMap)
} else { } else {
return listHostMapHosts(c.f.hostMap) return listHostMap(c.f.hostMap)
} }
} }
// ListHostmapIndexes returns details about the actual or pending (handshaking) hostmap by local index id // GetHostInfoByVpnIp returns a single tunnels hostInfo, or nil if not found
func (c *Control) ListHostmapIndexes(pendingMap bool) []ControlHostInfo { func (c *Control) GetHostInfoByVpnIp(vpnIp iputil.VpnIp, pending bool) *ControlHostInfo {
if pendingMap { var hm *HostMap
return listHostMapIndexes(c.f.handshakeManager)
} else {
return listHostMapIndexes(c.f.hostMap)
}
}
// GetCertByVpnIp returns the authenticated certificate of the given vpn IP, or nil if not found
func (c *Control) GetCertByVpnIp(vpnIp netip.Addr) cert.Certificate {
if c.f.myVpnAddrsTable.Contains(vpnIp) {
// Only returning the default certificate since its impossible
// for any other host but ourselves to have more than 1
return c.f.pki.getCertState().GetDefaultCertificate().Copy()
}
hi := c.f.hostMap.QueryVpnAddr(vpnIp)
if hi == nil {
return nil
}
return hi.GetCert().Certificate.Copy()
}
// CreateTunnel creates a new tunnel to the given vpn ip.
func (c *Control) CreateTunnel(vpnIp netip.Addr) {
c.f.handshakeManager.StartHandshake(vpnIp, nil)
}
// PrintTunnel creates a new tunnel to the given vpn ip.
func (c *Control) PrintTunnel(vpnIp netip.Addr) *ControlHostInfo {
hi := c.f.hostMap.QueryVpnAddr(vpnIp)
if hi == nil {
return nil
}
chi := copyHostInfo(hi, c.f.hostMap.GetPreferredRanges())
return &chi
}
// QueryLighthouse queries the lighthouse.
func (c *Control) QueryLighthouse(vpnIp netip.Addr) *CacheMap {
hi := c.f.lightHouse.Query(vpnIp)
if hi == nil {
return nil
}
return hi.CopyCache()
}
// GetHostInfoByVpnAddr returns a single tunnels hostInfo, or nil if not found
// Caller should take care to Unmap() any 4in6 addresses prior to calling.
func (c *Control) GetHostInfoByVpnAddr(vpnAddr netip.Addr, pending bool) *ControlHostInfo {
var hl controlHostLister
if pending { if pending {
hl = c.f.handshakeManager hm = c.f.handshakeManager.pendingHostMap
} else { } else {
hl = c.f.hostMap hm = c.f.hostMap
} }
h := hl.QueryVpnAddr(vpnAddr) h, err := hm.QueryVpnIp(vpnIp)
if h == nil { if err != nil {
return nil return nil
} }
ch := copyHostInfo(h, c.f.hostMap.GetPreferredRanges()) ch := copyHostInfo(h, c.f.hostMap.preferredRanges)
return &ch return &ch
} }
// SetRemoteForTunnel forces a tunnel to use a specific remote // SetRemoteForTunnel forces a tunnel to use a specific remote
// Caller should take care to Unmap() any 4in6 addresses prior to calling. func (c *Control) SetRemoteForTunnel(vpnIp iputil.VpnIp, addr udp.Addr) *ControlHostInfo {
func (c *Control) SetRemoteForTunnel(vpnIp netip.Addr, addr netip.AddrPort) *ControlHostInfo { hostInfo, err := c.f.hostMap.QueryVpnIp(vpnIp)
hostInfo := c.f.hostMap.QueryVpnAddr(vpnIp) if err != nil {
if hostInfo == nil {
return nil return nil
} }
hostInfo.SetRemote(addr) hostInfo.SetRemote(addr.Copy())
ch := copyHostInfo(hostInfo, c.f.hostMap.GetPreferredRanges()) ch := copyHostInfo(hostInfo, c.f.hostMap.preferredRanges)
return &ch return &ch
} }
// CloseTunnel closes a fully established tunnel. If localOnly is false it will notify the remote end as well.
// Caller should take care to Unmap() any 4in6 addresses prior to calling. func (c *Control) CloseTunnel(vpnIp iputil.VpnIp, localOnly bool) bool {
func (c *Control) CloseTunnel(vpnIp netip.Addr, localOnly bool) bool { hostInfo, err := c.f.hostMap.QueryVpnIp(vpnIp)
hostInfo := c.f.hostMap.QueryVpnAddr(vpnIp) if err != nil {
if hostInfo == nil {
return false return false
} }
@@ -261,103 +141,75 @@ func (c *Control) CloseTunnel(vpnIp netip.Addr, localOnly bool) bool {
0, 0,
hostInfo.ConnectionState, hostInfo.ConnectionState,
hostInfo, hostInfo,
hostInfo.remote,
[]byte{}, []byte{},
make([]byte, 12, 12), make([]byte, 12, 12),
make([]byte, mtu), make([]byte, mtu),
) )
} }
c.f.closeTunnel(hostInfo) c.f.closeTunnel(hostInfo, false)
return true return true
} }
// CloseAllTunnels is just like CloseTunnel except it goes through and shuts them all down, optionally you can avoid shutting down lighthouse tunnels
// the int returned is a count of tunnels closed
func (c *Control) CloseAllTunnels(excludeLighthouses bool) (closed int) {
shutdown := func(h *HostInfo) { //TODO: this is probably better as a function in ConnectionManager or HostMap directly
if excludeLighthouses && c.f.lightHouse.IsAnyLighthouseAddr(h.vpnAddrs) { c.f.hostMap.Lock()
return for _, h := range c.f.hostMap.Hosts {
if excludeLighthouses {
if _, ok := c.f.lightHouse.lighthouses[h.vpnIp]; ok {
continue
}
} }
c.f.send(header.CloseTunnel, 0, h.ConnectionState, h, []byte{}, make([]byte, 12, 12), make([]byte, mtu))
c.f.closeTunnel(h)
c.l.WithField("vpnAddrs", h.vpnAddrs).WithField("udpAddr", h.remote). if h.ConnectionState.ready {
Debug("Sending close tunnel message") c.f.send(header.CloseTunnel, 0, h.ConnectionState, h, h.remote, []byte{}, make([]byte, 12, 12), make([]byte, mtu))
closed++ c.f.closeTunnel(h, true)
}
// Learn which hosts are being used as relays, so we can shut them down last. c.l.WithField("vpnIp", h.vpnIp).WithField("udpAddr", h.remote).
relayingHosts := map[netip.Addr]*HostInfo{} Debug("Sending close tunnel message")
// Grab the hostMap lock to access the Relays map closed++
c.f.hostMap.Lock()
for _, relayingHost := range c.f.hostMap.Relays {
relayingHosts[relayingHost.vpnAddrs[0]] = relayingHost
}
c.f.hostMap.Unlock()
hostInfos := []*HostInfo{}
// Grab the hostMap lock to access the Hosts map
c.f.hostMap.Lock()
for _, relayHost := range c.f.hostMap.Indexes {
if _, ok := relayingHosts[relayHost.vpnAddrs[0]]; !ok {
hostInfos = append(hostInfos, relayHost)
} }
} }
c.f.hostMap.Unlock() c.f.hostMap.Unlock()
for _, h := range hostInfos {
shutdown(h)
}
for _, h := range relayingHosts {
shutdown(h)
}
return return
} }
func (c *Control) Device() overlay.Device { func copyHostInfo(h *HostInfo, preferredRanges []*net.IPNet) ControlHostInfo {
return c.f.inside
}
func copyHostInfo(h *HostInfo, preferredRanges []netip.Prefix) ControlHostInfo {
chi := ControlHostInfo{ chi := ControlHostInfo{
VpnAddrs: make([]netip.Addr, len(h.vpnAddrs)), VpnIp: h.vpnIp.ToIP(),
LocalIndex: h.localIndexId, LocalIndex: h.localIndexId,
RemoteIndex: h.remoteIndexId, RemoteIndex: h.remoteIndexId,
RemoteAddrs: h.remotes.CopyAddrs(preferredRanges), RemoteAddrs: h.remotes.CopyAddrs(preferredRanges),
CurrentRelaysToMe: h.relayState.CopyRelayIps(), CachedPackets: len(h.packetStore),
CurrentRelaysThroughMe: h.relayState.CopyRelayForIps(),
CurrentRemote: h.remote,
}
for i, a := range h.vpnAddrs {
chi.VpnAddrs[i] = a
} }
if h.ConnectionState != nil { if h.ConnectionState != nil {
chi.MessageCounter = h.ConnectionState.messageCounter.Load() chi.MessageCounter = atomic.LoadUint64(&h.ConnectionState.atomicMessageCounter)
} }
if c := h.GetCert(); c != nil { if c := h.GetCert(); c != nil {
chi.Cert = c.Certificate.Copy() chi.Cert = c.Copy()
}
if h.remote != nil {
chi.CurrentRemote = h.remote.Copy()
} }
return chi return chi
} }
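The file-level comment above stresses copying memory rather than exposing core objects, and copyHostInfo is the embodiment of that rule. A tiny, hedged illustration of the difference (standalone example code, not the project's API):

```go
package example

import "net/netip"

type host struct{ vpnAddrs []netip.Addr }

// leakyAddrs hands back the internal slice, so a caller could mutate core state.
func (h *host) leakyAddrs() []netip.Addr { return h.vpnAddrs }

// copiedAddrs returns a fresh slice, the same discipline copyHostInfo follows above.
func (h *host) copiedAddrs() []netip.Addr {
	out := make([]netip.Addr, len(h.vpnAddrs))
	copy(out, h.vpnAddrs)
	return out
}
```

Returning the fresh slice means a caller can sort or truncate the result without corrupting the host map's internal state.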
func listHostMapHosts(hl controlHostLister) []ControlHostInfo { func listHostMap(hm *HostMap) []ControlHostInfo {
hosts := make([]ControlHostInfo, 0) hm.RLock()
pr := hl.GetPreferredRanges() hosts := make([]ControlHostInfo, len(hm.Hosts))
hl.ForEachVpnAddr(func(hostinfo *HostInfo) { i := 0
hosts = append(hosts, copyHostInfo(hostinfo, pr)) for _, v := range hm.Hosts {
}) hosts[i] = copyHostInfo(v, hm.preferredRanges)
return hosts i++
} }
hm.RUnlock()
func listHostMapIndexes(hl controlHostLister) []ControlHostInfo {
hosts := make([]ControlHostInfo, 0)
pr := hl.GetPreferredRanges()
hl.ForEachIndex(func(hostinfo *HostInfo) {
hosts = append(hosts, copyHostInfo(hostinfo, pr))
})
return hosts return hosts
} }


@@ -2,67 +2,66 @@ package nebula
import ( import (
"net" "net"
"net/netip"
"reflect" "reflect"
"testing" "testing"
"time"
"github.com/sirupsen/logrus" "github.com/sirupsen/logrus"
"github.com/slackhq/nebula/cert" "github.com/slackhq/nebula/cert"
"github.com/slackhq/nebula/test" "github.com/slackhq/nebula/iputil"
"github.com/slackhq/nebula/udp"
"github.com/slackhq/nebula/util"
"github.com/stretchr/testify/assert" "github.com/stretchr/testify/assert"
) )
func TestControl_GetHostInfoByVpnIp(t *testing.T) { func TestControl_GetHostInfoByVpnIp(t *testing.T) {
//TODO: CERT-V2 with multiple certificate versions we have a problem with this test l := util.NewTestLogger()
// Some certs versions have different characteristics and each version implements their own Copy() func
// which means this is not a good place to test for exposing memory
l := test.NewLogger()
// Special care must be taken to re-use all objects provided to the hostmap and certificate in the expectedInfo object
// To properly ensure we are not exposing core memory to the caller
hm := newHostMap(l) hm := NewHostMap(l, "test", &net.IPNet{}, make([]*net.IPNet, 0))
hm.preferredRanges.Store(&[]netip.Prefix{}) remote1 := udp.NewAddr(net.ParseIP("0.0.0.100"), 4444)
remote2 := udp.NewAddr(net.ParseIP("1:2:3:4:5:6:7:8"), 4444)
remote1 := netip.MustParseAddrPort("0.0.0.100:4444")
remote2 := netip.MustParseAddrPort("[1:2:3:4:5:6:7:8]:4444")
ipNet := net.IPNet{ ipNet := net.IPNet{
IP: remote1.Addr().AsSlice(), IP: net.IPv4(1, 2, 3, 4),
Mask: net.IPMask{255, 255, 255, 0}, Mask: net.IPMask{255, 255, 255, 0},
} }
ipNet2 := net.IPNet{ ipNet2 := net.IPNet{
IP: remote2.Addr().AsSlice(), IP: net.ParseIP("1:2:3:4:5:6:7:8"),
Mask: net.IPMask{255, 255, 255, 0}, Mask: net.IPMask{255, 255, 255, 0},
} }
remotes := NewRemoteList([]netip.Addr{netip.IPv4Unspecified()}, nil) crt := &cert.NebulaCertificate{
remotes.unlockedPrependV4(netip.IPv4Unspecified(), netAddrToProtoV4AddrPort(remote1.Addr(), remote1.Port())) Details: cert.NebulaCertificateDetails{
remotes.unlockedPrependV6(netip.IPv4Unspecified(), netAddrToProtoV6AddrPort(remote2.Addr(), remote2.Port())) Name: "test",
Ips: []*net.IPNet{&ipNet},
Subnets: []*net.IPNet{},
Groups: []string{"default-group"},
NotBefore: time.Unix(1, 0),
NotAfter: time.Unix(2, 0),
PublicKey: []byte{5, 6, 7, 8},
IsCA: false,
Issuer: "the-issuer",
InvertedGroups: map[string]struct{}{"default-group": {}},
},
Signature: []byte{1, 2, 1, 2, 1, 3},
}
vpnIp, ok := netip.AddrFromSlice(ipNet.IP) remotes := NewRemoteList()
assert.True(t, ok) remotes.unlockedPrependV4(0, NewIp4AndPort(remote1.IP, uint32(remote1.Port)))
remotes.unlockedPrependV6(0, NewIp6AndPort(remote2.IP, uint32(remote2.Port)))
crt := &dummyCert{} hm.Add(iputil.Ip2VpnIp(ipNet.IP), &HostInfo{
hm.unlockedAddHostInfo(&HostInfo{
remote: remote1, remote: remote1,
remotes: remotes, remotes: remotes,
ConnectionState: &ConnectionState{ ConnectionState: &ConnectionState{
peerCert: &cert.CachedCertificate{Certificate: crt}, peerCert: crt,
}, },
remoteIndexId: 200, remoteIndexId: 200,
localIndexId: 201, localIndexId: 201,
vpnAddrs: []netip.Addr{vpnIp}, vpnIp: iputil.Ip2VpnIp(ipNet.IP),
relayState: RelayState{ })
relays: nil,
relayForByAddr: map[netip.Addr]*Relay{},
relayForByIdx: map[uint32]*Relay{},
},
}, &Interface{})
vpnIp2, ok := netip.AddrFromSlice(ipNet2.IP) hm.Add(iputil.Ip2VpnIp(ipNet2.IP), &HostInfo{
assert.True(t, ok)
hm.unlockedAddHostInfo(&HostInfo{
remote: remote1, remote: remote1,
remotes: remotes, remotes: remotes,
ConnectionState: &ConnectionState{ ConnectionState: &ConnectionState{
@@ -70,13 +69,8 @@ func TestControl_GetHostInfoByVpnIp(t *testing.T) {
}, },
remoteIndexId: 200, remoteIndexId: 200,
localIndexId: 201, localIndexId: 201,
vpnAddrs: []netip.Addr{vpnIp2}, vpnIp: iputil.Ip2VpnIp(ipNet2.IP),
relayState: RelayState{ })
relays: nil,
relayForByAddr: map[netip.Addr]*Relay{},
relayForByIdx: map[uint32]*Relay{},
},
}, &Interface{})
c := Control{ c := Control{
f: &Interface{ f: &Interface{
@@ -85,32 +79,30 @@ func TestControl_GetHostInfoByVpnIp(t *testing.T) {
l: logrus.New(), l: logrus.New(),
} }
thi := c.GetHostInfoByVpnAddr(vpnIp, false) thi := c.GetHostInfoByVpnIp(iputil.Ip2VpnIp(ipNet.IP), false)
expectedInfo := ControlHostInfo{ expectedInfo := ControlHostInfo{
VpnAddrs: []netip.Addr{vpnIp}, VpnIp: net.IPv4(1, 2, 3, 4).To4(),
LocalIndex: 201, LocalIndex: 201,
RemoteIndex: 200, RemoteIndex: 200,
RemoteAddrs: []netip.AddrPort{remote2, remote1}, RemoteAddrs: []*udp.Addr{remote2, remote1},
Cert: crt.Copy(), CachedPackets: 0,
MessageCounter: 0, Cert: crt.Copy(),
CurrentRemote: remote1, MessageCounter: 0,
CurrentRelaysToMe: []netip.Addr{}, CurrentRemote: udp.NewAddr(net.ParseIP("0.0.0.100"), 4444),
CurrentRelaysThroughMe: []netip.Addr{},
} }
// Make sure we don't have any unexpected fields
assertFields(t, []string{"VpnAddrs", "LocalIndex", "RemoteIndex", "RemoteAddrs", "Cert", "MessageCounter", "CurrentRemote", "CurrentRelaysToMe", "CurrentRelaysThroughMe"}, thi) assertFields(t, []string{"VpnIp", "LocalIndex", "RemoteIndex", "RemoteAddrs", "CachedPackets", "Cert", "MessageCounter", "CurrentRemote"}, thi)
assert.Equal(t, &expectedInfo, thi) util.AssertDeepCopyEqual(t, &expectedInfo, thi)
test.AssertDeepCopyEqual(t, &expectedInfo, thi)
// Make sure we don't panic if the host info doesn't have a cert yet
assert.NotPanics(t, func() { assert.NotPanics(t, func() {
thi = c.GetHostInfoByVpnAddr(vpnIp2, false) thi = c.GetHostInfoByVpnIp(iputil.Ip2VpnIp(ipNet2.IP), false)
}) })
} }
func assertFields(t *testing.T, expected []string, actualStruct any) { func assertFields(t *testing.T, expected []string, actualStruct interface{}) {
val := reflect.ValueOf(actualStruct).Elem() val := reflect.ValueOf(actualStruct).Elem()
fields := make([]string, val.NumField()) fields := make([]string, val.NumField())
for i := 0; i < val.NumField(); i++ { for i := 0; i < val.NumField(); i++ {


@@ -4,21 +4,21 @@
package nebula package nebula
import ( import (
"net/netip" "net"
"github.com/google/gopacket" "github.com/google/gopacket"
"github.com/google/gopacket/layers" "github.com/google/gopacket/layers"
"github.com/slackhq/nebula/header" "github.com/slackhq/nebula/header"
"github.com/slackhq/nebula/overlay" "github.com/slackhq/nebula/iputil"
"github.com/slackhq/nebula/udp" "github.com/slackhq/nebula/udp"
) )
// WaitForType will pipe all messages from this control device into the pipeTo control device
// returning after a message matching the criteria has been piped
func (c *Control) WaitForType(msgType header.MessageType, subType header.MessageSubType, pipeTo *Control) { func (c *Control) WaitForType(msgType header.MessageType, subType header.MessageSubType, pipeTo *Control) {
h := &header.H{} h := &header.H{}
for { for {
p := c.f.outside.(*udp.TesterConn).Get(true) p := c.f.outside.Get(true)
if err := h.Parse(p.Data); err != nil { if err := h.Parse(p.Data); err != nil {
panic(err) panic(err)
} }
@@ -34,7 +34,7 @@ func (c *Control) WaitForType(msgType header.MessageType, subType header.Message
func (c *Control) WaitForTypeByIndex(toIndex uint32, msgType header.MessageType, subType header.MessageSubType, pipeTo *Control) { func (c *Control) WaitForTypeByIndex(toIndex uint32, msgType header.MessageType, subType header.MessageSubType, pipeTo *Control) {
h := &header.H{} h := &header.H{}
for { for {
p := c.f.outside.(*udp.TesterConn).Get(true) p := c.f.outside.Get(true)
if err := h.Parse(p.Data); err != nil { if err := h.Parse(p.Data); err != nil {
panic(err) panic(err)
} }
@@ -47,92 +47,59 @@ func (c *Control) WaitForTypeByIndex(toIndex uint32, msgType header.MessageType,
// InjectLightHouseAddr will push toAddr into the local lighthouse cache for the vpnIp
// This is necessary if you did not configure static hosts or are not running a lighthouse
func (c *Control) InjectLightHouseAddr(vpnIp netip.Addr, toAddr netip.AddrPort) { func (c *Control) InjectLightHouseAddr(vpnIp net.IP, toAddr *net.UDPAddr) {
c.f.lightHouse.Lock() c.f.lightHouse.Lock()
remoteList := c.f.lightHouse.unlockedGetRemoteList([]netip.Addr{vpnIp}) remoteList := c.f.lightHouse.unlockedGetRemoteList(iputil.Ip2VpnIp(vpnIp))
remoteList.Lock() remoteList.Lock()
defer remoteList.Unlock() defer remoteList.Unlock()
c.f.lightHouse.Unlock() c.f.lightHouse.Unlock()
if toAddr.Addr().Is4() { iVpnIp := iputil.Ip2VpnIp(vpnIp)
remoteList.unlockedPrependV4(vpnIp, netAddrToProtoV4AddrPort(toAddr.Addr(), toAddr.Port())) if v4 := toAddr.IP.To4(); v4 != nil {
remoteList.unlockedPrependV4(iVpnIp, NewIp4AndPort(v4, uint32(toAddr.Port)))
} else { } else {
remoteList.unlockedPrependV6(vpnIp, netAddrToProtoV6AddrPort(toAddr.Addr(), toAddr.Port())) remoteList.unlockedPrependV6(iVpnIp, NewIp6AndPort(toAddr.IP, uint32(toAddr.Port)))
} }
} }
// InjectRelays will push relayVpnIps into the local lighthouse cache for the vpnIp
// This is necessary to inform an initiator of possible relays for communicating with a responder
func (c *Control) InjectRelays(vpnIp netip.Addr, relayVpnIps []netip.Addr) {
c.f.lightHouse.Lock()
remoteList := c.f.lightHouse.unlockedGetRemoteList([]netip.Addr{vpnIp})
remoteList.Lock()
defer remoteList.Unlock()
c.f.lightHouse.Unlock()
remoteList.unlockedSetRelay(vpnIp, relayVpnIps)
}
// GetFromTun will pull a packet off the tun side of nebula
func (c *Control) GetFromTun(block bool) []byte { func (c *Control) GetFromTun(block bool) []byte {
return c.f.inside.(*overlay.TestTun).Get(block) return c.f.inside.(*Tun).Get(block)
} }
// GetFromUDP will pull a udp packet off the udp side of nebula
func (c *Control) GetFromUDP(block bool) *udp.Packet { func (c *Control) GetFromUDP(block bool) *udp.Packet {
return c.f.outside.(*udp.TesterConn).Get(block) return c.f.outside.Get(block)
} }
func (c *Control) GetUDPTxChan() <-chan *udp.Packet { func (c *Control) GetUDPTxChan() <-chan *udp.Packet {
return c.f.outside.(*udp.TesterConn).TxPackets return c.f.outside.TxPackets
} }
func (c *Control) GetTunTxChan() <-chan []byte { func (c *Control) GetTunTxChan() <-chan []byte {
return c.f.inside.(*overlay.TestTun).TxPackets return c.f.inside.(*Tun).txPackets
} }
// InjectUDPPacket will inject a packet into the udp side of nebula
func (c *Control) InjectUDPPacket(p *udp.Packet) { func (c *Control) InjectUDPPacket(p *udp.Packet) {
c.f.outside.(*udp.TesterConn).Send(p) c.f.outside.Send(p)
} }
// InjectTunUDPPacket puts a udp packet on the tun interface. Using UDP here because it's a simpler protocol
func (c *Control) InjectTunUDPPacket(toAddr netip.Addr, toPort uint16, fromAddr netip.Addr, fromPort uint16, data []byte) { func (c *Control) InjectTunUDPPacket(toIp net.IP, toPort uint16, fromPort uint16, data []byte) {
serialize := make([]gopacket.SerializableLayer, 0) ip := layers.IPv4{
var netLayer gopacket.NetworkLayer Version: 4,
if toAddr.Is6() { TTL: 64,
if !fromAddr.Is6() { Protocol: layers.IPProtocolUDP,
panic("Cant send ipv6 to ipv4") SrcIP: c.f.inside.CidrNet().IP,
} DstIP: toIp,
ip := &layers.IPv6{
Version: 6,
NextHeader: layers.IPProtocolUDP,
SrcIP: fromAddr.Unmap().AsSlice(),
DstIP: toAddr.Unmap().AsSlice(),
}
serialize = append(serialize, ip)
netLayer = ip
} else {
if !fromAddr.Is4() {
panic("Cant send ipv4 to ipv6")
}
ip := &layers.IPv4{
Version: 4,
TTL: 64,
Protocol: layers.IPProtocolUDP,
SrcIP: fromAddr.Unmap().AsSlice(),
DstIP: toAddr.Unmap().AsSlice(),
}
serialize = append(serialize, ip)
netLayer = ip
} }
udp := layers.UDP{ udp := layers.UDP{
SrcPort: layers.UDPPort(fromPort), SrcPort: layers.UDPPort(fromPort),
DstPort: layers.UDPPort(toPort), DstPort: layers.UDPPort(toPort),
} }
err := udp.SetNetworkLayerForChecksum(netLayer) err := udp.SetNetworkLayerForChecksum(&ip)
if err != nil { if err != nil {
panic(err) panic(err)
} }
@@ -142,42 +109,24 @@ func (c *Control) InjectTunUDPPacket(toAddr netip.Addr, toPort uint16, fromAddr
ComputeChecksums: true, ComputeChecksums: true,
FixLengths: true, FixLengths: true,
} }
err = gopacket.SerializeLayers(buffer, opt, &ip, &udp, gopacket.Payload(data))
serialize = append(serialize, &udp, gopacket.Payload(data))
err = gopacket.SerializeLayers(buffer, opt, serialize...)
if err != nil { if err != nil {
panic(err) panic(err)
} }
c.f.inside.(*overlay.TestTun).Send(buffer.Bytes()) c.f.inside.(*Tun).Send(buffer.Bytes())
} }
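A hedged sketch of how the tester hooks above are typically combined: push one UDP payload into the tun side of a Control built with the tester tun/udp devices, then read whatever nebula emits on its mock UDP side. The addresses and ports are made up for the example, and a real end-to-end test would also route the emitted packets to a second host.

```go
package example

import (
	"net/netip"

	"github.com/slackhq/nebula"
	"github.com/slackhq/nebula/udp"
)

// injectAndCapture is illustrative only; it uses the newer signatures shown
// above: InjectTunUDPPacket(to, toPort, from, fromPort, payload).
func injectAndCapture(c *nebula.Control) *udp.Packet {
	from := netip.MustParseAddr("10.128.0.1")
	to := netip.MustParseAddr("10.128.0.2")

	c.InjectTunUDPPacket(to, 80, from, 9090, []byte("hello"))

	// With no established tunnel this is typically a handshake packet; with an
	// established tunnel it is the encrypted data packet.
	return c.GetFromUDP(true)
}
```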
func (c *Control) GetVpnAddrs() []netip.Addr { func (c *Control) GetUDPAddr() string {
return c.f.myVpnAddrs return c.f.outside.Addr.String()
} }
func (c *Control) GetUDPAddr() netip.AddrPort { func (c *Control) KillPendingTunnel(vpnIp net.IP) bool {
return c.f.outside.(*udp.TesterConn).Addr hostinfo, ok := c.f.handshakeManager.pendingHostMap.Hosts[iputil.Ip2VpnIp(vpnIp)]
} if !ok {
func (c *Control) KillPendingTunnel(vpnIp netip.Addr) bool {
hostinfo := c.f.handshakeManager.QueryVpnAddr(vpnIp)
if hostinfo == nil {
return false return false
} }
c.f.handshakeManager.DeleteHostInfo(hostinfo) c.f.handshakeManager.pendingHostMap.DeleteHostInfo(hostinfo)
return true return true
} }
func (c *Control) GetHostmap() *HostMap {
return c.f.hostMap
}
func (c *Control) GetCertState() *CertState {
return c.f.pki.getCertState()
}
func (c *Control) ReHandshake(vpnIp netip.Addr) {
c.f.handshakeManager.StartHandshake(vpnIp, nil)
}

dist/arch/nebula.service

@@ -0,0 +1,13 @@
[Unit]
Description=nebula
Wants=basic.target network-online.target
After=basic.target network.target network-online.target
[Service]
SyslogIdentifier=nebula
ExecReload=/bin/kill -HUP $MAINPID
ExecStart=/usr/bin/nebula -config /etc/nebula/config.yml
Restart=always
[Install]
WantedBy=multi-user.target

dist/fedora/nebula.service

@@ -0,0 +1,15 @@
[Unit]
Description=Nebula overlay networking tool
After=basic.target network.target network-online.target
Before=sshd.service
Wants=basic.target network-online.target
[Service]
ExecReload=/bin/kill -HUP $MAINPID
ExecStart=/usr/bin/nebula -config /etc/nebula/config.yml
Restart=always
SyslogIdentifier=nebula
[Install]
WantedBy=multi-user.target
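Both unit files point at /usr/bin/nebula with the config at /etc/nebula/config.yml; aside from the Fedora unit ordering itself before sshd.service, they are functionally equivalent, and either is enabled with the usual systemd workflow (for example systemctl enable --now nebula, a generic systemd step rather than anything this change adds).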


@@ -1,84 +0,0 @@
Prebuilt Binaries License
-------------------------
1. DEFINITIONS. "Software" means the precise contents of the "wintun.dll"
files that are included in the .zip file that contains this document as
downloaded from wintun.net/builds.
2. LICENSE GRANT. WireGuard LLC grants to you a non-exclusive and
non-transferable right to use Software for lawful purposes under certain
obligations and limited rights as set forth in this agreement.
3. RESTRICTIONS. Software is owned and copyrighted by WireGuard LLC. It is
licensed, not sold. Title to Software and all associated intellectual
property rights are retained by WireGuard. You must not:
a. reverse engineer, decompile, disassemble, extract from, or otherwise
modify the Software;
b. modify or create derivative work based upon Software in whole or in
parts, except insofar as only the API interfaces of the "wintun.h" file
distributed alongside the Software (the "Permitted API") are used;
c. remove any proprietary notices, labels, or copyrights from the Software;
d. resell, redistribute, lease, rent, transfer, sublicense, or otherwise
transfer rights of the Software without the prior written consent of
WireGuard LLC, except insofar as the Software is distributed alongside
other software that uses the Software only via the Permitted API;
e. use the name of WireGuard LLC, the WireGuard project, the Wintun
project, or the names of its contributors to endorse or promote products
derived from the Software without specific prior written consent.
4. LIMITED WARRANTY. THE SOFTWARE IS PROVIDED "AS IS" AND WITHOUT WARRANTY OF
ANY KIND. WIREGUARD LLC HEREBY EXCLUDES AND DISCLAIMS ALL IMPLIED OR
STATUTORY WARRANTIES, INCLUDING ANY WARRANTIES OF MERCHANTABILITY, FITNESS
FOR A PARTICULAR PURPOSE, QUALITY, NON-INFRINGEMENT, TITLE, RESULTS,
EFFORTS, OR QUIET ENJOYMENT. THERE IS NO WARRANTY THAT THE PRODUCT WILL BE
ERROR-FREE OR WILL FUNCTION WITHOUT INTERRUPTION. YOU ASSUME THE ENTIRE
RISK FOR THE RESULTS OBTAINED USING THE PRODUCT. TO THE EXTENT THAT
WIREGUARD LLC MAY NOT DISCLAIM ANY WARRANTY AS A MATTER OF APPLICABLE LAW,
THE SCOPE AND DURATION OF SUCH WARRANTY WILL BE THE MINIMUM PERMITTED UNDER
SUCH LAW. ALL EXPRESS OR IMPLIED CONDITIONS, REPRESENTATIONS AND
WARRANTIES, INCLUDING ANY IMPLIED WARRANTY OF MERCHANTABILITY, FITNESS FOR
A PARTICULAR PURPOSE OR NON-INFRINGEMENT ARE DISCLAIMED, EXCEPT TO THE
EXTENT THAT THESE DISCLAIMERS ARE HELD TO BE LEGALLY INVALID.
5. LIMITATION OF LIABILITY. To the extent not prohibited by law, in no event
WireGuard LLC or any third-party-developer will be liable for any lost
revenue, profit or data or for special, indirect, consequential, incidental
or punitive damages, however caused regardless of the theory of liability,
arising out of or related to the use of or inability to use Software, even
if WireGuard LLC has been advised of the possibility of such damages.
Solely you are responsible for determining the appropriateness of using
Software and accept full responsibility for all risks associated with its
exercise of rights under this agreement, including but not limited to the
risks and costs of program errors, compliance with applicable laws, damage
to or loss of data, programs or equipment, and unavailability or
interruption of operations. The foregoing limitations will apply even if
the above stated warranty fails of its essential purpose. You acknowledge,
that it is in the nature of software that software is complex and not
completely free of errors. In no event shall WireGuard LLC or any
third-party-developer be liable to you under any theory for any damages
suffered by you or any user of Software or for any special, incidental,
indirect, consequential or similar damages (including without limitation
damages for loss of business profits, business interruption, loss of
business information or any other pecuniary loss) arising out of the use or
inability to use Software, even if WireGuard LLC has been advised of the
possibility of such damages and regardless of the legal or equitable theory
(contract, tort, or otherwise) upon which the claim is based.
6. TERMINATION. This agreement is effective until terminated. You may
terminate this agreement at any time. This agreement will terminate
immediately without notice from WireGuard LLC if you fail to comply with
the terms and conditions of this agreement. Upon termination, you must
delete Software and all copies of Software and cease all forms of
distribution of Software.
7. SEVERABILITY. If any provision of this agreement is held to be
unenforceable, this agreement will remain in effect with the provision
omitted, unless omission would frustrate the intent of the parties, in
which case this agreement will immediately terminate.
8. RESERVATION OF RIGHTS. All rights not expressly granted in this agreement
are reserved by WireGuard LLC. For example, WireGuard LLC reserves the
right at any time to cease development of Software, to alter distribution
details, features, specifications, capabilities, functions, licensing
terms, release dates, APIs, ABIs, general availability, or other
characteristics of the Software.
