Compare commits


464 Commits

Author SHA1 Message Date
Wade Simmons
b418a081a8 cleanup 2025-07-25 14:57:49 -04:00
Wade Simmons
fd3fa57e79 comments 2025-07-25 14:42:54 -04:00
Wade Simmons
0eb92dcab4 WIP 2025-07-25 14:32:37 -04:00
Wade Simmons
f6b206d96c cleanup 2025-07-25 10:38:52 -04:00
Wade Simmons
31cc3a4169 Merge remote-tracking branch 'origin/master' into fips140 2025-07-24 13:57:12 -04:00
Wade Simmons
6da314aa6b WIP 2025-07-24 13:56:42 -04:00
Wade Simmons
3da3d41fb5 log if fips140 in use 2025-07-24 12:37:33 -04:00
brad-defined
91eff03418 Update slack OSS invite link (#1435)
2025-07-15 16:05:28 -04:00
Nate Brown
52623820c2 Drop inactive tunnels (#1427) 2025-07-03 09:58:37 -05:00
Nate Brown
c2420642a0 Darwin udp fix (#1428) 2025-07-02 15:50:22 -05:00
brad-defined
b3a1f7b0a3 Disable UDP receive error returns due to ICMP messages on Windows. (#1412) (#1415) 2025-07-02 11:37:41 -04:00
brad-defined
94142aded5 Fix relay migration panic by covering every possible relay state (#1414) 2025-07-02 08:48:02 -04:00
brad-defined
b158eb0c4c Use a list for relay IPs instead of a map (#1423)
* Use a list for relay IPs instead of a map

* linter
2025-07-02 08:47:05 -04:00
dependabot[bot]
e4b7dbcfb0 Bump dario.cat/mergo from 1.0.1 to 1.0.2 (#1408)
Bumps [dario.cat/mergo](https://github.com/imdario/mergo) from 1.0.1 to 1.0.2.
- [Release notes](https://github.com/imdario/mergo/releases)
- [Commits](https://github.com/imdario/mergo/compare/v1.0.1...v1.0.2)

---
updated-dependencies:
- dependency-name: dario.cat/mergo
  dependency-version: 1.0.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-01 23:30:40 -05:00
dependabot[bot]
882edf11d7 Bump github.com/vishvananda/netlink from 1.3.0 to 1.3.1 (#1407)
Bumps [github.com/vishvananda/netlink](https://github.com/vishvananda/netlink) from 1.3.0 to 1.3.1.
- [Release notes](https://github.com/vishvananda/netlink/releases)
- [Commits](https://github.com/vishvananda/netlink/compare/v1.3.0...v1.3.1)

---
updated-dependencies:
- dependency-name: github.com/vishvananda/netlink
  dependency-version: 1.3.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-01 23:29:15 -05:00
dependabot[bot]
d34c2b8e06 Bump golangci/golangci-lint-action from 7 to 8 (#1400)
* Bump golangci/golangci-lint-action from 7 to 8

Bumps [golangci/golangci-lint-action](https://github.com/golangci/golangci-lint-action) from 7 to 8.
- [Release notes](https://github.com/golangci/golangci-lint-action/releases)
- [Commits](https://github.com/golangci/golangci-lint-action/compare/v7...v8)

---
updated-dependencies:
- dependency-name: golangci/golangci-lint-action
  dependency-version: '8'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

* bump golangci-lint version

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Wade Simmons <wsimmons@slack-corp.com>
2025-07-01 23:25:24 -05:00
brad-defined
442a52879b Fix off by one error in IPv6 packet parser (#1419)
2025-06-11 15:15:15 -04:00
Ian VanSchooten
061e733007 Fix slack invitation link in issue template (#1406)
2025-05-13 12:00:22 -04:00
Andy George
92a9248083 Minor fixes to Readme shell snippets (#1389)
2025-05-02 16:32:00 -04:00
John Maguire
83ff2461e2 Mention CA expiration in the README (#1378)
2025-04-28 13:36:06 -04:00
maggie44
8536c57645 Allow configuration of logger and build version in gvisor service library (#1239)
2025-04-21 13:45:59 -04:00
John Maguire
15b5a43300 Update issue and PR templates (#1376) 2025-04-21 13:45:48 -04:00
Andriyanov Nikita
e5ce8966d6 add netlink options (#1326)
* add netlink options

* force use buffer

* fix namings and add config examples

* fix linter
2025-04-21 13:44:33 -04:00
John Maguire
2dc30fc300 Support 32-bit machines in crypto test (#1394) 2025-04-21 13:28:43 -04:00
Wade Simmons
b8ea55eb90 optimize usage of bart (#1395)
Use `bart.Lite` and `.Contains` as suggested by the bart maintainer:

- 9455952eed (commitcomment-155362580)
2025-04-18 12:37:20 -04:00
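
The bart change above swaps a payload-carrying routing table for a presence-only prefix set, so membership checks avoid allocating or returning values. A minimal sketch of that usage, assuming `bart.Lite` in github.com/gaissmai/bart exposes `Insert(netip.Prefix)` and `Contains(netip.Addr)` and that its zero value is ready to use:

```go
package main

import (
	"fmt"
	"net/netip"

	"github.com/gaissmai/bart"
)

func main() {
	// Presence-only table: no per-prefix payload, just membership.
	var allowed bart.Lite
	allowed.Insert(netip.MustParsePrefix("10.1.0.0/16"))
	allowed.Insert(netip.MustParsePrefix("fd00::/8"))

	// Contains reports whether the address is covered by any inserted prefix.
	fmt.Println(allowed.Contains(netip.MustParseAddr("10.1.2.3")))  // true
	fmt.Println(allowed.Contains(netip.MustParseAddr("192.0.2.1"))) // false
}
```
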
dependabot[bot]
4eb056af9d Bump github.com/prometheus/client_golang from 1.21.1 to 1.22.0 (#1393)
Bumps [github.com/prometheus/client_golang](https://github.com/prometheus/client_golang) from 1.21.1 to 1.22.0.
- [Release notes](https://github.com/prometheus/client_golang/releases)
- [Changelog](https://github.com/prometheus/client_golang/blob/main/CHANGELOG.md)
- [Commits](https://github.com/prometheus/client_golang/compare/v1.21.1...v1.22.0)

---
updated-dependencies:
- dependency-name: github.com/prometheus/client_golang
  dependency-version: 1.22.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-17 06:43:55 -04:00
dependabot[bot]
e49f279004 Bump golang.org/x/net in the golang-x-dependencies group (#1392)
Bumps the golang-x-dependencies group with 1 update: [golang.org/x/net](https://github.com/golang/net).


Updates `golang.org/x/net` from 0.38.0 to 0.39.0
- [Commits](https://github.com/golang/net/compare/v0.38.0...v0.39.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-version: 0.39.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-17 06:41:53 -04:00
dependabot[bot]
459cb38a6d Bump github.com/gaissmai/bart from 0.20.1 to 0.20.4 (#1391)
* Bump github.com/gaissmai/bart from 0.20.1 to 0.20.4

Bumps [github.com/gaissmai/bart](https://github.com/gaissmai/bart) from 0.20.1 to 0.20.4.
- [Release notes](https://github.com/gaissmai/bart/releases)
- [Commits](https://github.com/gaissmai/bart/compare/v0.20.1...v0.20.4)

---
updated-dependencies:
- dependency-name: github.com/gaissmai/bart
  dependency-version: 0.20.4
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* set back to go 1.23.0

We were only on 1.23.6 because of bart in the first place.

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Wade Simmons <wsimmons@slack-corp.com>
2025-04-16 11:46:46 -04:00
dependabot[bot]
18279ed17b Bump github.com/miekg/dns from 1.1.64 to 1.1.65 (#1384)
Bumps [github.com/miekg/dns](https://github.com/miekg/dns) from 1.1.64 to 1.1.65.
- [Changelog](https://github.com/miekg/dns/blob/master/Makefile.release)
- [Commits](https://github.com/miekg/dns/compare/v1.1.64...v1.1.65)

---
updated-dependencies:
- dependency-name: github.com/miekg/dns
  dependency-version: 1.1.65
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-08 11:40:34 -04:00
dependabot[bot]
c7fb3ad9cf Bump the golang-x-dependencies group with 4 updates (#1382)
Bumps the golang-x-dependencies group with 4 updates: [golang.org/x/crypto](https://github.com/golang/crypto), [golang.org/x/sync](https://github.com/golang/sync), [golang.org/x/sys](https://github.com/golang/sys) and [golang.org/x/term](https://github.com/golang/term).


Updates `golang.org/x/crypto` from 0.36.0 to 0.37.0
- [Commits](https://github.com/golang/crypto/compare/v0.36.0...v0.37.0)

Updates `golang.org/x/sync` from 0.12.0 to 0.13.0
- [Commits](https://github.com/golang/sync/compare/v0.12.0...v0.13.0)

Updates `golang.org/x/sys` from 0.31.0 to 0.32.0
- [Commits](https://github.com/golang/sys/compare/v0.31.0...v0.32.0)

Updates `golang.org/x/term` from 0.30.0 to 0.31.0
- [Commits](https://github.com/golang/term/compare/v0.30.0...v0.31.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-version: 0.37.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/sync
  dependency-version: 0.13.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/sys
  dependency-version: 0.32.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/term
  dependency-version: 0.31.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-08 11:39:31 -04:00
John Maguire
d4a7df3083 Rename pki.default_version to pki.initiating_version (#1381)
2025-04-07 18:08:29 -04:00
Zeroday BYTE
e83a1c6c84 Update config.go (#1353)
2025-04-03 13:11:20 -05:00
Wade Simmons
f5d096dd2b move to golang.org/x/term (#1372)
The `golang.org/x/crypto/ssh/terminal` was deprecated and moved to
`golang.org/x/term`. We already use the new package in
`cmd/nebula-cert`, so fix our remaining reference here.

See:

- https://github.com/golang/go/issues/31044
2025-04-02 09:11:34 -04:00
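
For reference, `golang.org/x/term` reads a passphrase without echo roughly like this; a minimal sketch, not the exact call site in the repo:

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/term"
)

func main() {
	fmt.Print("Enter passphrase: ")
	// ReadPassword disables echo while reading from the terminal fd.
	pw, err := term.ReadPassword(int(os.Stdin.Fd()))
	fmt.Println()
	if err != nil {
		fmt.Fprintln(os.Stderr, "read failed:", err)
		os.Exit(1)
	}
	fmt.Printf("read %d bytes\n", len(pw))
}
```
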
dependabot[bot]
e2d6f4e444 Bump github.com/miekg/dns from 1.1.63 to 1.1.64 (#1363)
2025-04-01 16:28:27 -05:00
dependabot[bot]
d99fd60e06 Bump Apple-Actions/import-codesign-certs from 3 to 5 (#1364) 2025-04-01 16:26:23 -05:00
dependabot[bot]
e4bae15825 Bump google.golang.org/protobuf in the protobuf-dependencies group (#1365) 2025-04-01 16:23:35 -05:00
dependabot[bot]
58ead4116f Bump github.com/gaissmai/bart from 0.18.1 to 0.20.1 (#1369) 2025-04-01 16:10:20 -05:00
John Maguire
e136d1d47a Update example config with default_local_cidr_any changes (#1373) 2025-04-01 16:08:03 -05:00
dependabot[bot]
d2adebf26d Bump golangci/golangci-lint-action from 6 to 7 (#1361)
* Bump golangci/golangci-lint-action from 6 to 7

Bumps [golangci/golangci-lint-action](https://github.com/golangci/golangci-lint-action) from 6 to 7.
- [Release notes](https://github.com/golangci/golangci-lint-action/releases)
- [Commits](https://github.com/golangci/golangci-lint-action/compare/v6...v7)

---
updated-dependencies:
- dependency-name: golangci/golangci-lint-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

* use latest golangci-lint

* pin to v2.0

* golangci-lint migrate

* make the tests happy

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Wade Simmons <wsimmons@slack-corp.com>
2025-04-01 13:24:19 -04:00
Wade Simmons
36bc9dd261 fix parseUnsafeRoutes for yaml.v3 (#1371)
We switched to yaml.v3 with #1148, but missed this spot that was still
casting into `map[any]any` when yaml.v3 makes it `map[string]any`. Also
clean up a few more `interface{}` that were added as we changed them all
to `any` with #1148.
2025-04-01 09:49:26 -04:00
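
The shape of the bug: yaml.v2 decoded untyped maps as `map[any]any`, while yaml.v3 produces `map[string]any`, so a type assertion written against the old shape fails. A small illustrative sketch of the yaml.v3 behavior (the keys mirror the config format but are only an example):

```go
package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

func main() {
	data := []byte("unsafe_routes:\n  - route: 192.168.86.0/24\n    via: 10.1.0.5\n")

	var raw map[string]any
	if err := yaml.Unmarshal(data, &raw); err != nil {
		panic(err)
	}

	// With yaml.v3, nested maps are map[string]any, not map[any]any.
	routes := raw["unsafe_routes"].([]any)
	first := routes[0].(map[string]any) // would have been map[any]any under yaml.v2
	fmt.Println(first["route"], first["via"])
}
```
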
Wade Simmons
879852c32a upgrade to yaml.v3 (#1148)
* upgrade to yaml.v3

The main nice fix here is that maps unmarshal into `map[string]any`
instead of `map[any]any`, so it cleans things up a bit.

* add config.AsBool

Since yaml.v3 doesn't automatically convert yes to bool now, for
backwards compat

* use type aliases for m

* more cleanup

* more cleanup

* more cleanup

* go mod cleanup
2025-03-31 16:08:34 -04:00
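
On the `config.AsBool` point: YAML 1.1 (yaml.v2) treated `yes`/`no`/`on`/`off` as booleans, while yaml.v3 follows YAML 1.2 and leaves them as strings, so existing configs need an explicit coercion. A hedged sketch of such a helper; the real `config.AsBool` may differ:

```go
package main

import (
	"fmt"
	"strings"
)

// asBool coerces a decoded YAML value to a bool, accepting the YAML 1.1
// spellings ("yes"/"no", "on"/"off") that yaml.v3 no longer converts.
func asBool(v any) (bool, error) {
	switch t := v.(type) {
	case bool:
		return t, nil
	case string:
		switch strings.ToLower(t) {
		case "y", "yes", "on", "true", "1":
			return true, nil
		case "n", "no", "off", "false", "0":
			return false, nil
		}
	}
	return false, fmt.Errorf("cannot interpret %v as bool", v)
}

func main() {
	for _, v := range []any{true, "yes", "off", "maybe"} {
		b, err := asBool(v)
		fmt.Println(v, "->", b, err)
	}
}
```
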
dependabot[bot]
75faa5f2e5 Bump golang.org/x/net in the golang-x-dependencies group (#1370)
Bumps the golang-x-dependencies group with 1 update: [golang.org/x/net](https://github.com/golang/net).


Updates `golang.org/x/net` from 0.37.0 to 0.38.0
- [Commits](https://github.com/golang/net/compare/v0.37.0...v0.38.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-31 16:05:07 -04:00
Wade Simmons
4485c47641 WIP support new Go fips140 module
This will replace boring crypto at some point.

We should modify our protocol a bit and instead change to
NewGCMWithRandomNonce.
2025-03-31 12:08:58 -04:00
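
For context on the protocol note: Go 1.24's `crypto/cipher.NewGCMWithRandomNonce` returns an AEAD that generates a random 96-bit nonce inside Seal and prepends it to the ciphertext, so callers pass an empty nonce to Seal and Open. A minimal sketch of that API, not Nebula's wire format:

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"fmt"
)

func main() {
	key := make([]byte, 32) // demo key; use real random key material in practice
	block, err := aes.NewCipher(key)
	if err != nil {
		panic(err)
	}

	// The AEAD picks its own random nonce and prepends it to the ciphertext,
	// so the nonce argument to Seal/Open is empty.
	aead, err := cipher.NewGCMWithRandomNonce(block)
	if err != nil {
		panic(err)
	}

	ct := aead.Seal(nil, nil, []byte("hello"), nil)
	pt, err := aead.Open(nil, nil, ct, nil)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s (overhead %d bytes)\n", pt, aead.Overhead())
}
```
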
Caleb Jasik
4444ed166a Add certVersion field to logs when logging the cert name in handshakes (#1359)
2025-03-25 16:08:36 -05:00
dioss-Machiel
f86953ca56 Implement ECMP for unsafe_routes (#1332)
2025-03-24 17:15:59 -05:00
Wade Simmons
3de36c99b6 build with go1.24 (#1338)
This doesn't change our go.mod, which still only requires go1.22 as a minimum. It only changes our builds to use go1.24 so we have the latest improvements.
2025-03-14 13:49:27 -04:00
Caleb Jasik
50473bd2a8 Update example config to listen on :: by default (#1351)
2025-03-12 22:53:16 -05:00
jampe
1d3c85338c add so_mark sockopt support (#1331)
2025-03-12 09:35:33 -05:00
Aleksandr Zykov
2fb018ced8 Fixed homebrew formula path (#1219)
2025-03-11 22:58:52 -05:00
Caleb Jasik
088af8edb2 Enable running testifylint in CI (#1350)
2025-03-10 17:38:14 -05:00
Caleb Jasik
612637f529 Fix testifylint lint errors (#1321)
* Fix bool-compare

* Fix empty

* Fix encoded-compare

* Fix error-is-as

* Fix error-nil

* Fix expected-actual

* Fix len
2025-03-10 10:18:34 -04:00
Wade Simmons
94e89a1045 smoke-tests: guess the lighthouse container IP better (#1347)
Currently we just assume you are using the default Docker bridge network
config of `172.17.0.0/24`. This change works to try to detect if you are
using a different config, but still only works if you are using a `/24`
and aren't running any other containers. A future PR could make this
better by launching the lighthouse container first and then fetching
what the IP address is before continuing with the configuration.
2025-03-10 10:17:54 -04:00
Caleb Jasik
f7540ad355 Remove commented out metadata.go (#1320)
2025-03-07 14:37:07 -06:00
dependabot[bot]
096179a8c9 Bump github.com/miekg/dns from 1.1.62 to 1.1.63 (#1346)
Bumps [github.com/miekg/dns](https://github.com/miekg/dns) from 1.1.62 to 1.1.63.
- [Changelog](https://github.com/miekg/dns/blob/master/Makefile.release)
- [Commits](https://github.com/miekg/dns/compare/v1.1.62...v1.1.63)

---
updated-dependencies:
- dependency-name: github.com/miekg/dns
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-07 12:05:36 -05:00
Nate Brown
f8734ffa43 Improve logging when handshaking with an invalid cert (#1345) 2025-03-07 10:45:31 -06:00
dependabot[bot]
c58e223b3d Bump github.com/prometheus/client_golang from 1.20.4 to 1.21.1 (#1340)
Bumps [github.com/prometheus/client_golang](https://github.com/prometheus/client_golang) from 1.20.4 to 1.21.1.
- [Release notes](https://github.com/prometheus/client_golang/releases)
- [Changelog](https://github.com/prometheus/client_golang/blob/main/CHANGELOG.md)
- [Commits](https://github.com/prometheus/client_golang/compare/v1.20.4...v1.21.1)

---
updated-dependencies:
- dependency-name: github.com/prometheus/client_golang
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-07 09:44:30 -05:00
Wade Simmons
c46ef43590 smoke-test-extra: cleanup ncat references (#1343)
* smoke-extra: cleanup ncat references

We can't run the `ncat` tests unless we can make sure to install it to
all of the vagrant boxes.

* more ncat

* more cleanup
2025-03-06 15:44:41 -05:00
dependabot[bot]
775c6bc83d Bump google.golang.org/protobuf (#1344)
Bumps the protobuf-dependencies group with 1 update in the / directory: google.golang.org/protobuf.


Updates `google.golang.org/protobuf` from 1.35.1 to 1.36.5

---
updated-dependencies:
- dependency-name: google.golang.org/protobuf
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: protobuf-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-06 15:43:55 -05:00
dependabot[bot]
13799f425d Bump the golang-x-dependencies group with 5 updates (#1339)
Bumps the golang-x-dependencies group with 5 updates:

| Package | From | To |
| --- | --- | --- |
| [golang.org/x/crypto](https://github.com/golang/crypto) | `0.32.0` | `0.36.0` |
| [golang.org/x/net](https://github.com/golang/net) | `0.34.0` | `0.37.0` |
| [golang.org/x/sync](https://github.com/golang/sync) | `0.10.0` | `0.12.0` |
| [golang.org/x/sys](https://github.com/golang/sys) | `0.29.0` | `0.31.0` |
| [golang.org/x/term](https://github.com/golang/term) | `0.28.0` | `0.30.0` |


Updates `golang.org/x/crypto` from 0.32.0 to 0.36.0
- [Commits](https://github.com/golang/crypto/compare/v0.32.0...v0.36.0)

Updates `golang.org/x/net` from 0.34.0 to 0.37.0
- [Commits](https://github.com/golang/net/compare/v0.34.0...v0.37.0)

Updates `golang.org/x/sync` from 0.10.0 to 0.12.0
- [Commits](https://github.com/golang/sync/compare/v0.10.0...v0.12.0)

Updates `golang.org/x/sys` from 0.29.0 to 0.31.0
- [Commits](https://github.com/golang/sys/compare/v0.29.0...v0.31.0)

Updates `golang.org/x/term` from 0.28.0 to 0.30.0
- [Commits](https://github.com/golang/term/compare/v0.28.0...v0.30.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/net
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/sync
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/sys
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/term
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-06 14:30:20 -05:00
dependabot[bot]
8a090e59d7 Bump github.com/gaissmai/bart from 0.13.0 to 0.18.1 (#1341)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Nate Brown <nbrown.us@gmail.com>
2025-03-06 13:26:29 -06:00
Wade Simmons
9feda811a6 bump go.mod to go1.23 (#1342)
* bump go.mod to go1.23

* 1.23.6
2025-03-06 13:21:49 -05:00
dependabot[bot]
750e4a81bf Bump the golang-x-dependencies group across 1 directory with 5 updates (#1303)
Bumps the golang-x-dependencies group with 3 updates in the / directory: [golang.org/x/crypto](https://github.com/golang/crypto), [golang.org/x/net](https://github.com/golang/net) and [golang.org/x/sync](https://github.com/golang/sync).


Updates `golang.org/x/crypto` from 0.28.0 to 0.32.0
- [Commits](https://github.com/golang/crypto/compare/v0.28.0...v0.32.0)

Updates `golang.org/x/net` from 0.30.0 to 0.34.0
- [Commits](https://github.com/golang/net/compare/v0.30.0...v0.34.0)

Updates `golang.org/x/sync` from 0.8.0 to 0.10.0
- [Commits](https://github.com/golang/sync/compare/v0.8.0...v0.10.0)

Updates `golang.org/x/sys` from 0.26.0 to 0.29.0
- [Commits](https://github.com/golang/sys/compare/v0.26.0...v0.29.0)

Updates `golang.org/x/term` from 0.25.0 to 0.28.0
- [Commits](https://github.com/golang/term/compare/v0.25.0...v0.28.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/net
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/sync
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/sys
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/term
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-06 12:57:05 -05:00
Wade Simmons
32d3a6e091 build with go1.23 (#1198)
* make boringcrypto: add checklinkname flag for go1.23

Starting with go1.23, we need to set -checklinkname=0 when building for
boringcrypto because we need to use go:linkname to access `newGCMTLS`.

Note that this does break builds when using a go version less than
go1.23.0. We can probably assume that someone using this Makefile and
manually building is using the latest release of Go though.

See:

- https://go.dev/doc/go1.23#linker

* build with go1.23

This doesn't change our go.mod, which still only requires go1.22 as
a minimum, only changes our builds to use go1.23 so we have the latest
improvements.

* fix `make test-boringcrypto` as well

* also fix boringcrypto e2e test
2025-03-06 12:54:20 -05:00
Wade Simmons
351dbd6059 smoke-extra: support Ubuntu 24.04 (#1311)
Ubuntu 24.04 doesn't include vagrant anymore, so add the hashicorp
source
2025-03-06 12:29:38 -05:00
Nate Brown
d97ed57a19 V2 certificate format (#1216)
Co-authored-by: Nate Brown <nbrown.us@gmail.com>
Co-authored-by: Jack Doan <jackdoan@rivian.com>
Co-authored-by: brad-defined <77982333+brad-defined@users.noreply.github.com>
Co-authored-by: Jack Doan <me@jackdoan.com>
2025-03-06 11:28:26 -06:00
Ian VanSchooten
2b427a7e89 Update slack invitation link (#1308)
2025-01-13 13:35:53 -05:00
Nate Brown
3e6c75573f Fix static host map wrong responder situations, correct logging (#1259) 2024-10-23 14:28:02 -05:00
dependabot[bot]
3f6a7cb250 Bump google.golang.org/protobuf in the protobuf-dependencies group (#1250)
Bumps the protobuf-dependencies group with 1 update: google.golang.org/protobuf.


Updates `google.golang.org/protobuf` from 1.34.2 to 1.35.1

---
updated-dependencies:
- dependency-name: google.golang.org/protobuf
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: protobuf-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-11 13:05:58 -04:00
dependabot[bot]
37415d57d0 Bump github.com/prometheus/client_golang from 1.19.1 to 1.20.4 (#1228)
Bumps [github.com/prometheus/client_golang](https://github.com/prometheus/client_golang) from 1.19.1 to 1.20.4.
- [Release notes](https://github.com/prometheus/client_golang/releases)
- [Changelog](https://github.com/prometheus/client_golang/blob/main/CHANGELOG.md)
- [Commits](https://github.com/prometheus/client_golang/compare/v1.19.1...v1.20.4)

---
updated-dependencies:
- dependency-name: github.com/prometheus/client_golang
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-11 12:50:31 -04:00
dependabot[bot]
d3bf09ef8e Bump github.com/gaissmai/bart from 0.11.1 to 0.13.0 (#1231)
Bumps [github.com/gaissmai/bart](https://github.com/gaissmai/bart) from 0.11.1 to 0.13.0.
- [Release notes](https://github.com/gaissmai/bart/releases)
- [Commits](https://github.com/gaissmai/bart/compare/v0.11.1...v0.13.0)

---
updated-dependencies:
- dependency-name: github.com/gaissmai/bart
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-11 11:38:40 -04:00
dependabot[bot]
999418d0e9 Bump github.com/miekg/dns from 1.1.61 to 1.1.62 (#1201)
Bumps [github.com/miekg/dns](https://github.com/miekg/dns) from 1.1.61 to 1.1.62.
- [Changelog](https://github.com/miekg/dns/blob/master/Makefile.release)
- [Commits](https://github.com/miekg/dns/compare/v1.1.61...v1.1.62)

---
updated-dependencies:
- dependency-name: github.com/miekg/dns
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-11 11:36:43 -04:00
dependabot[bot]
97dd8c53c9 Bump github.com/vishvananda/netlink from 1.2.1-beta.2 to 1.3.0 (#1211)
Bumps [github.com/vishvananda/netlink](https://github.com/vishvananda/netlink) from 1.2.1-beta.2 to 1.3.0.
- [Release notes](https://github.com/vishvananda/netlink/releases)
- [Commits](https://github.com/vishvananda/netlink/compare/v1.2.1-beta.2...v1.3.0)

---
updated-dependencies:
- dependency-name: github.com/vishvananda/netlink
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-11 11:35:43 -04:00
dependabot[bot]
9c175b4faf Bump the golang-x-dependencies group across 1 directory with 4 updates (#1237)
Bumps the golang-x-dependencies group with 2 updates in the / directory: [golang.org/x/crypto](https://github.com/golang/crypto) and [golang.org/x/net](https://github.com/golang/net).


Updates `golang.org/x/crypto` from 0.26.0 to 0.28.0
- [Commits](https://github.com/golang/crypto/compare/v0.26.0...v0.28.0)

Updates `golang.org/x/net` from 0.28.0 to 0.30.0
- [Commits](https://github.com/golang/net/compare/v0.28.0...v0.30.0)

Updates `golang.org/x/sys` from 0.24.0 to 0.26.0
- [Commits](https://github.com/golang/sys/compare/v0.24.0...v0.26.0)

Updates `golang.org/x/term` from 0.23.0 to 0.25.0
- [Commits](https://github.com/golang/term/compare/v0.23.0...v0.25.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/net
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/sys
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/term
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-11 09:01:42 -04:00
Nate Brown
08ac65362e Cert interface (#1212) 2024-10-10 18:00:22 -05:00
dependabot[bot]
16eaae306a Bump dario.cat/mergo from 1.0.0 to 1.0.1 (#1200)
Bumps [dario.cat/mergo](https://github.com/imdario/mergo) from 1.0.0 to 1.0.1.
- [Release notes](https://github.com/imdario/mergo/releases)
- [Commits](https://github.com/imdario/mergo/compare/v1.0.0...v1.0.1)

---
updated-dependencies:
- dependency-name: dario.cat/mergo
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-09 17:53:26 -04:00
Jack Doan
35603d1c39 add PKCS11 support (#1153)
* add PKCS11 support

* add pkcs11 build option to the makefile, add a stub pkclient to avoid forcing CGO onto people

* don't print the pkcs11 option on nebula-cert keygen if not compiled in

* remove linux-arm64-pkcs11 from the all target to fix CI

* correctly serialize ec keys

* nebula-cert: support PKCS#11 for sign and ca

* fix gofmt lint

* clean up some logic with regard to closing sessions

* pkclient: handle empty correctly for TPM2

* Update Makefile and Actions

---------

Co-authored-by: Morgan Jones <me@numin.it>
Co-authored-by: John Maguire <contact@johnmaguire.me>
2024-09-09 17:51:58 -04:00
Wade Simmons
ab81b62ea0 v1.9.4 (#1210)
Update CHANGELOG for Nebula v1.9.4
2024-09-09 14:11:44 -04:00
dependabot[bot]
45bbad2f21 Bump the golang-x-dependencies group with 4 updates (#1195)
Bumps the golang-x-dependencies group with 4 updates: [golang.org/x/crypto](https://github.com/golang/crypto), [golang.org/x/net](https://github.com/golang/net), [golang.org/x/sys](https://github.com/golang/sys) and [golang.org/x/term](https://github.com/golang/term).


Updates `golang.org/x/crypto` from 0.25.0 to 0.26.0
- [Commits](https://github.com/golang/crypto/compare/v0.25.0...v0.26.0)

Updates `golang.org/x/net` from 0.27.0 to 0.28.0
- [Commits](https://github.com/golang/net/compare/v0.27.0...v0.28.0)

Updates `golang.org/x/sys` from 0.23.0 to 0.24.0
- [Commits](https://github.com/golang/sys/compare/v0.23.0...v0.24.0)

Updates `golang.org/x/term` from 0.22.0 to 0.23.0
- [Commits](https://github.com/golang/term/compare/v0.22.0...v0.23.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/net
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/sys
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/term
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-03 16:47:36 -04:00
Jack Doan
3dc56e1184 Support UDP dialling with gvisor (#1181) 2024-08-26 12:38:32 -05:00
Wade Simmons
0736cfa562 udp: fix endianness for port (#1194)
If the host OS is already big endian, we were swapping bytes when we
shouldn't have. Use the Go helper to make sure we do the endianness
correctly

Fixes: #1189
2024-08-14 12:53:00 -04:00
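
The underlying issue: a sockaddr stores the port in network byte order, and hand-rolled byte swapping double-swaps on big-endian hosts. Reading the raw bytes with `encoding/binary` is correct on every platform. A small illustrative sketch, not the repo's actual udp code:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

func main() {
	// A port as it appears in a sockaddr: two bytes in network (big-endian) order.
	raw := [2]byte{0x12, 0x34} // port 4660

	// Correct on both little- and big-endian hosts: interpret the bytes directly.
	port := binary.BigEndian.Uint16(raw[:])
	fmt.Println(port) // 4660

	// And the reverse, when filling a sockaddr before sending.
	var out [2]byte
	binary.BigEndian.PutUint16(out[:], 4660)
	fmt.Printf("%#v\n", out)
}
```
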
Jack Doan
248cf194cd fix integer wraparound in the calculation of handshake timeouts on 32-bit targets (#1185)
Fixes: #1169
2024-08-13 09:25:18 -04:00
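
The class of bug here: on 32-bit targets `int` is 32 bits, so millisecond-scale timeout arithmetic can overflow before it ever becomes a `time.Duration`. Doing the math in `time.Duration` (an int64) avoids the wraparound. A small illustrative sketch, not the repo's actual timeout code:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const attempts = 20
	intervalMs := 150_000 // 150s expressed in milliseconds

	// Risky on 32-bit: the int multiplication wraps around before the
	// conversion to time.Duration ever happens.
	bad := time.Duration(attempts * intervalMs * int(time.Millisecond))

	// Safe everywhere: promote to time.Duration (int64) first.
	good := time.Duration(attempts) * time.Duration(intervalMs) * time.Millisecond

	fmt.Println(bad, good)
}
```
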
dependabot[bot]
8a6a0f0636 Bump the golang-x-dependencies group with 2 updates (#1190)
Bumps the golang-x-dependencies group with 2 updates: [golang.org/x/sync](https://github.com/golang/sync) and [golang.org/x/sys](https://github.com/golang/sys).


Updates `golang.org/x/sync` from 0.7.0 to 0.8.0
- [Commits](https://github.com/golang/sync/compare/v0.7.0...v0.8.0)

Updates `golang.org/x/sys` from 0.22.0 to 0.23.0
- [Commits](https://github.com/golang/sys/compare/v0.22.0...v0.23.0)

---
updated-dependencies:
- dependency-name: golang.org/x/sync
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/sys
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-08-07 11:58:46 -04:00
Wade Simmons
f5f6c269ac fix rare panic when local index collision happens (#1191)
A local index collision happens when two tunnels attempt to use the same
random int32 index ID. This is a rare chance, and we have code to deal
with it, but we have a panic because we return the wrong thing in this
case. This change should fix the panic.
2024-08-07 11:53:32 -04:00
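
For context, a local index collision means two tunnels draw the same random 32-bit index, and the retry path has to return the right value. A hedged sketch of the general allocate-with-retry pattern; the names are illustrative, not the repo's hostmap code:

```go
package main

import (
	"crypto/rand"
	"encoding/binary"
	"errors"
	"fmt"
)

// pickLocalIndex draws random 32-bit indexes until it finds one that is not
// already claimed, giving up after a bounded number of attempts.
func pickLocalIndex(used map[uint32]struct{}) (uint32, error) {
	var b [4]byte
	for i := 0; i < 32; i++ {
		if _, err := rand.Read(b[:]); err != nil {
			return 0, err
		}
		idx := binary.BigEndian.Uint32(b[:])
		// Treat 0 as reserved in this sketch and retry on collision instead
		// of returning a stale or zero value.
		if _, taken := used[idx]; !taken && idx != 0 {
			used[idx] = struct{}{}
			return idx, nil
		}
	}
	return 0, errors.New("could not allocate a unique local index")
}

func main() {
	used := map[uint32]struct{}{}
	idx, err := pickLocalIndex(used)
	fmt.Println(idx, err)
}
```
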
brad-defined
9a63fa0a07 Make some Nebula state programmatically available via control object (#1188) 2024-08-01 13:40:05 -04:00
Nate Brown
e264a0ff88 Switch most everything to netip in prep for ipv6 in the overlay (#1173) 2024-07-31 10:18:56 -05:00
dependabot[bot]
00458302ca Bump the golang-x-dependencies group with 4 updates (#1174)
Bumps the golang-x-dependencies group with 4 updates: [golang.org/x/crypto](https://github.com/golang/crypto), [golang.org/x/net](https://github.com/golang/net), [golang.org/x/sys](https://github.com/golang/sys) and [golang.org/x/term](https://github.com/golang/term).


Updates `golang.org/x/crypto` from 0.24.0 to 0.25.0
- [Commits](https://github.com/golang/crypto/compare/v0.24.0...v0.25.0)

Updates `golang.org/x/net` from 0.26.0 to 0.27.0
- [Commits](https://github.com/golang/net/compare/v0.26.0...v0.27.0)

Updates `golang.org/x/sys` from 0.21.0 to 0.22.0
- [Commits](https://github.com/golang/sys/compare/v0.21.0...v0.22.0)

Updates `golang.org/x/term` from 0.21.0 to 0.22.0
- [Commits](https://github.com/golang/term/compare/v0.21.0...v0.22.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/net
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/sys
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/term
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-07-29 11:42:33 -04:00
Wade Simmons
e6009b8491 github actions: use macos-latest (#1171)
macos-11 was deprecated and removed:

> The macos-11 label has been deprecated and will no longer be available after 28 June 2024.

We can just use macos-latest instead.
2024-07-02 11:50:51 -04:00
dependabot[bot]
b9aace1e58 Bump github.com/prometheus/client_golang from 1.19.0 to 1.19.1 (#1147)
Bumps [github.com/prometheus/client_golang](https://github.com/prometheus/client_golang) from 1.19.0 to 1.19.1.
- [Release notes](https://github.com/prometheus/client_golang/releases)
- [Changelog](https://github.com/prometheus/client_golang/blob/main/CHANGELOG.md)
- [Commits](https://github.com/prometheus/client_golang/compare/v1.19.0...v1.19.1)

---
updated-dependencies:
- dependency-name: github.com/prometheus/client_golang
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-06-24 14:54:51 -04:00
dependabot[bot]
a76723eaf5 Bump Apple-Actions/import-codesign-certs from 2 to 3 (#1146)
Bumps [Apple-Actions/import-codesign-certs](https://github.com/apple-actions/import-codesign-certs) from 2 to 3.
- [Release notes](https://github.com/apple-actions/import-codesign-certs/releases)
- [Commits](https://github.com/apple-actions/import-codesign-certs/compare/v2...v3)

---
updated-dependencies:
- dependency-name: Apple-Actions/import-codesign-certs
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-06-24 14:54:05 -04:00
Caleb Jasik
8109cf2170 Add puncuation to doc comment (#1164)
* Add puncuation to doc comment

* Fix list formatting inside `EncryptDanger` doc comment
2024-06-24 14:50:17 -04:00
Wade Simmons
97e9834f82 cleanup SK_MEMINFO vars (#1162)
We had to manually define these types before, but the latest release of
`golang.org/x/sys` adds these definitions:

- 6dfb94eaa3

Since we just updated with this PR, we can clean this up now:

- https://github.com/slackhq/nebula/pull/1161
2024-06-24 14:47:14 -04:00
dependabot[bot]
506ba5ab5b Bump github.com/miekg/dns from 1.1.59 to 1.1.61 (#1168)
Bumps [github.com/miekg/dns](https://github.com/miekg/dns) from 1.1.59 to 1.1.61.
- [Changelog](https://github.com/miekg/dns/blob/master/Makefile.release)
- [Commits](https://github.com/miekg/dns/compare/v1.1.59...v1.1.61)

---
updated-dependencies:
- dependency-name: github.com/miekg/dns
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-06-24 14:46:27 -04:00
dependabot[bot]
d372df56ab Bump google.golang.org/protobuf in the protobuf-dependencies group (#1167)
Bumps the protobuf-dependencies group with 1 update: google.golang.org/protobuf.


Updates `google.golang.org/protobuf` from 1.34.1 to 1.34.2

---
updated-dependencies:
- dependency-name: google.golang.org/protobuf
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: protobuf-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-06-24 14:45:52 -04:00
dependabot[bot]
40cfd00e87 Bump the golang-x-dependencies group with 4 updates (#1161)
Bumps the golang-x-dependencies group with 4 updates: [golang.org/x/crypto](https://github.com/golang/crypto), [golang.org/x/net](https://github.com/golang/net), [golang.org/x/sys](https://github.com/golang/sys) and [golang.org/x/term](https://github.com/golang/term).


Updates `golang.org/x/crypto` from 0.23.0 to 0.24.0
- [Commits](https://github.com/golang/crypto/compare/v0.23.0...v0.24.0)

Updates `golang.org/x/net` from 0.25.0 to 0.26.0
- [Commits](https://github.com/golang/net/compare/v0.25.0...v0.26.0)

Updates `golang.org/x/sys` from 0.20.0 to 0.21.0
- [Commits](https://github.com/golang/sys/compare/v0.20.0...v0.21.0)

Updates `golang.org/x/term` from 0.20.0 to 0.21.0
- [Commits](https://github.com/golang/term/compare/v0.20.0...v0.21.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/net
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/sys
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/term
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-06-10 16:08:43 -04:00
Wade Simmons
b14bad586a v1.9.3 (#1160)
Update CHANGELOG for Nebula v1.9.3
2024-06-06 13:17:07 -04:00
Wade Simmons
4c066d8c32 initialize messageCounter to 2 instead of verifying later (#1156)
Clean up the messageCounter checks added in #1154. Instead of checking that
messageCounter is still at 2, just initialize it to 2 and only increment for
non-handshake messages. Handshake packets will always be packets 1 and 2.
2024-06-06 13:03:07 -04:00
Wade Simmons
249ae41fec v1.9.2 (#1155)
Update CHANGELOG for Nebula v1.9.2
2024-06-03 15:50:02 -04:00
Wade Simmons
d9cae9e062 ensure messageCounter is set before handshake is complete (#1154)
Ensure we set messageCounter to 2 before the handshake is marked as
complete.
2024-06-03 15:40:51 -04:00
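
The reasoning: handshake packets always consume counters 1 and 2, so starting the counter at 2 and incrementing it only for data packets removes the later verification step. A tiny illustrative sketch; the field and method names are hypothetical, not Nebula's actual connection state:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

type connState struct {
	messageCounter atomic.Uint64
}

func newConnState() *connState {
	cs := &connState{}
	// Handshake packets are always counters 1 and 2, so start the data
	// counter there instead of verifying it after the fact.
	cs.messageCounter.Store(2)
	return cs
}

// nextDataCounter advances the counter for non-handshake messages only.
func (cs *connState) nextDataCounter() uint64 {
	return cs.messageCounter.Add(1)
}

func main() {
	cs := newConnState()
	fmt.Println(cs.nextDataCounter()) // 3: the first data packet after the handshake
}
```
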
Wade Simmons
a92056a7db v1.9.1 (#1152)
Update CHANGELOG for Nebula v1.9.1
2024-05-29 14:06:46 -04:00
Wade Simmons
4eb1da0958 remove deadlock in GetOrHandshake (#1151)
We had a rare deadlock in GetOrHandshake because we kept the hostmap
lock while calling StartHandshake. StartHandshake can block
while sending to the lighthouse query worker channel, and that worker
needs to be able to grab the hostmap lock to do its work. Other calls
to StartHandshake don't hold the hostmap lock, so we should be able to
drop it here.

This lock was originally added with: https://github.com/slackhq/nebula/pull/954
2024-05-29 12:52:52 -04:00
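A hedged sketch of the locking pattern that fix describes: look up under the lock, release it, and only then do the potentially blocking channel send, so the worker draining the channel can still take the lock. The hostMap, hostInfo, and queryChan names here are illustrative, not Nebula's real types:

    package main

    import (
        "net/netip"
        "sync"
    )

    type hostInfo struct{ vpnAddr netip.Addr }

    type hostMap struct {
        sync.RWMutex
        hosts map[netip.Addr]*hostInfo
    }

    // getOrHandshake looks up a host and, if missing, queues a handshake query.
    // The important detail from the fix: the lock is dropped *before* sending on
    // queryChan, because that send can block and the worker draining the channel
    // also needs the hostmap lock.
    func getOrHandshake(hm *hostMap, queryChan chan netip.Addr, vpnAddr netip.Addr) *hostInfo {
        hm.RLock()
        hi, ok := hm.hosts[vpnAddr]
        hm.RUnlock()
        if ok {
            return hi
        }
        // No lock held here, so this send cannot deadlock the worker.
        queryChan <- vpnAddr
        return nil
    }

    func main() {
        hm := &hostMap{hosts: map[netip.Addr]*hostInfo{}}
        queryChan := make(chan netip.Addr, 1)
        getOrHandshake(hm, queryChan, netip.MustParseAddr("10.0.0.1"))
    }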
Wade Simmons
50b24c102e v1.9.0 (#1137)
Update CHANGELOG for Nebula v1.9.0

Co-authored-by: John Maguire <john@defined.net>
2024-05-08 10:31:24 -04:00
dependabot[bot]
c0130f8161 Bump the golang-x-dependencies group with 4 updates (#1138)
Bumps the golang-x-dependencies group with 4 updates: [golang.org/x/crypto](https://github.com/golang/crypto), [golang.org/x/net](https://github.com/golang/net), [golang.org/x/sys](https://github.com/golang/sys) and [golang.org/x/term](https://github.com/golang/term).


Updates `golang.org/x/crypto` from 0.22.0 to 0.23.0
- [Commits](https://github.com/golang/crypto/compare/v0.22.0...v0.23.0)

Updates `golang.org/x/net` from 0.24.0 to 0.25.0
- [Commits](https://github.com/golang/net/compare/v0.24.0...v0.25.0)

Updates `golang.org/x/sys` from 0.19.0 to 0.20.0
- [Commits](https://github.com/golang/sys/compare/v0.19.0...v0.20.0)

Updates `golang.org/x/term` from 0.19.0 to 0.20.0
- [Commits](https://github.com/golang/term/compare/v0.19.0...v0.20.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/net
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/sys
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/term
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-05-06 16:17:50 -04:00
dependabot[bot]
f19a28645e Bump google.golang.org/protobuf in the protobuf-dependencies group (#1139)
Bumps the protobuf-dependencies group with 1 update: google.golang.org/protobuf.


Updates `google.golang.org/protobuf` from 1.34.0 to 1.34.1

---
updated-dependencies:
- dependency-name: google.golang.org/protobuf
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: protobuf-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-05-06 16:17:05 -04:00
Jack Doan
fd1906b16f minor text fixes (#1135) 2024-05-03 20:43:40 -05:00
Wade Simmons
d6e4b88bb5 release: use download-action v4 in docker section (#1134)
We missed this upgrade in #1047
2024-05-03 11:35:55 -04:00
dependabot[bot]
18f69af455 Bump actions/download-artifact from 3 to 4 (#1047)
Bumps [actions/download-artifact](https://github.com/actions/download-artifact) from 3 to 4.
- [Release notes](https://github.com/actions/download-artifact/releases)
- [Commits](https://github.com/actions/download-artifact/compare/v3...v4)

---
updated-dependencies:
- dependency-name: actions/download-artifact
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-05-02 11:25:22 -04:00
dependabot[bot]
aa18d7fa4f Bump actions/upload-artifact from 3 to 4 (#1046)
* Bump actions/upload-artifact from 3 to 4

Bumps [actions/upload-artifact](https://github.com/actions/upload-artifact) from 3 to 4.
- [Release notes](https://github.com/actions/upload-artifact/releases)
- [Commits](https://github.com/actions/upload-artifact/compare/v3...v4)

---
updated-dependencies:
- dependency-name: actions/upload-artifact
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

* try to fix upload conflict

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Wade Simmons <wsimmons@slack-corp.com>
2024-05-02 11:24:58 -04:00
John Maguire
b5c3486796 Push Docker images as part of the release workflow (#1037) 2024-05-02 09:37:11 -04:00
dependabot[bot]
f39bfbb7fa Bump google.golang.org/protobuf in the protobuf-dependencies group (#1133)
Bumps the protobuf-dependencies group with 1 update: google.golang.org/protobuf.


Updates `google.golang.org/protobuf` from 1.33.0 to 1.34.0

---
updated-dependencies:
- dependency-name: google.golang.org/protobuf
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: protobuf-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-30 11:45:05 -04:00
Wade Simmons
4f4941e187 Add Vagrant based smoke tests (#1067)
* WIP smoke test freebsd

* fix bitrot

We now test that the firewall blocks inbound on host3 from host2

* WIP ipv6 test

* cleanup

* rename to make clear

* fix filename

* restore

* no sudo docker

* WIP

* WIP

* WIP

* WIP

* extra smoke tests

* WIP

* WIP

* port over improvements made in smoke.sh

* more tests

* use generic/freebsd14

* cleanup from test

* smoke test openbsd-amd64

* add netbsd-amd64

* try to fix vagrant
2024-04-30 11:02:16 -04:00
fyl
5f17db5dfa Add support for LoongArch64 (#1003) 2024-04-30 09:55:44 -05:00
John Maguire
f31bab5f1a Add support for SSH CAs (#1098)
- Accept certs signed by trusted CAs
- Username must match the cert principal if set
- Any username can be used if cert principal is empty
- Don't allow removed pubkeys/CAs to be used after reload
2024-04-30 10:50:17 -04:00
kindknow
9cd944d320 chore: fix function name in comment (#1111) 2024-04-30 09:43:38 -05:00
John Maguire
f7db0eb5cc Remove Vagrant example (#1129) 2024-04-30 09:40:24 -05:00
dependabot[bot]
7e7d5e00ca Bump github.com/prometheus/client_golang from 1.18.0 to 1.19.0 (#1086)
Bumps [github.com/prometheus/client_golang](https://github.com/prometheus/client_golang) from 1.18.0 to 1.19.0.
- [Release notes](https://github.com/prometheus/client_golang/releases)
- [Changelog](https://github.com/prometheus/client_golang/blob/main/CHANGELOG.md)
- [Commits](https://github.com/prometheus/client_golang/compare/v1.18.0...v1.19.0)

---
updated-dependencies:
- dependency-name: github.com/prometheus/client_golang
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-30 10:30:18 -04:00
Wade Simmons
24f336ec56 switch off deprecated elliptic.Marshal (#1108)
elliptic.Marshal was deprecated; we can replace it with the ECDH methods
even though we aren't using ECDH here. See:

- f03fb147d7

We are still using elliptic.Unmarshal because this issue needs to be
resolved:

- https://github.com/golang/go/issues/63963
2024-04-30 10:02:49 -04:00
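A small illustrative example (not Nebula's cert code) of the replacement described above: converting an ECDSA public key through crypto/ecdh yields the same uncompressed point encoding that elliptic.Marshal used to produce:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "fmt"
    )

    func main() {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }

        // Old (deprecated): elliptic.Marshal(elliptic.P256(), key.X, key.Y)
        // New: go through crypto/ecdh, which produces the same uncompressed point bytes.
        ecdhPub, err := key.PublicKey.ECDH()
        if err != nil {
            panic(err)
        }
        pubBytes := ecdhPub.Bytes()
        fmt.Printf("uncompressed point: %d bytes, prefix 0x%02x\n", len(pubBytes), pubBytes[0])
    }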
John Maguire
d7f52dec41 Fix errant capitalisation in DNS TXT response (#1127)
Co-authored-by: Oliver Marriott <hello@omarriott.com>
2024-04-30 09:58:56 -04:00
NODA Kai
e54f9dd206 dns_server.go: parseQuery: set NXDOMAIN if there's no Answer to return (#845) 2024-04-30 09:56:57 -04:00
Andrew Kraut
df78158cfa Create service script for open-rc (#711) 2024-04-30 09:53:00 -04:00
Robin Candau
8b55caa15e Remove Arch nebula.service file (#1132) 2024-04-30 07:45:23 -04:00
Jon Rafkind
7ed9f2a688 add ssh command to print device info (#763) 2024-04-29 16:09:34 -05:00
Wade Simmons
3aca576b07 update to go1.22 (#981)
* update to go1.21

Since the first minor version update has already been released, we can
probably feel comfortable updating to go1.21. This version now enforces
that the go version on the system is compatible with the version
specified in go.mod, so we can remove the old logic around checking the
minimum version in the Makefile.

- https://go.dev/doc/go1.21#tools

> To improve forwards compatibility, Go 1.21 now reads the go line in a go.work or go.mod file as a strict minimum requirement: go 1.21.0 means that the workspace or module cannot be used with Go 1.20 or with Go 1.21rc1. This allows projects that depend on fixes made in later versions of Go to ensure that they are not used with earlier versions. It also gives better error reporting for projects that make use of new Go features: when the problem is that a newer Go version is needed, that problem is reported clearly, instead of attempting to build the code and printing errors about unresolved imports or syntax errors.

* update to go1.22

* bump gvisor

* fix merge conflicts

* use latest gvisor `go` branch

Need to use the latest commit on the `go` branch, see:

- https://github.com/google/gvisor?tab=readme-ov-file#using-go-get

* mod tidy

* more fixes

* give smoketest more time

Is this why it is failing?

* also a little more sleep here

---------

Co-authored-by: Jack Doan <me@jackdoan.com>
2024-04-29 16:44:42 -04:00
Nate Brown
a99618e95c Don't log invalid certificates (#1116) 2024-04-29 15:21:00 -05:00
Caleb Jasik
8e94eb974e Add suggested filenames for collected profiles in the ssh commands (#1109) 2024-04-29 15:20:46 -05:00
John Maguire
41e2e1de02 Remove Fedora nebula.service file (#1128) 2024-04-29 15:30:22 -04:00
dependabot[bot]
d95fb4a314 Bump the golang-x-dependencies group with 5 updates (#1110)
Bumps the golang-x-dependencies group with 5 updates:

| Package | From | To |
| --- | --- | --- |
| [golang.org/x/crypto](https://github.com/golang/crypto) | `0.21.0` | `0.22.0` |
| [golang.org/x/net](https://github.com/golang/net) | `0.22.0` | `0.24.0` |
| [golang.org/x/sync](https://github.com/golang/sync) | `0.6.0` | `0.7.0` |
| [golang.org/x/sys](https://github.com/golang/sys) | `0.18.0` | `0.19.0` |
| [golang.org/x/term](https://github.com/golang/term) | `0.18.0` | `0.19.0` |


Updates `golang.org/x/crypto` from 0.21.0 to 0.22.0
- [Commits](https://github.com/golang/crypto/compare/v0.21.0...v0.22.0)

Updates `golang.org/x/net` from 0.22.0 to 0.24.0
- [Commits](https://github.com/golang/net/compare/v0.22.0...v0.24.0)

Updates `golang.org/x/sync` from 0.6.0 to 0.7.0
- [Commits](https://github.com/golang/sync/compare/v0.6.0...v0.7.0)

Updates `golang.org/x/sys` from 0.18.0 to 0.19.0
- [Commits](https://github.com/golang/sys/compare/v0.18.0...v0.19.0)

Updates `golang.org/x/term` from 0.18.0 to 0.19.0
- [Commits](https://github.com/golang/term/compare/v0.18.0...v0.19.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/net
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/sync
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/sys
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/term
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-23 13:50:53 -04:00
dependabot[bot]
cdcea00669 Bump github.com/miekg/dns from 1.1.58 to 1.1.59 (#1126)
Bumps [github.com/miekg/dns](https://github.com/miekg/dns) from 1.1.58 to 1.1.59.
- [Changelog](https://github.com/miekg/dns/blob/master/Makefile.release)
- [Commits](https://github.com/miekg/dns/compare/v1.1.58...v1.1.59)

---
updated-dependencies:
- dependency-name: github.com/miekg/dns
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-23 11:08:08 -04:00
dependabot[bot]
9bd92a7fc2 Bump golang.org/x/net from 0.22.0 to 0.23.0 (#1123)
Bumps [golang.org/x/net](https://github.com/golang/net) from 0.22.0 to 0.23.0.
- [Commits](https://github.com/golang/net/compare/v0.22.0...v0.23.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-23 11:06:15 -04:00
Nate Brown
a5a07cc760 Allow :: in lighthouse.dns.host config (#1115) 2024-04-11 21:44:36 -05:00
Nate Brown
c1711bc9c5 Remove tcp rtt tracking from the firewall (#1114) 2024-04-11 21:44:22 -05:00
Wade Simmons
7efa750aef avoid deadlock in lighthouse queryWorker (#1112)
* avoid deadlock in lighthouse queryWorker

If the lighthouse queryWorker tries to call StartHandshake on
a lighthouse vpnIp, we can deadlock on the handshake_manager lock. This
change drops the handshake_manager lock before we send on the lighthouse
queryChan (which could block), and also avoids sending to the channel if
this is a lighthouse IP itself.

* need to hold lock during cacheCb
2024-04-11 17:00:01 -04:00
Nate Brown
a390125935 Support reloading preferred_ranges (#1043) 2024-04-03 22:14:51 -05:00
Nate Brown
bbb15f8cb1 Unsafe route reload (#1083) 2024-03-28 15:17:28 -05:00
John Maguire
8b68a08723 Fix "any" firewall rules for unsafe_routes (#1099) 2024-03-28 15:17:12 -05:00
dependabot[bot]
f8fb9759e9 Bump the golang-x-dependencies group with 1 update (#1094)
Bumps the golang-x-dependencies group with 1 update: [golang.org/x/net](https://github.com/golang/net).


Updates `golang.org/x/net` from 0.21.0 to 0.22.0
- [Commits](https://github.com/golang/net/compare/v0.21.0...v0.22.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-03-22 12:58:13 -04:00
dependabot[bot]
1f1d660200 Bump google.golang.org/protobuf from 1.32.0 to 1.33.0 (#1092)
Bumps google.golang.org/protobuf from 1.32.0 to 1.33.0.

---
updated-dependencies:
- dependency-name: google.golang.org/protobuf
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-03-18 11:12:13 -04:00
dependabot[bot]
279265058f Bump github.com/stretchr/testify from 1.8.4 to 1.9.0 (#1087)
Bumps [github.com/stretchr/testify](https://github.com/stretchr/testify) from 1.8.4 to 1.9.0.
- [Release notes](https://github.com/stretchr/testify/releases)
- [Commits](https://github.com/stretchr/testify/compare/v1.8.4...v1.9.0)

---
updated-dependencies:
- dependency-name: github.com/stretchr/testify
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-03-18 11:06:18 -04:00
dependabot[bot]
2a778de07e Bump github.com/flynn/noise from 1.0.1 to 1.1.0 (#1072)
Bumps [github.com/flynn/noise](https://github.com/flynn/noise) from 1.0.1 to 1.1.0.
- [Commits](https://github.com/flynn/noise/compare/v1.0.1...v1.1.0)

---
updated-dependencies:
- dependency-name: github.com/flynn/noise
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-03-18 10:47:53 -04:00
dependabot[bot]
2affd371e3 Bump the golang-x-dependencies group with 4 updates (#1085)
Bumps the golang-x-dependencies group with 4 updates: [golang.org/x/crypto](https://github.com/golang/crypto), [golang.org/x/net](https://github.com/golang/net), [golang.org/x/sys](https://github.com/golang/sys) and [golang.org/x/term](https://github.com/golang/term).


Updates `golang.org/x/crypto` from 0.18.0 to 0.21.0
- [Commits](https://github.com/golang/crypto/compare/v0.18.0...v0.21.0)

Updates `golang.org/x/net` from 0.20.0 to 0.21.0
- [Commits](https://github.com/golang/net/compare/v0.20.0...v0.21.0)

Updates `golang.org/x/sys` from 0.16.0 to 0.18.0
- [Commits](https://github.com/golang/sys/compare/v0.16.0...v0.18.0)

Updates `golang.org/x/term` from 0.16.0 to 0.18.0
- [Commits](https://github.com/golang/term/compare/v0.16.0...v0.18.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/net
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/sys
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/term
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-03-18 10:43:17 -04:00
Nate Brown
cc8b3cc961 Add config option for local_cidr control 2024-02-15 11:46:45 -06:00
Nate Brown
f346cf4109 At the end 2024-02-05 10:23:10 -06:00
Nate Brown
8f44f22c37 In the middle 2024-02-05 10:23:10 -06:00
John Maguire
8822f1366c Add link to logs guide in bug report template (#1065) 2024-02-01 12:40:23 -05:00
brad-defined
e3f5a129c1 Return full error context from ContextualError.Error() (#1069) 2024-01-31 15:31:46 -05:00
mrx
0f0534d739 Fix UDP listener on IPv4-only Linux (#787)
On some systems, IPv6 is disabled (for example, the CIS benchmark recommends disabling it when not used), but currently all UDP connections use AF_INET6 sockets.
When we bind an AF_INET6 socket to an address like ::ffff:1.2.3.4 (IPv4 addresses are parsed by net.ParseIP this way), we can't send or receive IPv6 packets anyway, so this will not break any scenarios.

---------

Co-authored-by: Wade Simmons <wsimmons@slack-corp.com>
2024-01-30 15:08:14 -05:00
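An illustrative sketch of the idea behind this fix, assuming nothing about Nebula's actual UDP listener: net.ParseIP stores IPv4 addresses in the IPv4-mapped 16-byte form, and choosing a "udp4" socket for them avoids requiring IPv6 support on the host:

    package main

    import (
        "fmt"
        "net"
    )

    // listenFor picks an AF_INET ("udp4") socket when the configured address is
    // IPv4, instead of always opening an AF_INET6 socket bound to ::ffff:a.b.c.d.
    func listenFor(ip net.IP, port int) (*net.UDPConn, error) {
        network := "udp"
        if ip.To4() != nil {
            network = "udp4" // works even when IPv6 is disabled on the host
        }
        return net.ListenUDP(network, &net.UDPAddr{IP: ip, Port: port})
    }

    func main() {
        ip := net.ParseIP("1.2.3.4")
        // net.ParseIP keeps IPv4 addresses in 16-byte (IPv4-mapped) form.
        fmt.Println(len(ip), ip.String()) // 16 1.2.3.4

        conn, err := listenFor(net.ParseIP("127.0.0.1"), 0)
        if err != nil {
            panic(err)
        }
        defer conn.Close()
        fmt.Println("listening on", conn.LocalAddr())
    }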
dependabot[bot]
c5a403b7a8 Bump github.com/vishvananda/netlink (#1034)
Bumps [github.com/vishvananda/netlink](https://github.com/vishvananda/netlink) from 1.1.1-0.20211118161826-650dca95af54 to 1.2.1-beta.2.
- [Release notes](https://github.com/vishvananda/netlink/releases)
- [Commits](https://github.com/vishvananda/netlink/commits/v1.2.1-beta.2)

---
updated-dependencies:
- dependency-name: github.com/vishvananda/netlink
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-23 10:40:29 -05:00
dependabot[bot]
f23d328561 Bump the protobuf-dependencies group with 1 update (#1053)
Bumps the protobuf-dependencies group with 1 update: google.golang.org/protobuf.


Updates `google.golang.org/protobuf` from 1.31.0 to 1.32.0

---
updated-dependencies:
- dependency-name: google.golang.org/protobuf
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: protobuf-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-23 10:39:53 -05:00
dependabot[bot]
a977ee653d Bump github.com/miekg/dns from 1.1.57 to 1.1.58 (#1063)
Bumps [github.com/miekg/dns](https://github.com/miekg/dns) from 1.1.57 to 1.1.58.
- [Changelog](https://github.com/miekg/dns/blob/master/Makefile.release)
- [Commits](https://github.com/miekg/dns/compare/v1.1.57...v1.1.58)

---
updated-dependencies:
- dependency-name: github.com/miekg/dns
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-23 10:37:53 -05:00
Lingfeng Zhang
1f83d1758d Support inlined sshd host key (#1054) 2024-01-22 13:58:44 -05:00
dependabot[bot]
3210198276 Bump github.com/prometheus/client_golang from 1.17.0 to 1.18.0 (#1055)
Bumps [github.com/prometheus/client_golang](https://github.com/prometheus/client_golang) from 1.17.0 to 1.18.0.
- [Release notes](https://github.com/prometheus/client_golang/releases)
- [Changelog](https://github.com/prometheus/client_golang/blob/main/CHANGELOG.md)
- [Commits](https://github.com/prometheus/client_golang/compare/v1.17.0...v1.18.0)

---
updated-dependencies:
- dependency-name: github.com/prometheus/client_golang
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-12 10:26:39 -05:00
dependabot[bot]
0cef634635 Bump github.com/miekg/dns from 1.1.56 to 1.1.57 (#1022)
Bumps [github.com/miekg/dns](https://github.com/miekg/dns) from 1.1.56 to 1.1.57.
- [Changelog](https://github.com/miekg/dns/blob/master/Makefile.release)
- [Commits](https://github.com/miekg/dns/compare/v1.1.56...v1.1.57)

---
updated-dependencies:
- dependency-name: github.com/miekg/dns
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-12 09:57:38 -05:00
dependabot[bot]
637dc18bf8 Bump the golang-x-dependencies group with 3 updates (#1059)
Bumps the golang-x-dependencies group with 3 updates: [golang.org/x/crypto](https://github.com/golang/crypto), [golang.org/x/net](https://github.com/golang/net) and [golang.org/x/sync](https://github.com/golang/sync).


Updates `golang.org/x/crypto` from 0.17.0 to 0.18.0
- [Commits](https://github.com/golang/crypto/compare/v0.17.0...v0.18.0)

Updates `golang.org/x/net` from 0.19.0 to 0.20.0
- [Commits](https://github.com/golang/net/compare/v0.19.0...v0.20.0)

Updates `golang.org/x/sync` from 0.5.0 to 0.6.0
- [Commits](https://github.com/golang/sync/compare/v0.5.0...v0.6.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/net
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/sync
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-12 09:55:41 -05:00
Wade Simmons
ea36949d8a v1.8.2 (#1058)
Update CHANGELOG for Nebula v1.8.2
2024-01-08 15:40:04 -05:00
Wade Simmons
0564d0a2cf when listen.port is zero, fix multiple routines (#1057)
This used to work correctly when the multiple routines support was
first added in #382, but an important step, discovering the listen port
before opening the other listeners on the same socket, was lost in
PR #653.

This change should fix the regression and allow multiple routines to
work correctly when listen.port is set to `0`.

Thanks to @rawdigits for tracking down and discovering this regression.
2024-01-08 13:49:44 -05:00
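A minimal sketch of the port-discovery step that fix restores, using plain net.ListenUDP for illustration; Nebula's real listener also sets SO_REUSEPORT so the additional routines can bind the same discovered port:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // First listener: let the kernel pick a port (listen.port: 0).
        first, err := net.ListenUDP("udp", &net.UDPAddr{Port: 0})
        if err != nil {
            panic(err)
        }
        defer first.Close()

        // Discover the actual port before opening the other routines' sockets.
        port := first.LocalAddr().(*net.UDPAddr).Port
        fmt.Println("kernel assigned port:", port)

        // The remaining routines must reuse this discovered port; in Nebula this
        // is combined with SO_REUSEPORT so multiple sockets can share it.
        // second, err := net.ListenUDP("udp", &net.UDPAddr{Port: port})
    }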
nezu
b22ba6eb49 Update Arch Linux package link (#1024) 2023-12-27 10:38:24 -06:00
Wade Simmons
3a221812f6 test: build all non-main modules for mobile (#1036)
Ensure that we don't break the build for mobile by doing a `go build`
for all of the non-main modules in the repo. Should hopefully catch
issues like #1035 sooner.
2023-12-21 11:59:21 -05:00
dependabot[bot]
927ff4cc03 Bump github.com/flynn/noise from 1.0.0 to 1.0.1 (#1038)
Bumps [github.com/flynn/noise](https://github.com/flynn/noise) from 1.0.0 to 1.0.1.
- [Commits](https://github.com/flynn/noise/compare/v1.0.0...v1.0.1)

---
updated-dependencies:
- dependency-name: github.com/flynn/noise
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-21 11:57:53 -05:00
Wade Simmons
e5945a60aa v1.8.1 (#1049)
Update CHANGELOG for Nebula v1.8.1
2023-12-19 15:11:25 -05:00
Nate Brown
072edd56b3 Fix re-entrant GetOrHandshake issues (#1044) 2023-12-19 11:58:31 -06:00
dependabot[bot]
beb5f6bddc Bump golang.org/x/crypto from 0.16.0 to 0.17.0 (#1048)
Bumps [golang.org/x/crypto](https://github.com/golang/crypto) from 0.16.0 to 0.17.0.
- [Commits](https://github.com/golang/crypto/compare/v0.16.0...v0.17.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-19 10:57:09 -05:00
dependabot[bot]
8be9792059 Bump actions/setup-go from 4 to 5 (#1039)
Bumps [actions/setup-go](https://github.com/actions/setup-go) from 4 to 5.
- [Release notes](https://github.com/actions/setup-go/releases)
- [Commits](https://github.com/actions/setup-go/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/setup-go
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-13 22:45:09 -06:00
John Maguire
af2fc48378 Fix mobile builds (#1035) 2023-12-06 16:18:21 -05:00
Wade Simmons
1d2f95e718 v1.8.0 (#1017)
Update CHANGELOG for Nebula v1.8.0
2023-12-06 14:38:58 -05:00
Lars Lehtonen
3a8743d511 cmd/nebula-cert: fix clobbered error (#1032)
* cmd/nebula-cert: fix clobbered error

Signed-off-by: Lars Lehtonen <lars.lehtonen@gmail.com>

* apply suggestions from Nate

This makes it much clearer what is happening in the code

---------

Signed-off-by: Lars Lehtonen <lars.lehtonen@gmail.com>
Co-authored-by: Wade Simmons <wsimmons@slack-corp.com>
2023-12-06 13:20:49 -05:00
Dave Russell
0209402942 SIGHUP is only useful when config was loaded from a file (#1030)
Have (*config.C).CatchHUP() return early when there is no file
path available from which to reload.
This will allow wrapping services to manage their own signal
trapping (which is particularly important if they've used
config from a string).
2023-12-06 10:13:38 -05:00
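A hedged sketch of this behavior with illustrative names (not the real config.C API): register the SIGHUP handler only when a file path exists, otherwise return early and leave signal handling to the embedding service:

    package main

    import (
        "fmt"
        "os"
        "os/signal"
        "syscall"
    )

    // catchHUP mirrors the behavior described above: if config was loaded from a
    // string (no file path), return early so the wrapping service can trap SIGHUP
    // itself. The names and signature here are illustrative only.
    func catchHUP(configPath string, reload func()) {
        if configPath == "" {
            return // nothing to reload from; leave SIGHUP to the embedding service
        }
        ch := make(chan os.Signal, 1)
        signal.Notify(ch, syscall.SIGHUP)
        go func() {
            for range ch {
                fmt.Println("SIGHUP received, reloading", configPath)
                reload()
            }
        }()
    }

    func main() {
        catchHUP("", func() {})                       // no-op: config came from a string
        catchHUP("/etc/nebula/config.yml", func() {}) // registers the handler
    }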
dependabot[bot]
fb55f5b762 Bump the golang-x-dependencies group with 3 updates (#1028)
Bumps the golang-x-dependencies group with 3 updates: [golang.org/x/crypto](https://github.com/golang/crypto), [golang.org/x/net](https://github.com/golang/net) and [golang.org/x/sync](https://github.com/golang/sync).


Updates `golang.org/x/crypto` from 0.14.0 to 0.16.0
- [Commits](https://github.com/golang/crypto/compare/v0.14.0...v0.16.0)

Updates `golang.org/x/net` from 0.17.0 to 0.19.0
- [Commits](https://github.com/golang/net/compare/v0.17.0...v0.19.0)

Updates `golang.org/x/sync` from 0.3.0 to 0.5.0
- [Commits](https://github.com/golang/sync/compare/v0.3.0...v0.5.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/net
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/sync
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-04 11:12:52 -05:00
Ben Ritcey
01cddb8013 Added firewall.rules.hash metric (#1010)
* Added firewall.rules.hash metric

Added an FNV-1 hash of the firewall rules as a Prometheus value.

* Switch FNV hash to int64, include both hashes in log messages

* Use a uint32 for the FNV hash

Let go-metrics cast the uint32 to an int64, so it won't be lossy
when it eventually emits a float64 Prometheus metric.
2023-11-28 11:56:47 -05:00
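A stdlib-only sketch of the hash described above: an FNV-1 hash of the serialized rules, widened from uint32 to int64 so a float64-based metrics pipeline can carry it without loss. Registering it as a gauge is left to the project's metrics library, and the serialized rule string here is purely illustrative:

    package main

    import (
        "fmt"
        "hash/fnv"
    )

    // rulesHash computes an FNV-1 hash over a serialized rule set and widens it
    // to int64 so a float64-based metrics pipeline can represent it losslessly.
    func rulesHash(serializedRules []byte) int64 {
        h := fnv.New32() // FNV-1 (not FNV-1a)
        h.Write(serializedRules)
        return int64(h.Sum32())
    }

    func main() {
        rules := []byte("outbound: allow any; inbound: allow icmp") // illustrative only
        fmt.Println("firewall.rules.hash =", rulesHash(rules))
    }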
Tristan Rice
1083279a45 add gvisor based service library (#965)
* add service/ library
2023-11-21 11:50:18 -05:00
Wade Simmons
fe16ea566d firewall reject packets: cleanup error cases (#957) 2023-11-13 12:43:51 -06:00
Nate Brown
3356e03d85 Default pki.disconnect_invalid to true and make it reloadable (#859) 2023-11-13 12:39:38 -06:00
dependabot[bot]
f41db52560 Bump the golang-x-dependencies group with 1 update (#1006)
Bumps the golang-x-dependencies group with 1 update: [golang.org/x/sys](https://github.com/golang/sys).

- [Commits](https://github.com/golang/sys/compare/v0.13.0...v0.14.0)

---
updated-dependencies:
- dependency-name: golang.org/x/sys
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-13 07:58:45 -08:00
Nate Brown
5181cb0474 Use generics for CIDRTrees to avoid casting issues (#1004) 2023-11-02 17:05:08 -05:00
Nate Brown
a44e1b8b05 Clean up a hostinfo to reduce memory usage (#955) 2023-11-02 16:53:59 -05:00
guangwu
276978377a chore: remove refs to deprecated io/ioutil (#987)
Signed-off-by: guoguangwu <guoguangwu@magic-shield.com>
2023-10-31 10:35:13 -04:00
dependabot[bot]
777eb96aea Bump github.com/prometheus/client_golang from 1.16.0 to 1.17.0 (#984)
Bumps [github.com/prometheus/client_golang](https://github.com/prometheus/client_golang) from 1.16.0 to 1.17.0.
- [Release notes](https://github.com/prometheus/client_golang/releases)
- [Changelog](https://github.com/prometheus/client_golang/blob/main/CHANGELOG.md)
- [Commits](https://github.com/prometheus/client_golang/compare/v1.16.0...v1.17.0)

---
updated-dependencies:
- dependency-name: github.com/prometheus/client_golang
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-10-31 10:33:04 -04:00
Wade Simmons
0912ef14f4 github actions smoke-test: run with data race detector (#988)
Run the GitHub Actions smoke tests with the data race detector enabled, so
we can detect if a PR introduces a simple data race.
2023-10-31 10:32:39 -04:00
Lars Lehtonen
77a8ce1712 main: fix dropped error (#1002)
This isn't an actual issue because the current implementation of NewSSHServer never returns an error (https://github.com/slackhq/nebula/blob/v1.7.2/sshd/server.go#L56), but it's still good to fix so there are no surprises in the future.
2023-10-31 10:32:08 -04:00
John Maguire
87b628ba24 Fix truncated comment in config.yml (#999) 2023-10-27 08:39:34 -04:00
Nate Brown
50d6a1e8ca QueryServer needs to be done outside of the lock (#996) 2023-10-17 15:43:51 -05:00
dependabot[bot]
e78fe0b9ef Bump golang.org/x/net from 0.15.0 to 0.17.0 (#990)
Bumps [golang.org/x/net](https://github.com/golang/net) from 0.15.0 to 0.17.0.
- [Commits](https://github.com/golang/net/compare/v0.15.0...v0.17.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-10-16 13:28:59 -04:00
Nate Brown
5fccbb8676 Retry wintun creation (#985) 2023-10-16 10:06:43 -05:00
dependabot[bot]
c289c7a7ca Bump github.com/miekg/dns from 1.1.55 to 1.1.56 (#979)
Bumps [github.com/miekg/dns](https://github.com/miekg/dns) from 1.1.55 to 1.1.56.
- [Changelog](https://github.com/miekg/dns/blob/master/Makefile.release)
- [Commits](https://github.com/miekg/dns/compare/v1.1.55...v1.1.56)

---
updated-dependencies:
- dependency-name: github.com/miekg/dns
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-09-22 09:48:26 -04:00
dependabot[bot]
e3fbfbfd4d Bump golang.org/x/net from 0.14.0 to 0.15.0 (#977)
Bumps [golang.org/x/net](https://github.com/golang/net) from 0.14.0 to 0.15.0.
- [Commits](https://github.com/golang/net/compare/v0.14.0...v0.15.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-09-22 09:47:45 -04:00
dependabot[bot]
282ca4368e Bump golang.org/x/crypto from 0.12.0 to 0.13.0 (#976)
Bumps [golang.org/x/crypto](https://github.com/golang/crypto) from 0.12.0 to 0.13.0.
- [Commits](https://github.com/golang/crypto/compare/v0.12.0...v0.13.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-09-22 09:47:00 -04:00
Wade Simmons
280fa026ea smoke-test: don't assume docker needs sudo (#958)
Let the host deal with this detail if necessary
2023-09-07 13:57:41 -04:00
Lars Lehtonen
dbdb48f182 cert: fix dropped errors (#961) 2023-09-07 13:54:01 -04:00
Nate Brown
f7e392995a Fix rebind to not put the socket in blocking mode (#972) 2023-09-07 11:56:09 -05:00
dependabot[bot]
d271df8da8 Bump golang.org/x/term from 0.11.0 to 0.12.0 (#967)
Bumps [golang.org/x/term](https://github.com/golang/term) from 0.11.0 to 0.12.0.
- [Commits](https://github.com/golang/term/compare/v0.11.0...v0.12.0)

---
updated-dependencies:
- dependency-name: golang.org/x/term
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-09-05 12:47:55 -04:00
dependabot[bot]
eea5e6a5df Bump actions/checkout from 3 to 4 (#969)
Bumps [actions/checkout](https://github.com/actions/checkout) from 3 to 4.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v3...v4)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-09-05 11:43:56 -04:00
dependabot[bot]
790268a176 Bump golang.org/x/sys from 0.11.0 to 0.12.0 (#968)
Bumps [golang.org/x/sys](https://github.com/golang/sys) from 0.11.0 to 0.12.0.
- [Commits](https://github.com/golang/sys/compare/v0.11.0...v0.12.0)

---
updated-dependencies:
- dependency-name: golang.org/x/sys
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-09-05 11:42:08 -04:00
brad-defined
06b480e177 Fix relay migration (#964)
* Fix for relay migration on rehandshaking issue. On rehandshake, the relay tunnel doesn't migrate to the new hostinfo object correctly, due to an incorrect Nebula IP sent in the CreateRelayRequest message.
* Add a test for this case

---------

Co-authored-by: Nate Brown <nbrown.us@gmail.com>
2023-09-05 09:29:27 -04:00
Nate Brown
076ebc6c6e Simplify getting a hostinfo or starting a handshake with one (#954) 2023-08-21 18:51:45 -05:00
Nate Brown
7edcf620c0 We only need the certificate in ConnectionState (#953) 2023-08-21 14:11:06 -05:00
Nate Brown
5a131b2975 Combine ca, cert, and key handling (#952) 2023-08-14 21:32:40 -05:00
Nate Brown
223cc6e660 Limit how often a busy tunnel can requery the lighthouse (#940)
Co-authored-by: Wade Simmons <wadey@slack-corp.com>
2023-08-08 13:26:41 -05:00
Wade Simmons
5671c6607c dependabot: group together common deps (#950)
Group together deps that are often updated together.

- https://docs.github.com/en/code-security/dependabot/dependabot-version-updates/configuration-options-for-the-dependabot.yml-file#groups
2023-08-08 13:15:42 -04:00
dependabot[bot]
7ecafbe61d Bump golang.org/x/net from 0.13.0 to 0.14.0 (#947)
Bumps [golang.org/x/net](https://github.com/golang/net) from 0.13.0 to 0.14.0.
- [Commits](https://github.com/golang/net/compare/v0.13.0...v0.14.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-08 10:04:46 -05:00
dependabot[bot]
546eb3bfbc Bump golang.org/x/crypto from 0.11.0 to 0.12.0 (#949)
Bumps [golang.org/x/crypto](https://github.com/golang/crypto) from 0.11.0 to 0.12.0.
- [Commits](https://github.com/golang/crypto/compare/v0.11.0...v0.12.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-07 21:28:06 -05:00
dependabot[bot]
7364d99e34 Bump golang.org/x/term from 0.10.0 to 0.11.0 (#946)
Bumps [golang.org/x/term](https://github.com/golang/term) from 0.10.0 to 0.11.0.
- [Commits](https://github.com/golang/term/compare/v0.10.0...v0.11.0)

---
updated-dependencies:
- dependency-name: golang.org/x/term
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-07 21:07:30 -05:00
dependabot[bot]
83b6dc7b16 Bump golang.org/x/net from 0.12.0 to 0.13.0 (#943)
Bumps [golang.org/x/net](https://github.com/golang/net) from 0.12.0 to 0.13.0.
- [Commits](https://github.com/golang/net/compare/v0.12.0...v0.13.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-02 14:28:32 -04:00
Wade Simmons
3d0da7c859 update mergo to 1.0.0 (#941)
The mergo package has moved to a vanity URL. This causes fun issues with
dependabot. Update to the new release:

- https://github.com/darccio/mergo/releases/tag/v1.0.0
- https://github.com/darccio/mergo/compare/v0.3.15...v1.0.0
2023-08-02 14:00:20 -04:00
Caleb Jasik
ed00f5d530 Remove unused config code (last edited 4yrs ago) (#938) 2023-07-31 15:59:20 -05:00
dependabot[bot]
38e56a4858 Bump golang.org/x/net from 0.9.0 to 0.12.0 (#931) 2023-07-27 15:43:16 -05:00
dependabot[bot]
fce93ccb54 Bump google.golang.org/protobuf from 1.30.0 to 1.31.0 (#930) 2023-07-27 15:42:33 -05:00
dependabot[bot]
0d715effbc Bump Apple-Actions/import-codesign-certs from 1 to 2 (#923) 2023-07-27 15:31:36 -05:00
dependabot[bot]
0c003b64f1 Bump golang.org/x/term from 0.8.0 to 0.10.0 (#928) 2023-07-27 14:38:36 -05:00
Nate Brown
14d0106716 Send the lh update worker into its own routine instead of taking over the reload routine (#935) 2023-07-27 14:38:10 -05:00
dependabot[bot]
959b015b3b Bump github.com/sirupsen/logrus from 1.9.0 to 1.9.3 (#933) 2023-07-27 14:36:36 -05:00
Nate Brown
0bffa76b5e Build for openbsd (#812) 2023-07-27 14:27:35 -05:00
c0repwn3r
03e70210a5 Add support for NetBSD (#916) 2023-07-27 13:44:47 -05:00
Nate Brown
9c6592b159 Guard e2e udp and tun channels when closed (#934) 2023-07-26 12:52:14 -05:00
dependabot[bot]
e5af94e27a Bump github.com/prometheus/client_golang from 1.15.1 to 1.16.0 (#927)
Bumps [github.com/prometheus/client_golang](https://github.com/prometheus/client_golang) from 1.15.1 to 1.16.0.
- [Release notes](https://github.com/prometheus/client_golang/releases)
- [Changelog](https://github.com/prometheus/client_golang/blob/main/CHANGELOG.md)
- [Commits](https://github.com/prometheus/client_golang/compare/v1.15.1...v1.16.0)

---
updated-dependencies:
- dependency-name: github.com/prometheus/client_golang
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-24 13:56:09 -04:00
dependabot[bot]
96f51f78ea Bump golang.org/x/sys from 0.8.0 to 0.10.0 (#926)
Bumps [golang.org/x/sys](https://github.com/golang/sys) from 0.8.0 to 0.10.0.
- [Commits](https://github.com/golang/sys/compare/v0.8.0...v0.10.0)

---
updated-dependencies:
- dependency-name: golang.org/x/sys
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-24 13:53:39 -04:00
Nate Brown
a10baeee92 Pull hostmap and pending hostmap apart, remove unused functions (#843) 2023-07-24 12:37:52 -05:00
dependabot[bot]
52c9e360e7 Bump github.com/miekg/dns from 1.1.54 to 1.1.55 (#925)
Bumps [github.com/miekg/dns](https://github.com/miekg/dns) from 1.1.54 to 1.1.55.
- [Changelog](https://github.com/miekg/dns/blob/master/Makefile.release)
- [Commits](https://github.com/miekg/dns/compare/v1.1.54...v1.1.55)

---
updated-dependencies:
- dependency-name: github.com/miekg/dns
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-24 12:52:29 -04:00
dependabot[bot]
8caaff7109 Bump github.com/stretchr/testify from 1.8.2 to 1.8.4 (#924)
Bumps [github.com/stretchr/testify](https://github.com/stretchr/testify) from 1.8.2 to 1.8.4.
- [Release notes](https://github.com/stretchr/testify/releases)
- [Commits](https://github.com/stretchr/testify/compare/v1.8.2...v1.8.4)

---
updated-dependencies:
- dependency-name: github.com/stretchr/testify
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-24 12:51:31 -04:00
Nate Brown
1e3c155896 Attempt to notify systemd of service readiness on linux (#929) 2023-07-24 11:30:18 -05:00
Wade Simmons
f5db03c834 add dependabot config (#922)
This should give us PRs weekly with dependency updates, and also let us
manually check for updates when needed.

- https://docs.github.com/en/code-security/dependabot/dependabot-version-updates/configuration-options-for-the-dependabot.yml-file
2023-07-21 17:21:58 -04:00
Nate Brown
c5ce945852 Update README to include a link to go install docs (#919) 2023-07-20 21:30:38 -05:00
John Maguire
7e380bde7e Document new DNS config options (#879) 2023-07-10 15:19:05 -04:00
Nate Brown
a3e59a38ef Use registered io on Windows when possible (#905) 2023-07-10 12:43:48 -05:00
John Maguire
8ba5d64dbc Add support for naming FreeBSD tun devices (#903) 2023-06-22 12:13:31 -04:00
Nate Brown
3bbf5f4e67 Use an interface for udp conns (#901) 2023-06-14 10:48:52 -05:00
Wade Simmons
928731acfe fix up the release workflow (#891)
actions/create-release is deprecated, so just switch to using the `gh` cli.
This is actually much easier anyway!
2023-06-14 11:45:01 -04:00
Nate Brown
57eb80e9fb v1.7.2 (#887)
Update CHANGELOG for Nebula v1.7.2
2023-06-01 11:05:07 -04:00
brad-defined
96f4dcaab8 Fix reconfig freeze attempting to send to an unbuffered, unread channel (#886)
* Fixes a reconfig freeze where the reconfig attempts to send to an unbuffered channel with no readers.
Only create the stop channel when a DNS goroutine is created, and only send when the channel exists.
Buffer it to size 1 so that the stop message can be sent immediately even if the goroutine is busy doing DNS lookups.
2023-05-31 16:05:46 -04:00
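A minimal sketch of the channel discipline in this fix, with hypothetical names: the stop channel is created only when the goroutine starts, buffered to size 1, and only sent to when it exists:

    package main

    import (
        "fmt"
        "time"
    )

    type dnsServer struct {
        stop chan struct{} // nil until the goroutine is actually started
    }

    func (d *dnsServer) start() {
        // Buffer of 1: the stop message can be queued even while the goroutine
        // is busy doing a lookup, so the reconfig path never blocks.
        d.stop = make(chan struct{}, 1)
        go func() {
            for {
                select {
                case <-d.stop:
                    fmt.Println("dns goroutine stopped")
                    return
                default:
                    time.Sleep(10 * time.Millisecond) // stand-in for serving lookups
                }
            }
        }()
    }

    func (d *dnsServer) reconfigure() {
        // Only send if a goroutine was ever started (the channel exists).
        if d.stop != nil {
            d.stop <- struct{}{}
        }
    }

    func main() {
        d := &dnsServer{}
        d.reconfigure() // safe: no goroutine yet, nothing to stop
        d.start()
        d.reconfigure()
        time.Sleep(50 * time.Millisecond)
    }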
Wade Simmons
6d8c5f437c GitHub actions update setup-go (#881)
This does caching for us, so we can remove our manual caching of modules
2023-05-23 13:24:33 -04:00
John Maguire
165b671e70 v1.7.1 (#878)
Update CHANGELOG for Nebula v1.7.1
2023-05-18 15:39:24 -04:00
brad-defined
6be0bad68a Fix static_host_map DNS lookup Linux issue - put v4 addr into v6 slice(#877) 2023-05-18 14:13:32 -04:00
Wade Simmons
7ae3cd25f8 v1.7.0 (#870)
Update CHANGELOG for Nebula v1.7.0
2023-05-17 11:02:53 -04:00
Wade Simmons
9a7ed57a3f Cache cert verification methods (#871)
* cache cert verification

CheckSignature and Verify are expensive methods, and certificates are
static. Cache the results.

* use atomics

* make sure public key bytes match

* add VerifyWithCache and ResetCache

* cleanup

* use VerifyWithCache

* doc
2023-05-17 10:14:26 -04:00
Wade Simmons
eb9f22a8fa fix mismerge of P256 and encrypted private keys (#869)
The private key length is checked in a switch statement below these
lines; these lines should have been removed.
2023-05-09 14:05:55 -04:00
Nate Brown
54a8499c7b Fix go vet (#868) 2023-05-09 11:01:30 -05:00
Wade Simmons
419aaf2e36 issue templates: remove Report Security Vulnerability (#867)
This is redundant as Github automatically adds a section for this near the top.
2023-05-09 11:37:48 -04:00
Ilya Lukyanov
1701087035 Add destination CIDR checking (#507) 2023-05-09 10:37:23 -05:00
Nate Brown
a9cb2e06f4 Add ability to respect the system route table for unsafe route on linux (#839) 2023-05-09 10:36:55 -05:00
Wade Simmons
115b4b70b1 add SECURITY.md (#864)
* add SECURITY.md

Fixes: #699

* add Security mention to New issue template

* cleanup
2023-05-09 11:25:21 -04:00
Wade Simmons
0707caedb4 document P256 and BoringCrypto (#865)
* document P256 and BoringCrypto

Some basic descriptions of P256 and BoringCrypto were added to the bottom of
README.md so that their purpose is not a mystery.

* typo
2023-05-09 11:24:52 -04:00
brad-defined
bd9cc01d62 Dns static lookerupper (#796)
* Support lighthouse DNS names, and regularly resolve the name in a background goroutine to discover DNS updates.
2023-05-09 11:22:08 -04:00
Nate Brown
d1f786419c Try rehandshaking a main hostinfo after releasing hostmap locks (#863) 2023-05-08 14:43:03 -05:00
Wade Simmons
31ed9269d7 add test for GOEXPERIMENT=boringcrypto (#861)
* add test for GOEXPERIMENT=boringcrypto

* fix NebulaCertificate.Sign

Set the PublicKey field in a more compatible way for the tests. The
current method grabs the public key from the certificate, but the
correct thing to do is to derive it from the private key. Either way,
it doesn't really matter, as I don't think the Sign method actually even
uses the PublicKey field.

* assert boring

* cleanup tests
2023-05-08 13:27:01 -04:00
Nate Brown
48eb63899f Have lighthouses ack updates to reduce test packet traffic (#851) 2023-05-05 14:44:03 -05:00
Nate Brown
b26c13336f Fix test on master (#860) 2023-05-04 20:11:33 -05:00
Wade Simmons
e0185c4b01 Support NIST curve P256 (#769)
* Support NIST curve P256

This change adds support for NIST curve P256. When you use `nebula-cert ca`
or `nebula-cert keygen`, you can specify `-curve P256` to enable it. The
curve to use is based on the curve defined in your CA certificate.

Internally, we use ECDSA P256 to sign certificates, and ECDH P256 to do
Noise handshakes. P256 is not supported natively in Noise Protocol, so
we define `DHP256` in the `noiseutil` package to implement support for
it.

You cannot have a mixed network of Curve25519 and P256 certificates,
since the Noise protocol will only attempt to parse using the Curve
defined in the host's certificate.

* verify the curves match in VerifyPrivateKey

This would have failed anyway once we tried to actually use the bytes
in the private key, but it's better to detect the issue up front with
a better error message.

* add cert.Curve argument to Sign method

* fix mismerge

* use crypto/ecdh

This is the preferred method for doing ECDH functions now, and also has
a boringcrypto specific codepath.

* remove other ecdh uses of crypto/elliptic

use crypto/ecdh instead
2023-05-04 17:50:23 -04:00
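For illustration, the stdlib crypto/ecdh P256 exchange that the commit says is now preferred; this is just the generic API, not Nebula's noiseutil.DHP256 wrapper:

    package main

    import (
        "crypto/ecdh"
        "crypto/rand"
        "fmt"
    )

    func main() {
        curve := ecdh.P256()

        alice, err := curve.GenerateKey(rand.Reader)
        if err != nil {
            panic(err)
        }
        bob, err := curve.GenerateKey(rand.Reader)
        if err != nil {
            panic(err)
        }

        // Each side combines its private key with the peer's public key.
        s1, err := alice.ECDH(bob.PublicKey())
        if err != nil {
            panic(err)
        }
        s2, err := bob.ECDH(alice.PublicKey())
        if err != nil {
            panic(err)
        }
        fmt.Println("shared secrets match:", string(s1) == string(s2))
    }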
Nate Brown
702e1c59bd Always disconnect block listed hosts (#858) 2023-05-04 16:09:42 -05:00
Nate Brown
5fe8f45d05 Clear lighthouse cache for a vpn ip on a dead connection when its the final hostinfo (#857) 2023-05-04 15:42:12 -05:00
Nate Brown
03e4a7f988 Rehandshaking (#838)
Co-authored-by: Brad Higgins <brad@defined.net>
Co-authored-by: Wade Simmons <wadey@slack-corp.com>
2023-05-04 15:16:37 -05:00
Wade Simmons
0b67b19771 add boringcrypto Makefile targets (#856)
This adds a few build targets to compile with `GOEXPERIMENT=boringcrypto`:

- `bin-boringcrypto`
- `release-boringcrypto`

It also adds a field to the initial startup log indicating if
boringcrypto is enabled in the binary.
2023-05-04 15:42:45 -04:00
Wade Simmons
a0d3b93ae5 update dependencies: 2023-05 (#855)
Updates that end up in the final binaries (go version -m):

    Updated  github.com/imdario/mergo             https://github.com/imdario/mergo/compare/v0.3.13...v0.3.15
    Updated  github.com/miekg/dns                 https://github.com/miekg/dns/compare/v1.1.52...v1.1.54
    Updated  github.com/prometheus/client_golang  https://github.com/prometheus/client_golang/compare/v1.14.0...v1.15.1
    Updated  github.com/prometheus/client_model   https://github.com/prometheus/client_model/compare/v0.3.0...v0.4.0
    Updated  golang.org/x/crypto                  https://github.com/golang/crypto/compare/v0.7.0...v0.8.0
    Updated  golang.org/x/net                     https://github.com/golang/net/compare/v0.8.0...v0.9.0
    Updated  golang.org/x/sys                     https://github.com/golang/sys/compare/v0.6.0...v0.8.0
    Updated  golang.org/x/term                    https://github.com/golang/term/compare/v0.6.0...v0.8.0
    Updated  google.golang.org/protobuf           v1.29.0...v1.30.0
2023-05-04 15:42:15 -04:00
Wade Simmons
58ec1f7a7b build with go1.20 (#854)
* build with go1.20

This has been out for a bit and is up to go1.20.4. We have been using
go1.20 for the Slack builds and have seen no issues.

* need the quotes

* use go install
2023-05-04 11:35:03 -04:00
Nate Brown
397fe5f879 Add ability to skip installing unsafe routes on the os routing table (#831) 2023-04-10 12:32:37 -05:00
brad-defined
9b03053191 update EncReader and EncWriter interface function args to have concrete types (#844)
* Update LightHouseHandlerFunc to remove EncWriter param.
* Move EncWriter to interface
* EncReader, too
2023-04-07 14:28:37 -04:00
Nate Brown
3cb4e0ef57 Allow listen.host to contain names (#825) 2023-04-05 11:29:26 -05:00
Wade Simmons
e0553822b0 Use NewGCMTLS (when using experiment boringcrypto) (#803)
* Use NewGCMTLS (when using experiment boringcrypto)

This change only affects builds built using `GOEXPERIMENT=boringcrypto`.
When built with this experiment, we use the NewGCMTLS() method exposed by
goboring, which validates that the nonce is strictly monotonically increasing.
This is the TLS 1.2 specification for nonce generation (which also matches the
method used by the Noise Protocol)

- https://github.com/golang/go/blob/go1.19/src/crypto/tls/cipher_suites.go#L520-L522
- https://github.com/golang/go/blob/go1.19/src/crypto/internal/boring/aes.go#L235-L237
- https://github.com/golang/go/blob/go1.19/src/crypto/internal/boring/aes.go#L250
- ae223d6138/include/openssl/aead.h (L379-L381)
- ae223d6138/crypto/fipsmodule/cipher/e_aes.c (L1082-L1093)

* need to lock around EncryptDanger in SendVia

* fix link to test vector
2023-04-05 11:08:23 -04:00
Nate Brown
d3fe3efcb0 Fix handshake retry regression (#842) 2023-04-05 10:04:30 -05:00
Nate Brown
fd99ce9a71 Use fewer test packets (#840) 2023-04-04 13:42:24 -05:00
Wade Simmons
6685856b5d emit certificate.expiration_ttl_seconds metric (#782) 2023-04-03 20:18:16 -05:00
John Maguire
a56a97e5c3 Add ability to encrypt CA private key at rest (#386)
Fixes #8.

`nebula-cert ca` now supports encrypting the CA's private key with a
passphrase. Pass `-encrypt` in order to be prompted for a passphrase.
Encryption is performed using AES-256-GCM and Argon2id for KDF. KDF
parameters default to RFC recommendations, but can be overridden via CLI
flags `-argon-memory`, `-argon-parallelism`, and `-argon-iterations`.
2023-04-03 13:59:38 -04:00
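A hedged sketch of the primitives named above, Argon2id for key derivation and AES-256-GCM for encryption, with illustrative parameters; nebula-cert's actual on-disk format and KDF defaults are defined by the tool itself:

    package main

    import (
        "crypto/aes"
        "crypto/cipher"
        "crypto/rand"
        "fmt"

        "golang.org/x/crypto/argon2"
    )

    // seal encrypts a private key with a passphrase using Argon2id + AES-256-GCM.
    // The Argon2 parameters here are illustrative, not nebula-cert's defaults.
    func seal(passphrase, privateKey []byte) (salt, nonce, ciphertext []byte, err error) {
        salt = make([]byte, 16)
        if _, err = rand.Read(salt); err != nil {
            return
        }
        key := argon2.IDKey(passphrase, salt, 1, 64*1024, 4, 32) // 32-byte key for AES-256

        block, err := aes.NewCipher(key)
        if err != nil {
            return
        }
        gcm, err := cipher.NewGCM(block)
        if err != nil {
            return
        }
        nonce = make([]byte, gcm.NonceSize())
        if _, err = rand.Read(nonce); err != nil {
            return
        }
        ciphertext = gcm.Seal(nil, nonce, privateKey, nil)
        return
    }

    func main() {
        salt, nonce, ct, err := seal([]byte("correct horse"), []byte("fake-private-key-bytes"))
        if err != nil {
            panic(err)
        }
        fmt.Println(len(salt), len(nonce), len(ct))
    }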
Nate Brown
ee8e1348e9 Use connection manager to drive NAT maintenance (#835)
Co-authored-by: brad-defined <77982333+brad-defined@users.noreply.github.com>
2023-03-31 15:45:05 -05:00
Nate Brown
1a6c657451 Normalize logs (#837) 2023-03-30 15:07:31 -05:00
Nate Brown
6b3d42efa5 Use atomic.Pointer for certState (#833) 2023-03-30 13:04:09 -05:00
brad-defined
2801fb2286 Fix relay (#827)
Co-authored-by: Nate Brown <nbrown.us@gmail.com>
2023-03-30 11:09:20 -05:00
Ryan Huber
e28336c5db probes to the lh are not generally useful as recv_error should catch (#408) 2023-03-29 15:09:36 -05:00
Wade Simmons
3e5c7e6860 add punchy.respond_delay config option (#721) 2023-03-29 14:32:35 -05:00
Wade Simmons
8a82e0fb16 ssh: add save-mutex-profile (#737) 2023-03-29 14:30:28 -05:00
Nate Brown
f0ef80500d Remove dead code and re-order transit from pending to main hostmap on stage 2 (#828) 2023-03-17 15:36:24 -05:00
Wade Simmons
61b784d2bb Update dependencies 2023-03 (#824)
List of dependency updates that appear in the final binaries (others are
only used in tests, or don't actually get used by the modules we import):

    Updated	github.com/cespare/xxhash	https://github.com/cespare/xxhash/compare/v2.1.2...v2.2.0
    Updated	github.com/golang/protobuf	https://github.com/golang/protobuf/compare/v1.5.2...v1.5.3
    Updated	github.com/miekg/dns	https://github.com/miekg/dns/compare/v1.1.50...v1.1.52
    Updated	github.com/prometheus/common	https://github.com/prometheus/common/compare/v0.37.0...v0.42.0
    Updated	github.com/prometheus/procfs	https://github.com/prometheus/procfs/compare/v0.8.0...v0.9.0
    Updated	github.com/vishvananda/netns	https://github.com/vishvananda/netns/compare/v0.0.1...v0.0.4
    Updated	golang.org/x/crypto	https://github.com/golang/crypto/compare/v0.3.0...v0.7.0
    Updated	golang.org/x/net	https://github.com/golang/net/compare/v0.2.0...v0.8.0
    Updated	golang.org/x/sys	https://github.com/golang/sys/compare/v0.2.0...v0.6.0
    Updated	golang.org/x/term	https://github.com/golang/term/compare/v0.2.0...v0.6.0
    Updated	golang.zx2c4.com/wintun	415007cec224...0fa3db229ce2
    Updated	google.golang.org/protobuf	v1.28.1...v1.29.0
2023-03-13 15:37:32 -04:00
Caleb Jasik
5da79e2a4c Run make vet in CI (#693) 2023-03-13 15:35:12 -04:00
Wade Simmons
e1af37e46d add calculated_remotes (#759)
* add calculated_remotes

This setting allows us to "guess" what the remote might be for a host
while we wait for the lighthouse response. For networks that are
designed with this in mind, it can help speed up handshake performance, as well as
improve resiliency in the case that all lighthouses are down.

Example:

    lighthouse:
      # ...

      calculated_remotes:
        # For any Nebula IPs in 10.0.10.0/24, this will apply the mask and add
        # the calculated IP as an initial remote (while we wait for the response
        # from the lighthouse). Both CIDRs must have the same mask size.
        # For example, Nebula IP 10.0.10.123 will have a calculated remote of
        # 192.168.1.123

        10.0.10.0/24:
          - mask: 192.168.1.0/24
            port: 4242

* figure out what is up with this test

* add test

* better logic for sending handshakes

Keep track of the last list of hosts we sent handshakes to. Only log
handshake sent messages if the list has changed.

Remove the test Test_NewHandshakeManagerTrigger because it is faulty and
makes no sense. It relies on the fact that no handshake packets actually
get sent, but with these changes we now send packets (which we
should!).

* use atomic.Pointer

* cleanup to make it clearer

* fix typo in example
2023-03-13 15:09:08 -04:00
Wade Simmons
6e0ae4f9a3 firewall: add option to send REJECT replies (#738)
* firewall: add option to send REJECT replies

This change allows you to configure the firewall to send REJECT packets
when a packet is denied.

    firewall:
      # Action to take when a packet is not allowed by the firewall rules.
      # Can be one of:
      #   `drop` (default): silently drop the packet.
      #   `reject`: send a reject reply.
      #     - For TCP, this will be a RST "Connection Reset" packet.
      #     - For other protocols, this will be an ICMP port unreachable packet.
      outbound_action: drop
      inbound_action: drop

These packets are only sent to established tunnels, and only on the
overlay network (currently IPv4 only).

    $ ping -c1 192.168.100.3
    PING 192.168.100.3 (192.168.100.3) 56(84) bytes of data.
    From 192.168.100.3 icmp_seq=2 Destination Port Unreachable

    --- 192.168.100.3 ping statistics ---
    2 packets transmitted, 0 received, +1 errors, 100% packet loss, time 31ms

    $ nc -nzv 192.168.100.3 22
    (UNKNOWN) [192.168.100.3] 22 (?) : Connection refused

This change also modifies the smoke test to capture tcpdump pcaps from
both the inside and outside to inspect what is going on over the wire.
It also now does TCP and UDP packet tests using the Nmap version of
ncat.

* calculate seq and ack the same way as the kernel

The logic is a bit confusing, so we copy it straight from how the kernel
does iptables `--reject-with tcp-reset`:

- https://github.com/torvalds/linux/blob/v5.19/net/ipv4/netfilter/nf_reject_ipv4.c#L193-L221

* cleanup
2023-03-13 15:08:40 -04:00
Caleb Jasik
f0ac61c1f0 Add nebula.plist based on the homebrew nebula LaunchDaemon plist (#762) 2023-03-13 13:16:46 -05:00
Nate Brown
92cc32f844 Remove handshake race avoidance (#820)
Co-authored-by: Wade Simmons <wadey@slack-corp.com>
2023-03-13 12:35:14 -05:00
Nate Brown
2ea360e5e2 Render hostmaps as mermaid graphs in e2e tests (#815) 2023-02-16 13:23:33 -06:00
Caleb Jasik
469ae78748 Add homebrew install method to readme (#630) 2023-02-13 14:42:58 -06:00
Nate Brown
a06977bbd5 Track connections by local index id instead of vpn ip (#807) 2023-02-13 14:41:05 -06:00
John Maguire
5bd8712946 Immediately forward packets from self to self on FreeBSD (#808) 2023-01-23 15:51:54 -06:00
Tricia
0fc4d8192f log network as String to match the other log event in interface.go that emits network (#811)
Co-authored-by: Tricia Bogen <tbogen@slack-corp.com>
2023-01-23 14:05:35 -05:00
Nate Brown
5278b6f926 Generic timerwheel (#804) 2023-01-18 10:56:42 -06:00
Nate Brown
c177126ed0 Fix possible panic in the timerwheels (#802) 2023-01-11 19:35:19 -06:00
John Maguire
c44da3abee Make DNS queries case insensitive (#793) 2022-12-20 16:59:11 -05:00
John Maguire
b7e73da943 Add note indicating modes have usage text (#794) 2022-12-20 16:53:56 -05:00
John Maguire
ff54bfd9f3 Add nebula-cert.exe and cert files to .gitignore (#722) 2022-12-20 16:52:51 -05:00
John Maguire
b5a85a6eb8 Update example config with IPv6 note for allow lists (#742) 2022-12-20 16:50:02 -05:00
Fabio Alessandro Locati
3ae242fa5f Add nss-lookup to the systemd wants (#791)
* Add nss-lookup to the systemd wants to ensure DNS is running before starting nebula

* Add Ansible & example service scripts

* Fix #797

* Align Ansible scripts and examples

Co-authored-by: John Maguire <contact@johnmaguire.me>
2022-12-19 14:42:07 -05:00
Fabio Alessandro Locati
cb2ec861ea Nebula is now in Fedora official repositories (#719) 2022-12-19 14:40:53 -05:00
John Maguire
a3e6edf9c7 Use config.yml consistently (not config.yaml) (#789) 2022-12-19 11:45:15 -06:00
John Maguire
ad7222509d Add a link to mobile nebula in the new issue form (#790) 2022-12-19 11:28:49 -06:00
Caleb Jasik
12dbbd3dd3 Fix typos found by https://github.com/crate-ci/typos (#735) 2022-12-19 11:28:27 -06:00
John Maguire
ec48298fe8 Update config to show aes cipher instead of chacha (#788) 2022-12-07 11:38:56 -06:00
Ian VanSchooten
77769de1e6 Docs: Update doc links (#751)
* Update documentation links

* Update links
2022-11-29 11:32:43 -05:00
Alexander Averyanov
022ae83a4a Fix typo: my -> may (#758) 2022-11-28 13:59:57 -05:00
Wade Simmons
d4f9500ca5 Update dependencies (2022-11) (#780)
* update dependencies

Update to latest dependencies on Nov 21, 2022.

Here are the diffs for deps that actually end up in the binaries (based
on `go version -m`)

    Updated  github.com/imdario/mergo                          https://github.com/imdario/mergo/compare/v0.3.12...v0.3.13
    Updated  github.com/matttproud/golang_protobuf_extensions  https://github.com/matttproud/golang_protobuf_extensions/compare/v1.0.1...v1.0.4
    Updated  github.com/miekg/dns                              https://github.com/miekg/dns/compare/v1.1.48...v1.1.50
    Updated  github.com/prometheus/client_golang               https://github.com/prometheus/client_golang/compare/v1.12.1...v1.14.0
    Updated  github.com/prometheus/client_model                https://github.com/prometheus/client_model/compare/v0.2.0...v0.3.0
    Updated  github.com/prometheus/common                      https://github.com/prometheus/common/compare/v0.33.0...v0.37.0
    Updated  github.com/prometheus/procfs                      https://github.com/prometheus/procfs/compare/v0.7.3...v0.8.0
    Updated  github.com/sirupsen/logrus                        https://github.com/sirupsen/logrus/compare/v1.8.1...v1.9.0
    Updated  github.com/vishvananda/netns                      https://github.com/vishvananda/netns/compare/50045581ed74...v0.0.1
    Updated  golang.org/x/crypto                               https://github.com/golang/crypto/compare/ae2d96664a29...v0.3.0
    Updated  golang.org/x/net                                  https://github.com/golang/net/compare/749bd193bc2b...v0.2.0
    Updated  golang.org/x/sys                                  https://github.com/golang/sys/compare/289d7a0edf71...v0.2.0
    Updated  golang.org/x/term                                 https://github.com/golang/term/compare/03fcf44c2211...v0.2.0
    Updated  google.golang.org/protobuf                        v1.28.0...v1.28.1

* test that mergo merges like we expect
2022-11-23 10:46:41 -05:00
brad-defined
9a8892c526 Fix 756 SSH command line parsing error to write to user instead of stderr (#757) 2022-11-22 20:55:27 -06:00
brad-defined
813b64ffb1 Remove unused variables from connection manager (#677) 2022-11-15 20:33:09 -06:00
John Maguire
85f5849d0b Fix a hang when shutting down Android (#772) 2022-11-11 10:18:43 -06:00
Wade Simmons
9af242dc47 switch to new sync/atomic helpers in go1.19 (#728)
These new helpers make the code a lot cleaner. I confirmed that the
simple helpers like `atomic.Int64` don't add any extra overhead as they
get inlined by the compiler. `atomic.Pointer` adds an extra method call
as it no longer gets inlined, but we aren't using these on the hot path
so it is probably okay.
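
A compact before/after illustration of those helpers (the `certState` type below is just a stand-in for any pointer target):

    package main

    import (
        "fmt"
        "sync/atomic"
    )

    type certState struct{ version int }

    func main() {
        // Before go1.19: a raw int64 plus free-standing atomic calls, which is
        // easy to get wrong if any access forgets the atomic wrapper.
        var old int64
        atomic.AddInt64(&old, 1)

        // With the go1.19 helpers the type itself enforces atomic access.
        var counter atomic.Int64
        counter.Add(1)

        // atomic.Pointer gives the same ergonomics for pointer swaps.
        var state atomic.Pointer[certState]
        state.Store(&certState{version: 1})

        fmt.Println(atomic.LoadInt64(&old), counter.Load(), state.Load().version)
    }
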
2022-10-31 13:37:41 -04:00
Wade Simmons
a800a48857 v1.6.1 (#752)
Update CHANGELOG for Nebula v1.6.1
2022-09-26 13:38:18 -04:00
Nate Brown
4c0ae3df5e Refuse to process double encrypted packets (#741) 2022-09-19 12:47:48 -05:00
Nate Brown
feb3e1317f Add a simple benchmark to e2e tests (#739) 2022-09-01 09:44:58 -05:00
Jon Rafkind
c2259f14a7 explicitly reload config from ssh command (#725) 2022-08-08 12:44:09 -05:00
Nate Brown
b1eeb5f3b8 Support unsafe_routes on mobile again (#729) 2022-08-05 09:58:10 -05:00
Nate Brown
2adf0ca1d1 Use issue templates to improve bug reports (#726) 2022-07-29 12:57:05 -05:00
Nate Brown
92dfccf01a v1.6.0 (#701)
Update CHANGELOG for Nebula v1.6.0

Co-authored-by: Wade Simmons <wsimmons@slack-corp.com>
Co-authored-by: brad-defined <77982333+brad-defined@users.noreply.github.com>
2022-06-30 16:15:18 -04:00
brad-defined
38e495e0d2 Remove EXPERIMENTAL text from routines example config. (#702) 2022-06-30 11:20:41 -04:00
brad-defined
78a0255c91 typeos (#700) 2022-06-29 11:19:20 -04:00
brad-defined
169cdbbd35 Immediately forward packets received on the nebula TUN device from self to self (#501)
* Immediately forward packets received on the nebula TUN device with a destination of our Nebula VPN IP right back out that same TUN device on MacOS.
2022-06-27 14:36:10 -04:00
Nate Brown
0d1ee4214a Add relay e2e tests and output some mermaid sequence diagrams (#691) 2022-06-27 12:33:29 -05:00
Wade Simmons
7b9287709c add listen.send_recv_error config option (#670)
By default, Nebula replies to packets it has no tunnel for with a `recv_error` packet. This packet helps speed up re-connection
in the case that Nebula on either side did not shut down cleanly. This response can be abused as a way to discover if Nebula is running
on a host, though. This option lets you configure whether to send `recv_error` packets always, never, or only to private network remotes.
Valid values: `always`, `never`, `private`.

This setting is reloadable with SIGHUP.
2022-06-27 12:37:54 -04:00
Wade Simmons
85ec807b7e reserve NebulaHandshakeDetails fields for multiport (#674)
We are currently testing changes for multiport (related to #497) that
use fields 6 and 7 in the protobuf. Reserved these fields so that when
we eventually open the PR we are backwards compatible with any future
changes.
2022-06-27 12:07:05 -04:00
John Maguire
a0b280621d Remove firewall.conntrack.max_connections from examples (#684) 2022-06-23 10:29:54 -05:00
Caleb Jasik
527f953c2c Remove x509 config loading code (#685) 2022-06-23 10:27:34 -05:00
brad-defined
1a7c575011 Relay (#678)
Co-authored-by: Wade Simmons <wsimmons@slack-corp.com>
2022-06-21 13:35:23 -05:00
Don Stephan
332fa2b825 fix panic in handleInvalidCertificate (#675)
* fix panic in handleInvalidCertificate

when HandleMonitorTick fires, the hostmap can be nil which causes a panic to occur when trying to clean up the hostmap in handleInvalidCertificate. This fix just stops the invalidation from continuing if the hostmap doesn't exist.

* removed conditional for disconnectInvalid in HandleDeletionTick
2022-05-16 13:29:57 -04:00
Wade Simmons
45d1d2b6c6 Update dependencies - 2022-04 (#664)
    Updated  github.com/kardianos/service         https://github.com/kardianos/service/compare/v1.2.0...v1.2.1
    Updated  github.com/miekg/dns                 https://github.com/miekg/dns/compare/v1.1.43...v1.1.48
    Updated  github.com/prometheus/client_golang  https://github.com/prometheus/client_golang/compare/v1.11.0...v1.12.1
    Updated  github.com/prometheus/common         https://github.com/prometheus/common/compare/v0.32.1...v0.33.0
    Updated  github.com/stretchr/testify          https://github.com/stretchr/testify/compare/v1.7.0...v1.7.1
    Updated  golang.org/x/crypto                  5770296d90...ae2d96664a
    Updated  golang.org/x/net                     69e39bad7d...749bd193bc
    Updated  golang.org/x/sys                     7861aae155...289d7a0edf
    Updated  golang.zx2c4.com/wireguard/windows   v0.5.1...v0.5.3
    Updated  google.golang.org/protobuf           v1.27.1...v1.28.0
2022-04-18 12:12:25 -04:00
Wade Simmons
3913062c43 build and test with go1.18 (#656)
- https://go.dev/doc/go1.18
2022-04-05 17:08:00 -04:00
Wade Simmons
b38bd36766 fix connection manager check when disconnect_invalid set (#658)
This restores the hostMap.QueryVpnIP block to how it looked before #370
was merged. I'm not sure why the patch from #370 wanted to continue on
if there was no match found in the hostmap, since there isn't anything
to do at that point (the tunnel has already been closed).

This was causing a crash because the handleInvalidCertificate check
expects the hostinfo to be passed in (but it is nil since there was no
hostinfo in the hostmap).

Fixes: #657
2022-04-04 13:38:36 -04:00
Nate Brown
d85e24f49f Allow for self reported ips to the lighthouse (#650) 2022-04-04 12:35:23 -05:00
bitshop
7672c7087a Add to build all windows-arm64 / bin-windows-arm64 build option (#638)
* Add to build all windows-arm64 / bin-winarm64 builds

* update release to build for windows-arm64

* cleanup

Co-authored-by: Wade Simmons <wsimmons@slack-corp.com>
2022-03-18 13:23:10 -04:00
Caleb Jasik
730a5c4a23 Update link to nebula docs (#655) 2022-03-18 11:15:16 -04:00
brad-defined
03498a0cb2 Make nebula advertise its dynamic port to lighthouses (#653) 2022-03-15 18:03:56 -05:00
Nate Brown
312a01dc09 Lighthouse reload support (#649)
Co-authored-by: John Maguire <contact@johnmaguire.me>
2022-03-14 12:35:13 -05:00
Nate Brown
bbe0a032bb Fix windows unsafe_routes regression (#648) 2022-03-09 13:23:29 -06:00
Wade Simmons
b5b9d33ee7 v1.5.2 (#612)
Update CHANGELOG for Nebula v1.5.2
2021-12-14 16:48:56 -05:00
Wade Simmons
e434ba6523 fix unsafe routes darwin (#610)
With Nebula 1.4.0, if you create an unsafe_route that has a collision with an existing route on the system, the unsafe_route will be silently dropped (and the existing system route remains).

With Nebula 1.5.0, this same situation will cause Nebula to fail to start with an error (EEXIST).

This change restores the Nebula 1.4.0 behavior (but with a WARN log as well).
2021-12-14 11:52:49 -05:00
Wade Simmons
068a93d1f4 fix makeRouteTree allowMTU (#611)
With the previous implementation, we check if route.MTU is greater than zero,
but it will always be because we set it to the default MTU in
parseUnsafeRoutes. This change leaves it as zero in parseUnsafeRoutes so
it can be examined later.
2021-12-14 11:52:28 -05:00
Nate Brown
15fdabc3ab v1.5.1 (#606)
Update CHANGELOG for Nebula v1.5.1
2021-12-13 20:43:25 -05:00
forfuncsake
1110756f0f Allow setup of a CA pool from bytes that contain expired certs (#599)
Co-authored-by: Nate Brown <nbrown.us@gmail.com>
2021-12-09 21:24:56 -06:00
Nate Brown
e31006d546 Be more clear about ipv4 in nebula-cert (#604) 2021-12-07 21:40:30 -06:00
Wade Simmons
949ec78653 don't set ConnectionState to nil (#590)
* don't set ConnectionState to nil

We might have packets processing in another thread, so we can't safely
just set this to nil. Since we removed it from the hostmaps, the next
packets to process should start the handshake over again.

I believe this comment is outdated or incorrect; since the next
handshake will start over with a new HostInfo, I don't think there is
any way a counter reuse could happen:

> We must null the connectionstate or a counter reuse may happen

Here is a panic we saw that I think is related:

    panic: runtime error: invalid memory address or nil pointer dereference
    [signal SIGSEGV: segmentation violation code=0x1 addr=0x20 pc=0x93a037]
    goroutine 59 [running, locked to thread]:
    github.com/slackhq/nebula.(*Firewall).Drop(...)
            github.com/slackhq/nebula/firewall.go:380
    github.com/slackhq/nebula.(*Interface).consumeInsidePacket(...)
            github.com/slackhq/nebula/inside.go:59
    github.com/slackhq/nebula.(*Interface).listenIn(...)
            github.com/slackhq/nebula/interface.go:233
    created by github.com/slackhq/nebula.(*Interface).run
            github.com/slackhq/nebula/interface.go:191

* use closeTunnel
2021-12-06 14:09:05 -05:00
Wade Simmons
127a116bfd update golang.org/x/crypto (#603)
> Version v0.0.0-20211202192323-5770296d904e of golang.org/x/crypto fixes a vulnerability in the golang.org/x/crypto/ssh package which allowed unauthenticated clients to cause a panic in SSH servers.
>
> This issue was discovered and reported by Rod Hynes, Psiphon Inc., and is tracked as CVE-2021-43565 and Issue golang/go#49932.

    Updated  golang.org/x/crypto  089bfa5675...5770296d90
    Updated  golang.org/x/net     4a448f8816...69e39bad7d
2021-12-06 14:07:05 -05:00
Wade Simmons
befce3f990 fix crash with -test (#602)
When running in `-test` mode, `tun` is set to nil. So we should move the
defer into the `!configTest` if block.

    panic: runtime error: invalid memory address or nil pointer dereference
    [signal SIGSEGV: segmentation violation code=0x1 addr=0x28 pc=0x54855c]

    goroutine 1 [running]:
    github.com/slackhq/nebula.Main.func3(0x4000135e80, {0x0, 0x0})
            github.com/slackhq/nebula/main.go:176 +0x2c
    github.com/slackhq/nebula.Main(0x400022e060, 0x1, {0x76faa0, 0x5}, 0x4000230000, 0x0)
            github.com/slackhq/nebula/main.go:316 +0x2414
    main.main()
            github.com/slackhq/nebula/cmd/nebula/main.go:54 +0x540
2021-12-06 14:06:16 -05:00
Wade Simmons
f60ed2b36d overlay: fix tun.RouteFor getting *net.IP (#595)
tun.RouteFor expects the routeTree to have an iputil.VpnIp inside of it
instead of a *net.IP.
2021-12-06 09:35:31 -05:00
Nate Brown
48c47f5841 Warn if no lighthouses were configured on a non lighthouse node (#587) 2021-11-30 10:31:33 -06:00
Wade Simmons
75306487c5 fix wintun package to have // +build comments (#598)
Without these comments, gofmt 1.16.9 will complain. Since we otherwise
still support building with go1.16, let's add the comments to make it
easier to compile and gofmt.

Related: #588
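
For example, a Windows-only file carries both constraint forms while go1.16 is still supported (a generic sketch, not the exact wintun source):

    //go:build windows
    // +build windows

    // Keeping both the new //go:build line and the legacy // +build comment
    // lets go1.16 and go1.17+ toolchains (and their gofmt) agree on the file.
    package wintun
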
2021-11-30 11:14:15 -05:00
Nate Brown
78d0d46bae Remove WriteRaw, cidrTree -> routeTree to better describe its purpose, remove redundancy from field names (#582) 2021-11-12 12:47:09 -06:00
Nate Brown
467e605d5e Push route handling into overlay, a few more nits fixed (#581) 2021-11-12 11:19:28 -06:00
Nate Brown
2f1f0d602f Cleanup most of the remaining nits (#578) 2021-11-12 10:47:36 -06:00
Nate Brown
e07524a654 Move all of tun into overlay (#577) 2021-11-11 16:37:29 -06:00
Nate Brown
88ce0edf76 Start the overlay package with the old Inside interface (#576) 2021-11-10 21:52:26 -06:00
Nate Brown
4453964e34 Move util to test, contextual errors to util (#575) 2021-11-10 21:47:38 -06:00
Wade Simmons
19a9a4221e v1.5.0 (#574)
Update CHANGELOG for Nebula v1.5.0
2021-11-10 22:32:26 -05:00
Chad Harp
1915fab619 tun_darwin (#163)
- Remove water and replace with syscalls for tun setup
- Support named interfaces
- Set up routes with syscalls instead of os/exec

Co-authored-by: Wade Simmons <wade@wades.im>
2021-11-09 20:24:24 -05:00
Nate Brown
7801b589b6 Sign and notarize darwin universal binaries (#571) 2021-11-09 10:49:54 -06:00
Nate Brown
b6391292d1 Move wintun distributable into release zip for windows (#572) 2021-11-08 21:55:10 -06:00
Terry Wang
999efdb2e8 docs: improve grammar and readability for README.md (#225) 2021-11-08 17:32:31 -06:00
Wade Simmons
304b12f63f create ConnectionState before adding to HostMap (#535)
We have a few small race conditions with creating the HostInfo.ConnectionState
since we add the host info to the pendingHostMap before we set this
field. We can make everything a lot easier if we just add an "init"
function so that we can set this field in the hostinfo before we add it
to the hostmap.
2021-11-08 14:46:22 -05:00
CzBiX
16be0ce566 Add Wintun support (#289) 2021-11-08 12:36:31 -06:00
John Maguire
0577c097fb Fix flaky test (#567) 2021-11-04 14:49:56 -05:00
Jake Howard
eb66e13dc4 Use CGO_ENABLED=0 (#421)
Set `CGO_ENABLED` to 0 when building
2021-11-04 14:20:44 -04:00
Wade Simmons
a22c134bf5 Update dependencies, November 2021 (#564)
*Direct Dependencies*

    Updated  github.com/anmitsu/go-shlex                648efa6222...38f4b401e2
    Updated  github.com/flynn/noise                     https://github.com/flynn/noise/compare/4bdb43be3117...v1.0.0
    Updated  github.com/golang/protobuf                 https://github.com/golang/protobuf/compare/v1.5.0...v1.5.2
    Updated  github.com/kardianos/service               https://github.com/kardianos/service/compare/v1.1.0...v1.2.0
    Updated  github.com/miekg/dns                       https://github.com/miekg/dns/compare/v1.1.25...v1.1.43
    Updated  github.com/nbrownus/go-metrics-prometheus  https://github.com/nbrownus/go-metrics-prometheus/compare/6e6d5173d99c...974a6260965f
    Updated  github.com/prometheus/client_golang        https://github.com/prometheus/client_golang/compare/v1.2.1...v1.11.0
    Updated  github.com/rcrowley/go-metrics             https://github.com/rcrowley/go-metrics/compare/cac0b30c2563...cf1acfcdf475
    Updated  github.com/sirupsen/logrus                 https://github.com/sirupsen/logrus/compare/v1.4.2...v1.8.1
    Updated  github.com/songgao/water                   https://github.com/songgao/water/compare/fd331bda3f4b...2b4b6d7c09d8
    Updated  github.com/stretchr/testify                https://github.com/stretchr/testify/compare/v1.6.1...v1.7.0
    Updated  github.com/vishvananda/netlink             https://github.com/vishvananda/netlink/compare/00009fb8606a...v1.1.0
    Updated  golang.org/x/crypto                        https://github.com/golang/crypto/compare/0c34fe9e7dc2...089bfa567519
    Updated  golang.org/x/net                           https://github.com/golang/net/compare/e18ecbb05110...4a448f8816b3
    Updated  golang.org/x/sys                           https://github.com/golang/sys/compare/f84b799fce68...4dd72447c267
    Updated  google.golang.org/protobuf                 v1.26.0...v1.27.1
    Updated  gopkg.in/yaml.v2                           v2.2.7...v2.4.0

*Indirect Dependencies*

    Updated  github.com/alecthomas/units                         https://github.com/alecthomas/units/compare/c3de453c63f4...f65c72e2690d
    Updated  github.com/cespare/xxhash                           https://github.com/cespare/xxhash/compare/v2.1.1...v2.1.2
    Updated  github.com/go-logfmt/logfmt                         https://github.com/go-logfmt/logfmt/compare/v0.4.0...v0.5.0
    Updated  github.com/json-iterator/go                         https://github.com/json-iterator/go/compare/v1.1.7...v1.1.11
    Updated  github.com/julienschmidt/httprouter                 https://github.com/julienschmidt/httprouter/compare/v1.2.0...v1.3.0
    Updated  github.com/konsorten/go-windows-terminal-sequences  https://github.com/konsorten/go-windows-terminal-sequences/compare/v1.0.2...v1.0.3
    Updated  github.com/mwitkow/go-conntrack                     https://github.com/mwitkow/go-conntrack/compare/cc309e4a2223...2f068394615f
    Updated  github.com/pkg/errors                               https://github.com/pkg/errors/compare/v0.8.1...v0.9.1
    Updated  github.com/prometheus/client_model                  https://github.com/prometheus/client_model/compare/d1d2010b5bee...v0.2.0
    Updated  github.com/prometheus/common                        https://github.com/prometheus/common/compare/v0.7.0...v0.32.1
    Updated  github.com/prometheus/procfs                        https://github.com/prometheus/procfs/compare/v0.0.8...v0.7.3
    Updated  github.com/vishvananda/netns                        https://github.com/vishvananda/netns/compare/0a2b9b5464df...50045581ed74
    Updated  golang.org/x/sync                                   https://github.com/golang/sync/compare/67f06af15bc9...036812b2e83c
    Updated  golang.org/x/term                                   https://github.com/golang/term/compare/7de9c90e9dd1...03fcf44c2211
    Updated  golang.org/x/text                                   https://github.com/golang/text/compare/v0.3.3...v0.3.6
    Added    cloud.google.com/go                                 v0.65.0
    Added    cloud.google.com/go/bigquery                        v1.8.0
    Added    cloud.google.com/go/datastore                       v1.1.0
    Added    cloud.google.com/go/pubsub                          v1.3.1
    Added    cloud.google.com/go/storage                         v1.10.0
    Added    dmitri.shuralyov.com/gpu/mtl                        666a987793e9
    Added    github.com/BurntSushi/toml                          https://github.com/BurntSushi/toml/tree/v0.3.1
    Added    github.com/BurntSushi/xgb                           https://github.com/BurntSushi/xgb/tree/27f122750802
    Added    github.com/census-instrumentation/opencensus-proto  https://github.com/census-instrumentation/opencensus-proto/tree/v0.2.1
    Added    github.com/chzyer/logex                             https://github.com/chzyer/logex/tree/v1.1.10
    Added    github.com/chzyer/readline                          https://github.com/chzyer/readline/tree/2972be24d48e
    Added    github.com/chzyer/test                              https://github.com/chzyer/test/tree/a1ea475d72b1
    Added    github.com/client9/misspell                         https://github.com/client9/misspell/tree/v0.3.4
    Added    github.com/cncf/udpa/go                             https://github.com/cncf/udpa/go/tree/269d4d468f6f
    Added    github.com/envoyproxy/go-control-plane              https://github.com/envoyproxy/go-control-plane/tree/v0.9.4
    Added    github.com/envoyproxy/protoc-gen-validate           https://github.com/envoyproxy/protoc-gen-validate/tree/v0.1.0
    Added    github.com/go-gl/glfw                               https://github.com/go-gl/glfw/tree/e6da0acd62b1
    Added    github.com/go-gl/glfw/v3.3/glfw                     https://github.com/go-gl/glfw/v3.3/glfw/tree/6f7a984d4dc4
    Added    github.com/go-kit/log                               https://github.com/go-kit/log/tree/v0.1.0
    Added    github.com/golang/glog                              https://github.com/golang/glog/tree/23def4e6c14b
    Added    github.com/golang/groupcache                        https://github.com/golang/groupcache/tree/8c9f03a8e57e
    Added    github.com/golang/mock                              https://github.com/golang/mock/tree/v1.4.4
    Added    github.com/google/btree                             https://github.com/google/btree/tree/v1.0.0
    Added    github.com/google/martian                           https://github.com/google/martian/tree/v2.1.0+incompatible
    Added    github.com/google/martian                           https://github.com/google/martian/tree/v3.0.0
    Added    github.com/google/pprof                             https://github.com/google/pprof/tree/1a94d8640e99
    Added    github.com/google/renameio                          https://github.com/google/renameio/tree/v0.1.0
    Added    github.com/googleapis/gax-go                        https://github.com/googleapis/gax-go/tree/v2.0.5
    Added    github.com/hashicorp/golang-lru                     https://github.com/hashicorp/golang-lru/tree/v0.5.1
    Added    github.com/ianlancetaylor/demangle                  https://github.com/ianlancetaylor/demangle/tree/5e5cf60278f6
    Added    github.com/jpillora/backoff                         https://github.com/jpillora/backoff/tree/v1.0.0
    Added    github.com/jstemmer/go-junit-report                 https://github.com/jstemmer/go-junit-report/tree/v0.9.1
    Added    github.com/rogpeppe/go-internal                     https://github.com/rogpeppe/go-internal/tree/v1.3.0
    Added    go.opencensus.io                                    v0.22.4
    Added    golang.org/x/exp                                    https://github.com/golang/exp/tree/6cc2880d07d6
    Added    golang.org/x/image                                  https://github.com/golang/image/tree/cff245a6509b
    Added    golang.org/x/mobile                                 https://github.com/golang/mobile/tree/d2bd2a29d028
    Added    golang.org/x/oauth2                                 https://github.com/golang/oauth2/tree/f6687ab2804c
    Added    golang.org/x/time                                   https://github.com/golang/time/tree/555d28b269f0
    Added    google.golang.org/api                               v0.30.0
    Added    google.golang.org/appengine                         v1.6.6
    Added    google.golang.org/genproto                          8632dd797987
    Added    google.golang.org/grpc                              v1.31.0
    Added    gopkg.in/errgo.v2                                   v2.1.0
    Added    honnef.co/go/tools                                  v0.0.1-2020.1.4
    Added    rsc.io/binaryregexp                                 v0.2.0
    Added    rsc.io/quote                                        v3.1.0
    Added    rsc.io/sampler                                      v1.3.0
    Removed  github.com/flynn/go-shlex                           https://github.com/flynn/go-shlex/tree/3f9db97f8568
2021-11-04 10:25:13 -04:00
Nate Brown
94aaab042f Fix race between punchback and lighthouse handler reset (#566) 2021-11-03 21:54:27 -05:00
Donatas Abraitis
b358bbab80 Add an ability to specify metric for unsafe routes (#474) 2021-11-03 21:53:28 -05:00
Nate Brown
bcabcfdaca Rework some things into packages (#489) 2021-11-03 20:54:04 -05:00
Nate Brown
1f75fb3c73 Add link to further documentation (#563) 2021-11-02 20:55:34 -05:00
brad-defined
6ae8ba26f7 Add a context object in nebula.Main to clean up on error (#550) 2021-11-02 13:14:26 -05:00
Nate Brown
32cd9a93f1 Bump to go1.17 (#553) 2021-10-21 16:24:11 -05:00
Nate Brown
97afe2ec48 Update changelog for #370 (#551) 2021-10-20 14:36:56 -05:00
Donatas Abraitis
32e2619323 Teardown tunnel automatically if peer's certificate expired (#370) 2021-10-20 13:23:33 -05:00
Wade Simmons
e8b08e49e6 update CHANGELOG for 532, 540 and 541 (#549)
- #532
- #540
- #541

Also fix some whitespace
2021-10-19 11:07:31 -04:00
Wade Simmons
ea2c186a77 remote_allow_ranges: allow inside CIDR specific remote_allow_lists (#540)
This allows you to configure remote allow lists specific to different
subnets of the inside CIDR. Example:

    remote_allow_ranges:
      10.42.42.0/24:
        192.168.0.0/16: true

This would only allow hosts with a VPN IP in the 10.42.42.0/24 range to
have private IPs (and thus not connect over public IPs).

The PR also refactors AllowList into RemoteAllowList and LocalAllowList to make it clearer which methods are allowed on which allow list.
2021-10-19 10:54:30 -04:00
Wade Simmons
ae5505bc74 handshake: update to preferred remote (#532)
If we receive a handshake packet for a tunnel that has already been
completed, check to see if the new remote is preferred. If so, update to
the preferred remote and send a test packet to influence the other side
to do the same.
2021-10-19 10:53:55 -04:00
Wade Simmons
afda79feac documented "preferred_ranges" (#541)
Document the preferred config variable, and deprecate "local_range".
2021-10-19 10:53:36 -04:00
rvalue
0e7bc290f8 Fix build on riscv64 (#542)
Add riscv64 build tag for udp_linux_64.go to fix build on riscv64

Co-authored-by: Wade Simmons <wade@wades.im>
2021-10-13 10:55:32 -04:00
Manuel Romei
3a8f533b24 refactor: use X25519 instead of ScalarBaseMult (#533)
As suggested in https://pkg.go.dev/golang.org/x/crypto/curve25519#ScalarBaseMult,
use X25519 instead of ScalarBaseMult. When using Basepoint, it may employ
some precomputed values, enhancing performance.
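
A small sketch of the substitution (variable names are illustrative):

    package main

    import (
        "crypto/rand"
        "fmt"

        "golang.org/x/crypto/curve25519"
    )

    func main() {
        priv := make([]byte, 32)
        if _, err := rand.Read(priv); err != nil {
            panic(err)
        }

        // Older style: ScalarBaseMult over fixed-size arrays.
        var privArr, pubArr [32]byte
        copy(privArr[:], priv)
        curve25519.ScalarBaseMult(&pubArr, &privArr)

        // Preferred: X25519 with Basepoint, which can use precomputed tables.
        pub, err := curve25519.X25519(priv, curve25519.Basepoint)
        if err != nil {
            panic(err)
        }
        fmt.Printf("%x\n%x\n", pubArr[:], pub)
    }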

Co-authored-by: Wade Simmons <wade@wades.im>
Co-authored-by: Wade Simmons <wadey@slack-corp.com>
2021-10-12 12:03:43 -04:00
John Maguire
34d002d695 Check CA cert and key match in nebula-cert sign (#503)
`func (nc *NebulaCertificate) VerifyPrivateKey(key []byte) error` would
previously return an error even if passed the correct private key for a
CA certificate `nc`.

That function has been updated to support CA certificates, and
nebula-cert now calls it before signing a new certificate. Previously,
it would perform all constraint checks against the CA certificate
provided, take a SHA256 fingerprint of the provided certificate, insert
it into the new node certificate, and then finally sign it with the
mismatching private key provided.
2021-10-01 12:43:33 -04:00
Ben Yanke
9f34c5e2ba Typo Fix (#523) 2021-09-16 00:12:08 -05:00
Joe Doss
3f5caf67ff Add info about Distribution Packages. (#414) 2021-09-15 17:57:35 -05:00
Stan Grishin
e01213cd21 Update README.md (#378)
Add missing period.
2021-09-15 17:50:01 -05:00
Jack Adamson
af3674ac7b add peer cert issuer to handshake log entries (#510)
Co-authored-by: Jack Adamson <jackadamson@users.noreply.github.com>
2021-08-31 11:57:38 +10:00
Nate Brown
c726d20578 Fix single command ssh exec (#483) 2021-06-07 17:06:59 -05:00
Andrii Chubatiuk
d13f4b5948 fixed recv_errors spoofing condition (#482)
Hi @nbrownus
Fixed a small bug that was introduced in
df7c7ee#diff-5d05d02296a1953fd5fbcb3f4ab486bc5f7c34b14c3bdedb068008ec8ff5beb4
and was causing problems for us.
2021-06-03 13:04:04 -04:00
Nate Brown
2e1d6743be v1.4.0 (#458)
Update CHANGELOG for Nebula v1.4.0

Co-authored-by: Wade Simmons <wade@wades.im>
2021-05-10 21:23:49 -04:00
Nate Brown
d004fae4f9 Unlock the hostmap quickly, lock hostinfo instead (#459) 2021-05-05 13:10:55 -05:00
Nate Brown
95f4c8a01b Don't check for rebind if we are closing the tunnel (#457) 2021-05-04 19:15:24 -05:00
Nate Brown
9ff73cb02f Increase the timestamp resolution for handshakes (#453) 2021-05-03 14:10:00 -05:00
John Maguire
98c391396c Remove log when no handshake message is sent (#452) 2021-04-30 18:19:40 -05:00
Nate Brown
1bc6f5fe6c Minor windows focused improvements (#443)
Co-authored-by: Wade Simmons <wadey@slack-corp.com>
2021-04-30 15:04:47 -05:00
Wade Simmons
44cb697552 Add more metrics (#450)
* Add more metrics

This change adds the following counter metrics:

Metrics to track packets dropped at the firewall:

    firewall.dropped.local_ip
    firewall.dropped.remote_ip
    firewall.dropped.no_rule

Metrics to track handshakes attempts that have been initiated and ones
that have timed out (ones that have completed are tracked by the
existing "handshakes" histogram).

    handshake_manager.initiated
    handshake_manager.timed_out

Metrics to track when cached_packets are dropped because we run out of
buffer space, and how many are sent once the handshake completes.

    hostinfo.cached_packets.dropped
    hostinfo.cached_packets.sent

This change also notes how many cached packets we have when we log the
final "Handshake received" message for either stage1 for stage2.

* separate incoming/outgoing metrics

* remove "allowed" firewall metrics

We don't need these on the hot path; they aren't worth it.

* don't need pointers here
2021-04-27 22:23:18 -04:00
Nathan Brown
db23fdf9bc Dont apply race avoidance to existing handshakes, use the handshake time to determine who wins (#451)
Co-authored-by: Wade Simmons <wadey@slack-corp.com>
2021-04-27 21:15:34 -05:00
Nathan Brown
df7c7eec4a Get out faster on nil udpAddr (#449) 2021-04-26 20:21:47 -05:00
Nathan Brown
6f37280e8e Fully close tunnels when CloseAllTunnels is called (#448) 2021-04-26 10:42:24 -05:00
Nathan Brown
a0735dd7d5 Add locking around ssh conns to avoid concurrent map access on reload (#447) 2021-04-23 14:43:16 -05:00
Nathan Brown
1deb5d98e8 Fix tun funcs for ios and android (#446) 2021-04-22 15:23:40 -05:00
Nathan Brown
a1ee521d79 Fix a failed return in an error case (#445) 2021-04-17 18:47:31 -05:00
brad-defined
7859140711 Only set serveDns if the host is also configured to be a lighthouse. (#433) 2021-04-16 13:33:56 -05:00
brad-defined
17106f83a0 Ensure the Nebula device exists before attempting to bind to the Nebula IP (#375) 2021-04-16 10:34:28 -05:00
Nathan Brown
ab08be1e3e Don't panic on a nil response from the lighthouse (#442) 2021-04-15 09:12:21 -05:00
Nathan Brown
710df6a876 Refactor remotes and handshaking to give every address a fair shot (#437) 2021-04-14 13:50:09 -05:00
John Maguire
20bef975cd Remove obsolete systemd unit settings (take 2) (#438) 2021-04-07 12:02:40 -05:00
Nathan Brown
480036fbc8 Remove unused structs in hostmap.go (#430) 2021-04-01 22:07:11 -05:00
Nathan Brown
1499be3e40 Fix name resolution for host names in config (#431) 2021-04-01 21:48:41 -05:00
Nathan Brown
64d8e5aa96 More LH cleanup (#429) 2021-04-01 10:23:31 -05:00
Nathan Brown
75f7bda0a4 Lighthouse performance pass (#418) 2021-03-31 17:32:02 -05:00
Nathan Brown
e7e55618ff Include bad backets in the good handshake test (#428) 2021-03-31 13:36:10 -05:00
Nathan Brown
0c2e5973e1 Simple lie test (#427) 2021-03-31 10:26:35 -05:00
Nathan Brown
830d6d4639 Start of end to end testing with a good handshake between two nodes (#425) 2021-03-29 14:29:20 -05:00
Nathan Brown
883e09a392 Don't use a global ca pool (#426) 2021-03-29 12:10:19 -05:00
Wade Simmons
4603b5b2dd fix PromoteEvery check (#424)
This check was accidentally typo'd in #396 from `%` to `&`. Restore the
correct functionality here (we want to do the check every "PromoteEvery"
count packets).
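
In other words (with an illustrative constant):

    package main

    import "fmt"

    const promoteEvery = 1000 // illustrative value

    func main() {
        for counter := uint64(1); counter <= 3000; counter++ {
            // Correct check: fires once every promoteEvery packets.
            if counter%promoteEvery == 0 {
                fmt.Println("promotion check at packet", counter)
            }
            // The typo'd version, counter&promoteEvery == 0, was a bitwise AND
            // and fired on an irregular pattern of counter values instead.
        }
    }
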
2021-03-26 15:01:05 -04:00
Wade Simmons
a71541fb0b export build version as a prometheus label (#405)
This is how Prometheus recommends you do it, and how they do it
themselves in their client. This makes it easy to see which versions you
have deployed in your fleet, and query over it too.
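
The usual shape of that pattern is a constant gauge carrying the version as a label (a generic sketch using the Prometheus Go client; the metric and variable names are illustrative, not Nebula's):

    package main

    import "github.com/prometheus/client_golang/prometheus"

    // Build would normally be injected at compile time, e.g.
    // go build -ldflags "-X main.Build=1.6.1".
    var Build = "dev"

    func registerBuildInfo(reg prometheus.Registerer) {
        info := prometheus.NewGaugeVec(prometheus.GaugeOpts{
            Name: "nebula_build_info",
            Help: "Constant 1, labeled with the running build version.",
        }, []string{"version"})
        reg.MustRegister(info)

        // The value is always 1; the useful data lives in the label, so a query
        // like count by (version) (nebula_build_info) shows the fleet's versions.
        info.WithLabelValues(Build).Set(1)
    }

    func main() {
        registerBuildInfo(prometheus.DefaultRegisterer)
    }
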
2021-03-26 14:16:35 -04:00
Nathan Brown
3ea7e1b75f Don't use a global logger (#423) 2021-03-26 09:46:30 -05:00
Nathan Brown
7a9f9dbded Don't craft buffers if we don't need them (#416) 2021-03-22 18:25:06 -05:00
Nathan Brown
7073d204a8 IPv6 support for outside (udp) (#369) 2021-03-18 20:37:24 -05:00
Joe Doss
9e94442ce7 Add fedora dist files. (#413) 2021-03-18 12:33:43 -07:00
Joe Doss
13471f5792 Remove obsolete systemd unit settings. (#412) 2021-03-18 12:29:36 -07:00
Thomas Roten
ea07a89cc8 Ensure mutex is unlocked when adding remote IP. (#406)
Currently, if you use the remote allow list config, as soon as you attempt to create a tunnel to a node that has a blocked IP address, a mutex is locked and never unlocked. This happens even if the node has an allowed remote IP address in addition to the blocked remote IP address.

This pull request ensures that the lighthouse mutex is unlocked whenever we attempt to add a remote IP.
2021-03-16 12:41:35 -04:00
Ryan Huber
3aaaea6309 don't allow a useless handshake with yourself (#402)
* don't allow a useless handshake with yourself

* remove helper
2021-03-15 12:58:23 -07:00
Wade Simmons
5506da3de9 Fix selection of UDP remote to use during stage2 (#404)
The change for #401 incorrectly called HostInfo.ForcePromoteBest in
stage2, when really we want to pick the remote that we received the
response from.
2021-03-12 21:43:24 -05:00
Wade Simmons
6c55d67f18 Refactor handshake_ix (#401)
There are some subtle race conditions with the previous handshake_ix implementation, mostly around collisions with localIndexId. This change refactors it so that we have a "commit" phase during the handshake where we grab the lock for the hostmap and ensure that we have a unique local index before storing it. We also now avoid using the pending hostmap at all for receiving stage1 packets, since we have everything we need to just store the completed handshake.

Co-authored-by: Nate Brown <nbrown.us@gmail.com>
Co-authored-by: Ryan Huber <rhuber@gmail.com>
Co-authored-by: forfuncsake <drussell@slack-corp.com>
2021-03-12 14:16:25 -05:00
Wade Simmons
64d8035d09 fix race in getOrHandshake (#400)
We missed this race with #396 (and I think this is also the crash in
issue #226). We need to lock a little higher in the getOrHandshake
method, before we reset hostinfo.ConnectionInfo. Previously, two
routines could enter this section and confuse the handshake process.

This could result in the other side sending a recv_error that also has
a race with setting hostinfo.ConnectionInfo back to nil. So we make sure
to grab the lock in handleRecvError as well.

Neither of these code paths are in the hot path (handling packets
between two hosts over an active tunnel) so there should be no
performance concerns.
2021-03-09 09:27:02 -05:00
Ryan Huber
73a5ed90b2 Do not allow someone to run a nebula lighthouse with an ephemeral port (#399)
* Do not allow someone to run a nebula lighthouse with an ephemeral port

* derp - we discover the port so we have to check the config setting

* No context needed for this error

* gofmt yourself

* Revert "gofmt yourself"

This reverts commit c01423498e.

* Revert "No context needed for this error"

This reverts commit 6792af6846.

* snip snap snip snap
2021-03-08 12:42:06 -08:00
Wade Simmons
d604270966 Fix most known data races (#396)
This change fixes all of the known data races that `make smoke-docker-race` finds, except for one.

Most of these races are around the handshake phase for a hostinfo, so we add a RWLock to the hostinfo and Lock during each of the handshake stages.

Some of the other races are around consistently using `atomic` around the `messageCounter` field. To make this harder to mess up, I have renamed the field to `atomicMessageCounter` (I also removed the unnecessary extra pointer deference as we can just point directly to the struct field).

The last remaining data race is around reading `ConnectionInfo.ready`, which is a boolean that is only written to once when the handshake has finished. Due to it being in the hot path for packets and the rare case that this could actually be an issue, holding off on fixing that one for now.

here is the results of `make smoke-docker-race`:

before:

    lighthouse1: Found 2 data race(s)
    host2:       Found 36 data race(s)
    host3:       Found 17 data race(s)
    host4:       Found 31 data race(s)

after:

    host2: Found 1 data race(s)
    host4: Found 1 data race(s)

Fixes: #147
Fixes: #226
Fixes: #283
Fixes: #316
2021-03-05 21:18:33 -05:00
Nathan Brown
29c5f31f90 Add a check in the makefile to ensure a minimum version of go is installed (#383) 2021-03-02 13:29:05 -06:00
Nathan Brown
b6234abfb3 Add a way to trigger punch backs via lighthouse (#394) 2021-03-01 19:06:01 -06:00
Wade Simmons
2a4beb41b9 Routine-local conntrack cache (#391)
Previously, every packet we see gets a lock on the conntrack table and updates it. When running with multiple routines, this can cause heavy lock contention and limit our ability for the threads to run independently. This change caches reads from the conntrack table for a very short period of time to reduce this lock contention. This cache will currently default to disabled unless you are running with multiple routines, in which case the default cache delay will be 1 second. This means that entries in the conntrack table may be up to 1 second out of date and remain in a routine local cache for up to 1 second longer than the global table.

Instead of calling time.Now() for every packet, this cache system relies on a tick thread that updates the current cache "version" each tick. For every packet, we check whether the cache version is out of date and reset the cache if so.
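
A rough sketch of the version-ticked cache idea (types and names here are illustrative, not Nebula's actual implementation):

    package main

    import (
        "fmt"
        "sync/atomic"
        "time"
    )

    var cacheVersion atomic.Uint64 // bumped by a background tick thread

    func startCacheTicker(interval time.Duration) {
        go func() {
            for range time.Tick(interval) {
                cacheVersion.Add(1)
            }
        }()
    }

    // connCache is a per-routine read cache in front of the shared (locked)
    // conntrack table; a stale version means the whole local cache is reset.
    type connCache struct {
        entries map[uint64]bool
        version uint64
    }

    func (c *connCache) allowed(key uint64, slow func(uint64) bool) bool {
        if v := cacheVersion.Load(); v != c.version {
            c.entries = make(map[uint64]bool)
            c.version = v
        }
        if hit, ok := c.entries[key]; ok {
            return hit // served without touching the shared table's lock
        }
        res := slow(key) // fall back to the locked conntrack table
        c.entries[key] = res
        return res
    }

    func main() {
        startCacheTicker(time.Second)
        c := &connCache{entries: map[uint64]bool{}}
        fmt.Println(c.allowed(42, func(uint64) bool { return true }))
    }
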
2021-03-01 19:52:17 -05:00
Wade Simmons
d232ccbfab add metrics for the udp sockets using SO_MEMINFO (#390)
Retrieve the current socket stats using SO_MEMINFO and report them as
metrics gauges. If SO_MEMINFO isn't supported, we don't report these metrics.
2021-03-01 19:51:33 -05:00
Nathan Brown
ecfb40f29c Fix osx for mq changes, this does not implement mq on osx (#395) 2021-03-01 16:57:05 -05:00
Wade Simmons
1bae5b2550 more validation in pending hostmap deletes (#344)
We are currently seeing some cases where we are not deleting entries
correctly from the pending hostmap. I believe this is a case of
an inbound timer tick firing and deleting the Hosts map entry for
a newer handshake attempt than intended, thus leaving the old Indexes
entry orphaned. This change adds some extra checking when deleting from
the Indexes and Hosts maps to ensure we clean everything up correctly.
2021-03-01 12:40:46 -05:00
Wade Simmons
73081d99bc add make smoke-docker (#287)
This makes it easier to use the docker container smoke test that
GitHub actions runs. There is also `make smoke-docker-race` that runs the
smoke test with `-race` enabled.
2021-03-01 11:15:15 -05:00
Tim Rots
e7e6a23cde fix a few typos (#302) 2021-03-01 11:14:34 -05:00
Wade Simmons
a0583ebdca tun_disabled: reply to ICMP Echo Request (#342)
This change allows a server running with `tun.disabled: true` (usually
a lighthouse) to still reply to ICMP EchoRequest packets. This allows
you to "ping" the lighthouse Nebula IP as a quick check to make sure the
tunnel is up, even when running with tun.disabled.

This is still gated by allowing `icmp` packets in the inbound firewall
rules.
2021-03-01 11:09:41 -05:00
Wade Simmons
27d9a67dda Proper multiqueue support for tun devices (#382)
This change is for Linux only.

Previously, when running with multiple tun.routines, we would only have one file descriptor. This change instead sets IFF_MULTI_QUEUE and opens a file descriptor for each routine. This allows us to process with multiple threads while preventing out of order packet reception issues.

To attempt to distribute the flows across the queues, we try to write to the tun/UDP queue that corresponds with the one we read from. So if we read a packet from tun queue "2", we will write the outgoing encrypted packet to UDP queue "2". Because of the nature of how multi queue works with flows, a given host tunnel will be sticky to a given routine (so if you try to performance benchmark by only using one tunnel between two hosts, you are only going to be using a max of one thread for each direction).

Because this system works much better when we can correlate flows between the tun and udp routines, we are deprecating the undocumented "tun.routines" and "listen.routines" parameters and introducing a new "routines" parameter that sets the value for both. If you use the old undocumented parameters, the max of the values will be used and a warning logged.

Co-authored-by: Nate Brown <nbrown.us@gmail.com>
2021-02-25 15:01:14 -05:00
John Maguire
2bce222550 List possible cipher options in example config (#385) 2021-02-19 21:46:42 -06:00
Wade Simmons
3dd1108099 Go 1.16 and darwin-arm64 (#381)
This commit switches to Go 1.16 and adds a release binary for darwin-arm64.

Fixes: #343
2021-02-17 13:11:57 -05:00
Nathan Brown
d4b81f9b8d Add QR code support to nebula-cert (#297) 2021-02-11 18:53:25 -06:00
brad-defined
454bc8a6bb Check certificate banner during nebula-cert print (#373) 2021-02-05 14:52:32 -06:00
Wade Simmons
ce9ad37431 fix regression with LightHouseHandler and punchBack (#346)
The change introduced by #320 incorrectly re-uses the output buffer for
sending punchBack packets. Since we are currently spawning a new
goroutine for each send here, we need to allocate a new buffer each
time. We can come back and optimize this in the future, but for now we
should fix the regression.
2020-11-25 17:49:26 -05:00
Wade Simmons
ee7c27093c add HostMap.RemoteIndexes (#329)
This change adds an index based on HostInfo.remoteIndexId. This allows
us to use HostMap.QueryReverseIndex without having to loop over all
entries in the map (this can be a bottleneck under high traffic
lighthouses).

Without this patch, a high traffic lighthouse server receiving recv_error
packets and lots of handshakes, cpu pprof trace can look like this:

      flat  flat%   sum%        cum   cum%
    2000ms 32.26% 32.26%     3040ms 49.03%  github.com/slackhq/nebula.(*HostMap).QueryReverseIndex
     870ms 14.03% 46.29%     1060ms 17.10%  runtime.mapiternext

Which shows 50% of total cpu time is being spent in QueryReverseIndex.
2020-11-23 14:51:16 -05:00
Wade Simmons
2e7ca027a4 Lighthouse handler optimizations (#320)
We noticed that the number of memory allocations LightHouse.HandleRequest creates for each call can seriously impact performance for high traffic lighthouses. This PR introduces a benchmark in the first commit and then optimizes memory usage by creating a LightHouseHandler struct. This struct allows us to re-use memory between each lighthouse request (one instance per UDP listener go-routine).
2020-11-23 14:50:01 -05:00
mhp
672ce1f0a8 Move slice allocations in connection manager monitor loop (#340)
* Move slice allocations in connection manager monitor loop

* move further out

Co-authored-by: Miran Park <mpark@slack-corp.com>
2020-11-19 15:44:05 -08:00
Wade Simmons
384b1166ea fix panic in UnmarshalNebulaCertificate (#339)
This fixes a panic in UnmarshalNebulaCertificate when unmarshaling
a payload with Details set to nil.

Fixes: #332
2020-11-19 08:44:54 -05:00
Wade Simmons
0389596f66 don't mark handshake packets as "lost" (#331)
Packet 1 is always a stage 1 handshake and packet 2 is always stage 2.
Normal packets don't start flowing until the message counter is 3 or
higher.

Currently we only receive either packet 1 or 2 depending on if
we are the initiator or responder for the handshake, so we end up
marking one of these as "lost". We should mark these packets as "seen"
when we are the one sending them, since we don't expect to see them from
the other side.
2020-11-16 14:03:08 -05:00
Ryan Huber
43a3988afc i don't think this is used at all anymore (#323) 2020-10-29 21:43:50 -04:00
Brian Kelly
5c23676a0f Added line to systemd config template to start Nebula before sshd (#317)
During shutdown, this will keep Nebula alive until after sshd is finished. This cleanly terminates ssh clients accessing a server over a Nebula tunnel.
2020-10-29 21:43:02 -04:00
Nathan Brown
f6d0b4b893 Update README for supported platforms (#312) 2020-10-12 13:11:32 -05:00
Ryan Huber
0d6b55e495 Bring in the new version of kardianos/service and output logfiles on osx (#303)
* this brings in the new version of kardianos/service which properly
outputs logs from launchd services

* add go sum

* is it really this easy?

* Update CHANGELOG.md
2020-09-24 15:34:08 -07:00
Wade Simmons
c71c84882e v1.3.0 (#268)
Update the CHANGELOG for Nebula v1.3.0

Co-authored-by: forfuncsake <drussell@slack-corp.com>
2020-09-22 12:21:12 -04:00
Darren Hoo
0010db46e4 Fix a data race on message counter (#284)
3. ==================
WARNING: DATA RACE
Write at 0x00c00030e020 by goroutine 17:
  sync/atomic.AddInt64()
      runtime/race_amd64.s:276 +0xb
  github.com/slackhq/nebula.(*Interface).sendNoMetrics()
      github.com/slackhq/nebula/inside.go:226 +0x9c
  github.com/slackhq/nebula.(*Interface).send()
      github.com/slackhq/nebula/inside.go:214 +0x149
  github.com/slackhq/nebula.(*Interface).readOutsidePackets()
      github.com/slackhq/nebula/outside.go:94 +0x1213
  github.com/slackhq/nebula.(*udpConn).ListenOut()
      github.com/slackhq/nebula/udp_generic.go:109 +0x3b5
  github.com/slackhq/nebula.(*Interface).listenOut()
      github.com/slackhq/nebula/interface.go:147 +0x15e

Previous read at 0x00c00030e020 by goroutine 18:
  github.com/slackhq/nebula.(*Interface).consumeInsidePacket()
      github.com/slackhq/nebula/inside.go:58 +0x892
  github.com/slackhq/nebula.(*Interface).listenIn()
      github.com/slackhq/nebula/interface.go:164 +0x178
2020-09-21 21:41:46 -04:00
Nathan Brown
68e3e84fdc More like a library (#279) 2020-09-18 09:20:09 -05:00
Brian Luong
6238f1550b Handle panic when invalid IP entered in sshd (#296) 2020-09-18 10:10:25 -04:00
forfuncsake
50b04413c7 Block nebula ssh server from listening on port 22 (#266)
Port 22 is blocked as a safety mechanism. In a case where nebula is
started before sshd, a system may be rendered unreachable if nebula
is holding the system ssh port and there is no other connectivity.

This commit enforces the restriction, which could previously be worked
around by listening on an IPv6 address, e.g.  "[::]:22".
2020-09-15 09:57:32 -04:00
CzBiX
ef498a31da Add disable_timestamp option (#288) 2020-09-09 07:42:11 -04:00
forfuncsake
2e5a477a50 Align linux UDP performance optimizations with configuration (#275)
* Remove unused (*udpConn).Read method

* Align linux UDP performance optimizations with configuration

While attempting to run nebula on an older Synology NAS, it became
apparent that some of the performance optimizations effectively
block support for older kernels. The recvmmsg syscall was added in
linux kernel 2.6.33, and the Synology DS212j (among other models)
is pinned to 2.6.32.12.

Similarly, SO_REUSEPORT was added to the kernel in the 3.9 cycle.
While this option has been backported into some older trees, it
is also missing from the Synology kernel.

This commit allows nebula to be run on linux devices with older
kernels if the config options are set up with a single listener
and a UDP batch size of 1.
2020-08-13 08:24:05 +10:00
Wade Simmons
32fe9bfe75 Use Go 1.15 (#277)
Update all CI checks and release process to use the latest patch version
of go1.15.
2020-08-12 16:16:21 -04:00
forfuncsake
9b8b3c478b Support startup without a tun device (#269)
This commit adds support for Nebula to be started without creating
a tun device. A node started in this mode still has a full "control
plane", but no effective "data plane". Its use is suited to a
lighthouse that has no need to partake in the mesh VPN.

Consequently, creation of the tun device is the only reason nebula
needs to be started with elevated privileges, so this example
lighthouse can also be run as a non-root user.
2020-08-10 09:15:55 -04:00
Michael Hardy
7b3f23d9a1 Start nebula after the network is up (#270) 2020-08-07 11:33:48 -05:00
forfuncsake
25964b54f6 Use inclusive terminology for cert blocking (#272) 2020-08-06 11:17:47 +10:00
Wade Simmons
ac557f381b drop unroutable packets (#267)
Currently, if a packet arrives on the tun device with a destination that
is not a routable Nebula IP, `queryUnsafeRoute` converts that IP to
0.0.0.0 and we store that packet and try to look up that IP with the
lighthouse. This doesn't make any sense to do; if we get a packet that
is unroutable, we should just drop it.

Note, we have a few configurable options like `drop_local_broadcast`
and `drop_multicast` which do this for a few specific types, but since
no packets like this will send correctly I think we should just drop
anything that is unroutable.
2020-08-04 22:59:04 -04:00
Wade Simmons
a54f3fc681 fix fast handshake trigger for static hosts (#265)
We are currently triggering a fast handshake for static hosts right
inside HandshakeManager.AddVpnIP, but this can actually trigger before
we have generated the handshake packet to use. Instead, we should be
triggering right after we call ixHandshakeStage0 in getOrHandshake
(which generates the handshake packet)
2020-08-02 20:59:50 -04:00
Alan Lam
5545cff6ef log remote certificate fingerprint on handshakes (#262) 2020-07-31 18:54:51 -04:00
Wade Simmons
f3a6d8d990 Preserve conntrack table during firewall rules reload (SIGHUP) (#233)
Currently, we drop the conntrack table when firewall rules change during a SIGHUP reload. This means responses to inflight HTTP requests can be dropped, among other issues. This change copies the conntrack table over to the new firewall (it holds the conntrack mutex lock during this process, to be safe).

This change also records which firewall rules hash each conntrack entry used, so that we can re-verify the rules after the new firewall has been loaded.
2020-07-31 18:53:36 -04:00
forfuncsake
9b06748506 Make Interface.Inside an interface type (#252)
This commit updates the Interface.Inside type to be a new interface
type instead of a *Tun. This will allow for an inside interface
that does not use a tun device, such as a single-binary client that
can run without elevated privileges.
2020-07-28 08:53:16 -04:00
Wade Simmons
4756c9613d trigger handshakes when lighthouse reply arrives (#246)
Currently, we wait until the next timer tick to act on the lighthouse's
reply to our HostQuery. This means we can easily add hundreds of
milliseconds of unnecessary delay to the handshake. To fix this, we
can introduce a channel to trigger an outbound handshake without waiting
for the next timer tick.
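
A minimal sketch of that trigger pattern (names are illustrative): the lighthouse reply handler nudges the handshake loop over a buffered channel so it can act immediately instead of waiting for the tick.

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        trigger := make(chan string, 64)
        tick := time.NewTicker(500 * time.Millisecond)
        defer tick.Stop()

        go func() {
            // Pretend a lighthouse reply for this VPN IP just arrived.
            select {
            case trigger <- "10.1.0.5": // non-blocking if the buffer is full
            default:
            }
        }()

        for i := 0; i < 2; i++ {
            select {
            case vpnIP := <-trigger:
                fmt.Println("handshake immediately for", vpnIP)
            case <-tick.C:
                fmt.Println("periodic handshake pass")
            }
        }
    }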

A few samples of cold ping time between two hosts that require a
lighthouse lookup:

    before (v1.2.0):

    time=156 ms
    time=252 ms
    time=12.6 ms
    time=301 ms
    time=352 ms
    time=49.4 ms
    time=150 ms
    time=13.5 ms
    time=8.24 ms
    time=161 ms
    time=355 ms

    after:

    time=3.53 ms
    time=3.14 ms
    time=3.08 ms
    time=3.92 ms
    time=7.78 ms
    time=3.59 ms
    time=3.07 ms
    time=3.22 ms
    time=3.12 ms
    time=3.08 ms
    time=8.04 ms

I recommend reviewing this PR by looking at each commit individually, as
some refactoring was required that makes the diff a bit confusing when
combined together.
2020-07-22 10:35:10 -04:00
Nathan Brown
4645e6034b Fix up the tun for android (#249) 2020-07-01 10:20:52 -05:00
Wade Simmons
aba42f9fa6 enforce the use of goimports (#248)
* enforce the use of goimports

Instead of enforcing `gofmt`, enforce `goimports`, which also asserts
a separate section for non-builtin packages.

* run `goimports` everywhere

* exclude generated .pb.go files
2020-06-30 18:53:30 -04:00
Nathan Brown
41578ca971 Be more like a library to support mobile (#247) 2020-06-30 13:48:58 -05:00
Wade Simmons
1ea8847085 linux: set advmss correctly when route MTU is used (#245)
If different MTUs are specified for different routes, we should set
advmss on each route because Linux does a poor job of selecting the
default (from ip-route(8)):

    advmss NUMBER (Linux 2.3.15+ only)
           the MSS ('Maximal Segment Size') to advertise to these destinations when
           establishing TCP connections. If it is not given, Linux uses a default value
           calculated from the first hop device MTU. (If the path to these destination
           is asymmetric, this guess may be wrong.)

Note that the default value is calculated from the first hop *device
MTU*, not the *route MTU*. In practice this is usually OK as long as the
other side of the tunnel has the MTU configured exactly the same, but we
should probably just set advmss correctly on these routes.
2020-06-26 13:47:21 -04:00
Wade Simmons
55858c64cc smoke test: test firewall inbound / outbound (#240)
Test that basic inbound / outbound firewall rules work during the smoke
test. This change sets an inbound firewall rule on host3 and adds a new
host4 with outbound firewall rules. It also tests that conntrack allows
packets once the connection has been established.
2020-06-26 13:46:51 -04:00
Wade Simmons
e94c6b0125 mips-softfloat (#231)
This makes the GOARM handling more generic and handles GOMIPS in a
similar way to support mips-softfloat. We also set `-ldflags "-s -w"` for
mips-softfloat to give the best chance of the binary working on these
small devices.
2020-06-26 13:46:23 -04:00
Wade Simmons
b37a91cfbc add meta packet statistics (#230)
This change adds more metrics around "meta" packets (i.e. non-"message" type packets).
For lighthouse packets, we also record statistics around the specific
lighthouse meta type.

We don't keep statistics for the "message" type so that we don't slow
down the fast path (and you can just look at metrics on the tun
interface to find that information).
2020-06-26 13:45:48 -04:00
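For illustration, counters of roughly this shape (using the rcrowley/go-metrics package nebula's stats are built on; the metric names are made up) record meta and lighthouse message-type counts while leaving the data-message fast path untouched:

```go
package main

import (
	"fmt"

	"github.com/rcrowley/go-metrics"
)

func main() {
	// Hypothetical metric names; the real names used by nebula differ.
	metaRx := metrics.GetOrRegisterCounter("messages.rx.meta", nil)
	lhHostQuery := metrics.GetOrRegisterCounter("lighthouse.rx.host_query", nil)

	// Incremented only on the meta/lighthouse paths, never on the "message"
	// (data) path, so the fast path pays no extra cost.
	metaRx.Inc(1)
	lhHostQuery.Inc(1)

	fmt.Println("meta packets:", metaRx.Count(), "host queries:", lhHostQuery.Count())
}
```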
David Sonder
3212b769d4 fix typo in conntrack section in examples/config.yml (#236)
the rest of the conntrack values match the default
2020-06-26 11:08:22 -05:00
Patrick Bogen
ecf0e5a9f6 drop packets even if we aren't going to emit Debug logs about it (#239)
* drop packets even if we aren't going to emit Debug logs about it

* smallify change
2020-06-10 16:55:49 -05:00
Wade Simmons
ff13aba8fc allow go test -bench=. to run (#234)
This benchmark had an Errorf at the end; let's remove it so the
benchmarks all run.
2020-05-27 16:52:34 -04:00
Mateusz Kwiatkowski
cc03ff9e9a Unbreak building for FreeBSD (#103)
Add support for FreeBSD. You have to set `tun.dev` in your config. The second pass of this would be to remove the exec calls and use ioctl(2) and route(4) instead, but we can do that in a follow-up PR.

Co-authored-by: Wade Simmons <wade@wades.im>
2020-05-26 22:23:23 -04:00
Patrick Bogen
363c836422 log the reason for fw drops (#220)
* log the reason for fw drops

* only prepare log if we will end up sending it
2020-04-10 10:57:21 -07:00
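A hedged sketch of the second point using logrus, which nebula logs with (the field names and helper here are illustrative): the drop is unconditional, but the log entry is only assembled if Debug is actually enabled.

```go
package main

import "github.com/sirupsen/logrus"

// dropPacket discards a packet and, only when debug logging is on, explains
// why. Gating on the level avoids paying for field allocation on every drop.
func dropPacket(l *logrus.Logger, srcIP, dstIP, reason string) {
	if l.IsLevelEnabled(logrus.DebugLevel) {
		l.WithFields(logrus.Fields{
			"src":    srcIP,
			"dst":    dstIP,
			"reason": reason, // e.g. "no matching inbound rule"
		}).Debug("dropping packet")
	}
	// ...the packet is discarded here either way...
}

func main() {
	l := logrus.New()
	l.SetLevel(logrus.InfoLevel) // debug disabled: no log entry is built
	dropPacket(l, "192.168.100.2", "192.168.100.3", "no matching inbound rule")
}
```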
248 changed files with 35608 additions and 10265 deletions

69
.github/ISSUE_TEMPLATE/bug-report.yml vendored Normal file

@@ -0,0 +1,69 @@
name: "\U0001F41B Bug Report"
description: Report an issue or possible bug
title: "\U0001F41B BUG:"
labels: []
assignees: []
body:
- type: markdown
attributes:
value: |
### Thank you for taking the time to file a bug report!
Please fill out this form as completely as possible.
- type: input
id: version
attributes:
label: What version of `nebula` are you using? (`nebula -version`)
placeholder: 0.0.0
validations:
required: true
- type: input
id: os
attributes:
label: What operating system are you using?
description: iOS and Android specific issues belong in the [mobile_nebula](https://github.com/DefinedNet/mobile_nebula) repo.
placeholder: Linux, Mac, Windows
validations:
required: true
- type: textarea
id: description
attributes:
label: Describe the Bug
description: A clear and concise description of what the bug is.
validations:
required: true
- type: textarea
id: logs
attributes:
label: Logs from affected hosts
description: |
Please provide logs from ALL affected hosts during the time of the issue. If you do not provide logs we will be unable to assist you!
[Learn how to find Nebula logs here.](https://nebula.defined.net/docs/guides/viewing-nebula-logs/)
Improve formatting by using <code>```</code> at the beginning and end of each log block.
value: |
```
```
validations:
required: true
- type: textarea
id: configs
attributes:
label: Config files from affected hosts
description: |
Provide config files for all affected hosts.
Improve formatting by using <code>```</code> at the beginning and end of each config file.
value: |
```
```
validations:
required: true

21
.github/ISSUE_TEMPLATE/config.yml vendored Normal file

@@ -0,0 +1,21 @@
blank_issues_enabled: true
contact_links:
- name: 💨 Performance Issues
url: https://github.com/slackhq/nebula/discussions/new/choose
about: 'We ask that you create a discussion instead of an issue for performance-related questions. This allows us to have a more open conversation about the issue and helps us to better understand the problem.'
- name: 📄 Documentation Issues
url: https://github.com/definednet/nebula-docs
about: "If you've found an issue with the website documentation, please file it in the nebula-docs repository."
- name: 📱 Mobile Nebula Issues
url: https://github.com/definednet/mobile_nebula
about: "If you're using the mobile Nebula app and have found an issue, please file it in the mobile_nebula repository."
- name: 📘 Documentation
url: https://nebula.defined.net/docs/
about: 'The documentation is the best place to start if you are new to Nebula.'
- name: 💁 Support/Chat
url: https://join.slack.com/t/nebulaoss/shared_invite/zt-39pk4xopc-CUKlGcb5Z39dQ0cK1v7ehA
about: 'For faster support, join us on Slack for assistance!'

22
.github/dependabot.yml vendored Normal file

@@ -0,0 +1,22 @@
version: 2
updates:
- package-ecosystem: "github-actions"
directory: "/"
schedule:
interval: "weekly"
- package-ecosystem: "gomod"
directory: "/"
schedule:
interval: "weekly"
groups:
golang-x-dependencies:
patterns:
- "golang.org/x/*"
zx2c4-dependencies:
patterns:
- "golang.zx2c4.com/*"
protobuf-dependencies:
patterns:
- "github.com/golang/protobuf"
- "google.golang.org/protobuf"

11
.github/pull_request_template.md vendored Normal file

@@ -0,0 +1,11 @@
<!--
Thank you for taking the time to submit a pull request!
Please be sure to provide a clear description of what you're trying to achieve with the change.
- If you're submitting a new feature, please explain how to use it and document any new config options in the example config.
- If you're submitting a bugfix, please link the related issue or describe the circumstances surrounding the issue.
- If you're changing a default, explain why you believe the new default is appropriate for most users.
P.S. If you're only updating the README or other docs, please file a pull request here instead: https://github.com/DefinedNet/nebula-docs
-->


@@ -14,19 +14,21 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Set up Go 1.14
uses: actions/setup-go@v1
with:
go-version: 1.14
id: go
- uses: actions/checkout@v4
- name: Check out code into the Go module directory
uses: actions/checkout@v1
- uses: actions/setup-go@v5
with:
go-version: '1.24'
check-latest: true
- name: Install goimports
run: |
go install golang.org/x/tools/cmd/goimports@latest
- name: gofmt
run: |
if [ "$(find . -iname '*.go' | xargs gofmt -l)" ]
if [ "$(find . -iname '*.go' | grep -v '\.pb\.go$' | xargs goimports -l)" ]
then
find . -iname '*.go' | xargs gofmt -d
find . -iname '*.go' | grep -v '\.pb\.go$' | xargs goimports -d
exit 1
fi


@@ -7,274 +7,212 @@ name: Create release and upload binaries
jobs:
build-linux:
name: Build Linux All
name: Build Linux/BSD All
runs-on: ubuntu-latest
steps:
- name: Set up Go 1.14
uses: actions/setup-go@v1
with:
go-version: 1.14
- uses: actions/checkout@v4
- name: Checkout code
uses: actions/checkout@v2
- uses: actions/setup-go@v5
with:
go-version: '1.24'
check-latest: true
- name: Build
run: |
make BUILD_NUMBER="${GITHUB_REF#refs/tags/v}" release-linux
make BUILD_NUMBER="${GITHUB_REF#refs/tags/v}" release-linux release-freebsd release-openbsd release-netbsd
mkdir release
mv build/*.tar.gz release
- name: Upload artifacts
uses: actions/upload-artifact@v1
uses: actions/upload-artifact@v4
with:
name: linux-latest
path: release
build-windows:
name: Build Windows amd64
name: Build Windows
runs-on: windows-latest
steps:
- name: Set up Go 1.14
uses: actions/setup-go@v1
with:
go-version: 1.14
- uses: actions/checkout@v4
- name: Checkout code
uses: actions/checkout@v2
- uses: actions/setup-go@v5
with:
go-version: '1.24'
check-latest: true
- name: Build
run: |
echo $Env:GITHUB_REF.Substring(11)
go build -trimpath -ldflags "-X main.Build=$($Env:GITHUB_REF.Substring(11))" -o build\nebula.exe ./cmd/nebula-service
go build -trimpath -ldflags "-X main.Build=$($Env:GITHUB_REF.Substring(11))" -o build\nebula-cert.exe ./cmd/nebula-cert
mkdir build\windows-amd64
$Env:GOARCH = "amd64"
go build -trimpath -ldflags "-X main.Build=$($Env:GITHUB_REF.Substring(11))" -o build\windows-amd64\nebula.exe ./cmd/nebula-service
go build -trimpath -ldflags "-X main.Build=$($Env:GITHUB_REF.Substring(11))" -o build\windows-amd64\nebula-cert.exe ./cmd/nebula-cert
mkdir build\windows-arm64
$Env:GOARCH = "arm64"
go build -trimpath -ldflags "-X main.Build=$($Env:GITHUB_REF.Substring(11))" -o build\windows-arm64\nebula.exe ./cmd/nebula-service
go build -trimpath -ldflags "-X main.Build=$($Env:GITHUB_REF.Substring(11))" -o build\windows-arm64\nebula-cert.exe ./cmd/nebula-cert
mkdir build\dist\windows
mv dist\windows\wintun build\dist\windows\
- name: Upload artifacts
uses: actions/upload-artifact@v1
uses: actions/upload-artifact@v4
with:
name: windows-latest
path: build
build-darwin:
name: Build Darwin amd64
runs-on: macOS-latest
name: Build Universal Darwin
env:
HAS_SIGNING_CREDS: ${{ secrets.AC_USERNAME != '' }}
runs-on: macos-latest
steps:
- name: Set up Go 1.14
uses: actions/setup-go@v1
- uses: actions/checkout@v4
- uses: actions/setup-go@v5
with:
go-version: 1.14
go-version: '1.24'
check-latest: true
- name: Checkout code
uses: actions/checkout@v2
- name: Import certificates
if: env.HAS_SIGNING_CREDS == 'true'
uses: Apple-Actions/import-codesign-certs@v5
with:
p12-file-base64: ${{ secrets.APPLE_DEVELOPER_CERTIFICATE_P12_BASE64 }}
p12-password: ${{ secrets.APPLE_DEVELOPER_CERTIFICATE_PASSWORD }}
- name: Build
- name: Build, sign, and notarize
env:
AC_USERNAME: ${{ secrets.AC_USERNAME }}
AC_PASSWORD: ${{ secrets.AC_PASSWORD }}
run: |
make BUILD_NUMBER="${GITHUB_REF#refs/tags/v}" service build/nebula-darwin-amd64.tar.gz
rm -rf release
mkdir release
mv build/*.tar.gz release
make BUILD_NUMBER="${GITHUB_REF#refs/tags/v}" service build/darwin-amd64/nebula build/darwin-amd64/nebula-cert
make BUILD_NUMBER="${GITHUB_REF#refs/tags/v}" service build/darwin-arm64/nebula build/darwin-arm64/nebula-cert
lipo -create -output ./release/nebula ./build/darwin-amd64/nebula ./build/darwin-arm64/nebula
lipo -create -output ./release/nebula-cert ./build/darwin-amd64/nebula-cert ./build/darwin-arm64/nebula-cert
if [ -n "$AC_USERNAME" ]; then
codesign -s "10BC1FDDEB6CE753550156C0669109FAC49E4D1E" -f -v --timestamp --options=runtime -i "net.defined.nebula" ./release/nebula
codesign -s "10BC1FDDEB6CE753550156C0669109FAC49E4D1E" -f -v --timestamp --options=runtime -i "net.defined.nebula-cert" ./release/nebula-cert
fi
zip -j release/nebula-darwin.zip release/nebula-cert release/nebula
if [ -n "$AC_USERNAME" ]; then
xcrun notarytool submit ./release/nebula-darwin.zip --team-id "576H3XS7FP" --apple-id "$AC_USERNAME" --password "$AC_PASSWORD" --wait
fi
- name: Upload artifacts
uses: actions/upload-artifact@v1
uses: actions/upload-artifact@v4
with:
name: darwin-latest
path: release
path: ./release/*
build-docker:
name: Create and Upload Docker Images
# Technically we only need build-linux to succeed, but if any platforms fail we'll
# want to investigate and restart the build
needs: [build-linux, build-darwin, build-windows]
runs-on: ubuntu-latest
env:
HAS_DOCKER_CREDS: ${{ vars.DOCKERHUB_USERNAME != '' && secrets.DOCKERHUB_TOKEN != '' }}
# XXX It's not possible to write a conditional here, so instead we do it on every step
#if: ${{ env.HAS_DOCKER_CREDS == 'true' }}
steps:
# Be sure to checkout the code before downloading artifacts, or they will
# be overwritten
- name: Checkout code
if: ${{ env.HAS_DOCKER_CREDS == 'true' }}
uses: actions/checkout@v4
- name: Download artifacts
if: ${{ env.HAS_DOCKER_CREDS == 'true' }}
uses: actions/download-artifact@v4
with:
name: linux-latest
path: artifacts
- name: Login to Docker Hub
if: ${{ env.HAS_DOCKER_CREDS == 'true' }}
uses: docker/login-action@v3
with:
username: ${{ vars.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Set up Docker Buildx
if: ${{ env.HAS_DOCKER_CREDS == 'true' }}
uses: docker/setup-buildx-action@v3
- name: Build and push images
if: ${{ env.HAS_DOCKER_CREDS == 'true' }}
env:
DOCKER_IMAGE_REPO: ${{ vars.DOCKER_IMAGE_REPO || 'nebulaoss/nebula' }}
DOCKER_IMAGE_TAG: ${{ vars.DOCKER_IMAGE_TAG || 'latest' }}
run: |
mkdir -p build/linux-{amd64,arm64}
tar -zxvf artifacts/nebula-linux-amd64.tar.gz -C build/linux-amd64/
tar -zxvf artifacts/nebula-linux-arm64.tar.gz -C build/linux-arm64/
docker buildx build . --push -f docker/Dockerfile --platform linux/amd64,linux/arm64 --tag "${DOCKER_IMAGE_REPO}:${DOCKER_IMAGE_TAG}" --tag "${DOCKER_IMAGE_REPO}:${GITHUB_REF#refs/tags/v}"
release:
name: Create and Upload Release
needs: [build-linux, build-darwin, build-windows]
runs-on: ubuntu-latest
steps:
- name: Download Linux artifacts
uses: actions/download-artifact@v1
with:
name: linux-latest
- uses: actions/checkout@v4
- name: Download Darwin artifacts
uses: actions/download-artifact@v1
- name: Download artifacts
uses: actions/download-artifact@v4
with:
name: darwin-latest
- name: Download Windows artifacts
uses: actions/download-artifact@v1
with:
name: windows-latest
path: artifacts
- name: Zip Windows
run: |
cd windows-latest
zip nebula-windows-amd64.zip nebula.exe nebula-cert.exe
cd artifacts/windows-latest
cp windows-amd64/* .
zip -r nebula-windows-amd64.zip nebula.exe nebula-cert.exe dist
cp windows-arm64/* .
zip -r nebula-windows-arm64.zip nebula.exe nebula-cert.exe dist
- name: Create sha256sum
run: |
cd artifacts
for dir in linux-latest darwin-latest windows-latest
do
(
cd $dir
if [ "$dir" = windows-latest ]
then
sha256sum <nebula.exe | sed 's=-$=nebula-windows-amd64.zip/nebula.exe='
sha256sum <nebula-cert.exe | sed 's=-$=nebula-windows-amd64.zip/nebula-cert.exe='
sha256sum <windows-amd64/nebula.exe | sed 's=-$=nebula-windows-amd64.zip/nebula.exe='
sha256sum <windows-amd64/nebula-cert.exe | sed 's=-$=nebula-windows-amd64.zip/nebula-cert.exe='
sha256sum <windows-arm64/nebula.exe | sed 's=-$=nebula-windows-arm64.zip/nebula.exe='
sha256sum <windows-arm64/nebula-cert.exe | sed 's=-$=nebula-windows-arm64.zip/nebula-cert.exe='
sha256sum nebula-windows-amd64.zip
sha256sum nebula-windows-arm64.zip
elif [ "$dir" = darwin-latest ]
then
sha256sum <nebula-darwin.zip | sed 's=-$=nebula-darwin.zip='
sha256sum <nebula | sed 's=-$=nebula-darwin.zip/nebula='
sha256sum <nebula-cert | sed 's=-$=nebula-darwin.zip/nebula-cert='
else
for v in *.tar.gz
do
sha256sum $v
tar zxf $v --to-command='sh -c "sha256sum | sed s=-$='$v'/$TAR_FILENAME="'
done
for v in *.tar.gz
do
sha256sum $v
tar zxf $v --to-command='sh -c "sha256sum | sed s=-$='$v'/$TAR_FILENAME="'
done
fi
)
done | sort -k 2 >SHASUM256.txt
- name: Create Release
id: create_release
uses: actions/create-release@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
tag_name: ${{ github.ref }}
release_name: Release ${{ github.ref }}
draft: false
prerelease: false
##
## Upload assets (I wish we could just upload the whole folder at once...
##
- name: Upload SHASUM256.txt
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./SHASUM256.txt
asset_name: SHASUM256.txt
asset_content_type: text/plain
- name: Upload darwin-amd64
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./darwin-latest/nebula-darwin-amd64.tar.gz
asset_name: nebula-darwin-amd64.tar.gz
asset_content_type: application/gzip
- name: Upload windows-amd64
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./windows-latest/nebula-windows-amd64.zip
asset_name: nebula-windows-amd64.zip
asset_content_type: application/zip
- name: Upload linux-amd64
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./linux-latest/nebula-linux-amd64.tar.gz
asset_name: nebula-linux-amd64.tar.gz
asset_content_type: application/gzip
- name: Upload linux-386
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./linux-latest/nebula-linux-386.tar.gz
asset_name: nebula-linux-386.tar.gz
asset_content_type: application/gzip
- name: Upload linux-ppc64le
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./linux-latest/nebula-linux-ppc64le.tar.gz
asset_name: nebula-linux-ppc64le.tar.gz
asset_content_type: application/gzip
- name: Upload linux-arm-5
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./linux-latest/nebula-linux-arm-5.tar.gz
asset_name: nebula-linux-arm-5.tar.gz
asset_content_type: application/gzip
- name: Upload linux-arm-6
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./linux-latest/nebula-linux-arm-6.tar.gz
asset_name: nebula-linux-arm-6.tar.gz
asset_content_type: application/gzip
- name: Upload linux-arm-7
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./linux-latest/nebula-linux-arm-7.tar.gz
asset_name: nebula-linux-arm-7.tar.gz
asset_content_type: application/gzip
- name: Upload linux-arm64
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./linux-latest/nebula-linux-arm64.tar.gz
asset_name: nebula-linux-arm64.tar.gz
asset_content_type: application/gzip
- name: Upload linux-mips
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./linux-latest/nebula-linux-mips.tar.gz
asset_name: nebula-linux-mips.tar.gz
asset_content_type: application/gzip
- name: Upload linux-mipsle
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./linux-latest/nebula-linux-mipsle.tar.gz
asset_name: nebula-linux-mipsle.tar.gz
asset_content_type: application/gzip
- name: Upload linux-mips64
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./linux-latest/nebula-linux-mips64.tar.gz
asset_name: nebula-linux-mips64.tar.gz
asset_content_type: application/gzip
- name: Upload linux-mips64le
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./linux-latest/nebula-linux-mips64le.tar.gz
asset_name: nebula-linux-mips64le.tar.gz
asset_content_type: application/gzip
run: |
cd artifacts
gh release create \
--verify-tag \
--title "Release ${{ github.ref_name }}" \
"${{ github.ref_name }}" \
SHASUM256.txt *-latest/*.zip *-latest/*.tar.gz

51
.github/workflows/smoke-extra.yml vendored Normal file

@@ -0,0 +1,51 @@
name: smoke-extra
on:
push:
branches:
- master
pull_request:
types: [opened, synchronize, labeled, reopened]
paths:
- '.github/workflows/smoke**'
- '**Makefile'
- '**.go'
- '**.proto'
- 'go.mod'
- 'go.sum'
jobs:
smoke-extra:
if: github.ref == 'refs/heads/master' || contains(github.event.pull_request.labels.*.name, 'smoke-test-extra')
name: Run extra smoke tests
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-go@v5
with:
go-version-file: 'go.mod'
check-latest: true
- name: add hashicorp source
run: wget -O- https://apt.releases.hashicorp.com/gpg | gpg --dearmor | sudo tee /usr/share/keyrings/hashicorp-archive-keyring.gpg && echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
- name: install vagrant
run: sudo apt-get update && sudo apt-get install -y vagrant virtualbox
- name: freebsd-amd64
run: make smoke-vagrant/freebsd-amd64
- name: openbsd-amd64
run: make smoke-vagrant/openbsd-amd64
- name: netbsd-amd64
run: make smoke-vagrant/netbsd-amd64
- name: linux-386
run: make smoke-vagrant/linux-386
- name: linux-amd64-ipv6disable
run: make smoke-vagrant/linux-amd64-ipv6disable
timeout-minutes: 30


@@ -14,28 +14,19 @@ on:
jobs:
smoke:
name: Run 3 node smoke test
name: Run multi node smoke test
runs-on: ubuntu-latest
steps:
- name: Set up Go 1.14
uses: actions/setup-go@v1
with:
go-version: 1.14
id: go
- uses: actions/checkout@v4
- name: Check out code into the Go module directory
uses: actions/checkout@v1
- uses: actions/cache@v1
- uses: actions/setup-go@v5
with:
path: ~/go/pkg/mod
key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-
go-version: '1.24'
check-latest: true
- name: build
run: make
run: make bin-docker CGO_ENABLED=1 BUILD_ARGS=-race
- name: setup docker image
working-directory: ./.github/workflows/smoke
@@ -45,4 +36,28 @@ jobs:
working-directory: ./.github/workflows/smoke
run: ./smoke.sh
- name: setup relay docker image
working-directory: ./.github/workflows/smoke
run: ./build-relay.sh
- name: run smoke relay
working-directory: ./.github/workflows/smoke
run: ./smoke-relay.sh
- name: setup docker image for P256
working-directory: ./.github/workflows/smoke
run: NAME="smoke-p256" CURVE=P256 ./build.sh
- name: run smoke-p256
working-directory: ./.github/workflows/smoke
run: NAME="smoke-p256" ./smoke.sh
- name: setup docker image for fips140
working-directory: ./.github/workflows/smoke
run: NAME="smoke-fips140" CURVE=P256 GOFIPS140=v1.0.0 LDFLAGS=-checklinkname=0 ./build.sh
- name: run smoke-fips140
working-directory: ./.github/workflows/smoke
run: NAME="smoke-fips140" ./smoke.sh
timeout-minutes: 10


@@ -1,5 +1,9 @@
FROM debian:buster
FROM ubuntu:jammy
ADD ./build /
RUN apt-get update && apt-get install -y iputils-ping ncat tcpdump
ENTRYPOINT ["/nebula"]
ADD ./build /nebula
WORKDIR /nebula
ENTRYPOINT ["/nebula/nebula"]

44
.github/workflows/smoke/build-relay.sh vendored Executable file

@@ -0,0 +1,44 @@
#!/bin/sh
set -e -x
rm -rf ./build
mkdir ./build
(
cd build
cp ../../../../build/linux-amd64/nebula .
cp ../../../../build/linux-amd64/nebula-cert .
HOST="lighthouse1" AM_LIGHTHOUSE=true ../genconfig.sh >lighthouse1.yml <<EOF
relay:
am_relay: true
EOF
export LIGHTHOUSES="192.168.100.1 172.17.0.2:4242"
export REMOTE_ALLOW_LIST='{"172.17.0.4/32": false, "172.17.0.5/32": false}'
HOST="host2" ../genconfig.sh >host2.yml <<EOF
relay:
relays:
- 192.168.100.1
EOF
export REMOTE_ALLOW_LIST='{"172.17.0.3/32": false}'
HOST="host3" ../genconfig.sh >host3.yml
HOST="host4" ../genconfig.sh >host4.yml <<EOF
relay:
use_relays: false
EOF
../../../../nebula-cert ca -name "Smoke Test"
../../../../nebula-cert sign -name "lighthouse1" -groups "lighthouse,lighthouse1" -ip "192.168.100.1/24"
../../../../nebula-cert sign -name "host2" -groups "host,host2" -ip "192.168.100.2/24"
../../../../nebula-cert sign -name "host3" -groups "host,host3" -ip "192.168.100.3/24"
../../../../nebula-cert sign -name "host4" -groups "host,host4" -ip "192.168.100.4/24"
)
docker build -t nebula:smoke-relay .


@@ -5,20 +5,44 @@ set -e -x
rm -rf ./build
mkdir ./build
# TODO: Assumes your docker bridge network is a /24, and the first container that launches will be .1
# - We could make this better by launching the lighthouse first and then fetching what IP it is.
NET="$(docker network inspect bridge -f '{{ range .IPAM.Config }}{{ .Subnet }}{{ end }}' | cut -d. -f1-3)"
(
cd build
cp ../../../../nebula .
cp ../../../../nebula-cert .
cp ../../../../build/linux-amd64/nebula .
cp ../../../../build/linux-amd64/nebula-cert .
HOST="lighthouse1" AM_LIGHTHOUSE=true ../genconfig.sh >lighthouse1.yml
HOST="host2" LIGHTHOUSES="192.168.100.1 172.17.0.2:4242" ../genconfig.sh >host2.yml
HOST="host3" LIGHTHOUSES="192.168.100.1 172.17.0.2:4242" ../genconfig.sh >host3.yml
if [ "$1" ]
then
cp "../../../../build/$1/nebula" "$1-nebula"
fi
./nebula-cert ca -name "Smoke Test"
./nebula-cert sign -name "lighthouse1" -ip "192.168.100.1/24"
./nebula-cert sign -name "host2" -ip "192.168.100.2/24"
./nebula-cert sign -name "host3" -ip "192.168.100.3/24"
HOST="lighthouse1" \
AM_LIGHTHOUSE=true \
../genconfig.sh >lighthouse1.yml
HOST="host2" \
LIGHTHOUSES="192.168.100.1 $NET.2:4242" \
../genconfig.sh >host2.yml
HOST="host3" \
LIGHTHOUSES="192.168.100.1 $NET.2:4242" \
INBOUND='[{"port": "any", "proto": "icmp", "group": "lighthouse"}]' \
../genconfig.sh >host3.yml
HOST="host4" \
LIGHTHOUSES="192.168.100.1 $NET.2:4242" \
OUTBOUND='[{"port": "any", "proto": "icmp", "group": "lighthouse"}]' \
../genconfig.sh >host4.yml
../../../../nebula-cert ca -curve "${CURVE:-25519}" -name "Smoke Test"
../../../../nebula-cert sign -name "lighthouse1" -groups "lighthouse,lighthouse1" -ip "192.168.100.1/24"
../../../../nebula-cert sign -name "host2" -groups "host,host2" -ip "192.168.100.2/24"
../../../../nebula-cert sign -name "host3" -groups "host,host3" -ip "192.168.100.3/24"
../../../../nebula-cert sign -name "host4" -groups "host,host4" -ip "192.168.100.4/24"
)
docker build -t nebula:smoke .
docker build -t "nebula:${NAME:-smoke}" .


@@ -2,6 +2,7 @@
set -e
FIREWALL_ALL='[{"port": "any", "proto": "any", "host": "any"}]'
if [ "$STATIC_HOSTS" ] || [ "$LIGHTHOUSES" ]
then
@@ -32,29 +33,27 @@ lighthouse_hosts() {
cat <<EOF
pki:
ca: /ca.crt
cert: /${HOST}.crt
key: /${HOST}.key
ca: ca.crt
cert: ${HOST}.crt
key: ${HOST}.key
lighthouse:
am_lighthouse: ${AM_LIGHTHOUSE:-false}
hosts: $(lighthouse_hosts)
remote_allow_list: ${REMOTE_ALLOW_LIST}
listen:
host: 0.0.0.0
port: ${LISTEN_PORT:-4242}
tun:
dev: ${TUN_DEV:-nebula1}
dev: ${TUN_DEV:-tun0}
firewall:
outbound:
- port: any
proto: any
host: any
inbound_action: reject
outbound_action: reject
outbound: ${OUTBOUND:-$FIREWALL_ALL}
inbound: ${INBOUND:-$FIREWALL_ALL}
inbound:
- port: any
proto: any
host: any
$(test -t 0 || cat)
EOF

85
.github/workflows/smoke/smoke-relay.sh vendored Executable file

@@ -0,0 +1,85 @@
#!/bin/bash
set -e -x
set -o pipefail
mkdir -p logs
cleanup() {
echo
echo " *** cleanup"
echo
set +e
if [ "$(jobs -r)" ]
then
docker kill lighthouse1 host2 host3 host4
fi
}
trap cleanup EXIT
docker run --name lighthouse1 --rm nebula:smoke-relay -config lighthouse1.yml -test
docker run --name host2 --rm nebula:smoke-relay -config host2.yml -test
docker run --name host3 --rm nebula:smoke-relay -config host3.yml -test
docker run --name host4 --rm nebula:smoke-relay -config host4.yml -test
docker run --name lighthouse1 --device /dev/net/tun:/dev/net/tun --cap-add NET_ADMIN --rm nebula:smoke-relay -config lighthouse1.yml 2>&1 | tee logs/lighthouse1 | sed -u 's/^/ [lighthouse1] /' &
sleep 1
docker run --name host2 --device /dev/net/tun:/dev/net/tun --cap-add NET_ADMIN --rm nebula:smoke-relay -config host2.yml 2>&1 | tee logs/host2 | sed -u 's/^/ [host2] /' &
sleep 1
docker run --name host3 --device /dev/net/tun:/dev/net/tun --cap-add NET_ADMIN --rm nebula:smoke-relay -config host3.yml 2>&1 | tee logs/host3 | sed -u 's/^/ [host3] /' &
sleep 1
docker run --name host4 --device /dev/net/tun:/dev/net/tun --cap-add NET_ADMIN --rm nebula:smoke-relay -config host4.yml 2>&1 | tee logs/host4 | sed -u 's/^/ [host4] /' &
sleep 1
set +x
echo
echo " *** Testing ping from lighthouse1"
echo
set -x
docker exec lighthouse1 ping -c1 192.168.100.2
docker exec lighthouse1 ping -c1 192.168.100.3
docker exec lighthouse1 ping -c1 192.168.100.4
set +x
echo
echo " *** Testing ping from host2"
echo
set -x
docker exec host2 ping -c1 192.168.100.1
# Should fail because no relay configured in this direction
! docker exec host2 ping -c1 192.168.100.3 -w5 || exit 1
! docker exec host2 ping -c1 192.168.100.4 -w5 || exit 1
set +x
echo
echo " *** Testing ping from host3"
echo
set -x
docker exec host3 ping -c1 192.168.100.1
docker exec host3 ping -c1 192.168.100.2
docker exec host3 ping -c1 192.168.100.4
set +x
echo
echo " *** Testing ping from host4"
echo
set -x
docker exec host4 ping -c1 192.168.100.1
# Should fail because relays not allowed
! docker exec host4 ping -c1 192.168.100.2 -w5 || exit 1
docker exec host4 ping -c1 192.168.100.3
docker exec host4 sh -c 'kill 1'
docker exec host3 sh -c 'kill 1'
docker exec host2 sh -c 'kill 1'
docker exec lighthouse1 sh -c 'kill 1'
sleep 5
if [ "$(jobs -r)" ]
then
echo "nebula still running after SIGTERM sent" >&2
exit 1
fi

105
.github/workflows/smoke/smoke-vagrant.sh vendored Executable file

@@ -0,0 +1,105 @@
#!/bin/bash
set -e -x
set -o pipefail
export VAGRANT_CWD="$PWD/vagrant-$1"
mkdir -p logs
cleanup() {
echo
echo " *** cleanup"
echo
set +e
if [ "$(jobs -r)" ]
then
docker kill lighthouse1 host2
fi
vagrant destroy -f
}
trap cleanup EXIT
CONTAINER="nebula:${NAME:-smoke}"
docker run --name lighthouse1 --rm "$CONTAINER" -config lighthouse1.yml -test
docker run --name host2 --rm "$CONTAINER" -config host2.yml -test
vagrant up
vagrant ssh -c "cd /nebula && /nebula/$1-nebula -config host3.yml -test" -- -T
docker run --name lighthouse1 --device /dev/net/tun:/dev/net/tun --cap-add NET_ADMIN --rm "$CONTAINER" -config lighthouse1.yml 2>&1 | tee logs/lighthouse1 | sed -u 's/^/ [lighthouse1] /' &
sleep 1
docker run --name host2 --device /dev/net/tun:/dev/net/tun --cap-add NET_ADMIN --rm "$CONTAINER" -config host2.yml 2>&1 | tee logs/host2 | sed -u 's/^/ [host2] /' &
sleep 1
vagrant ssh -c "cd /nebula && sudo sh -c 'echo \$\$ >/nebula/pid && exec /nebula/$1-nebula -config host3.yml'" 2>&1 -- -T | tee logs/host3 | sed -u 's/^/ [host3] /' &
sleep 15
# grab tcpdump pcaps for debugging
docker exec lighthouse1 tcpdump -i nebula1 -q -w - -U 2>logs/lighthouse1.inside.log >logs/lighthouse1.inside.pcap &
docker exec lighthouse1 tcpdump -i eth0 -q -w - -U 2>logs/lighthouse1.outside.log >logs/lighthouse1.outside.pcap &
docker exec host2 tcpdump -i nebula1 -q -w - -U 2>logs/host2.inside.log >logs/host2.inside.pcap &
docker exec host2 tcpdump -i eth0 -q -w - -U 2>logs/host2.outside.log >logs/host2.outside.pcap &
# vagrant ssh -c "tcpdump -i nebula1 -q -w - -U" 2>logs/host3.inside.log >logs/host3.inside.pcap &
# vagrant ssh -c "tcpdump -i eth0 -q -w - -U" 2>logs/host3.outside.log >logs/host3.outside.pcap &
#docker exec host2 ncat -nklv 0.0.0.0 2000 &
#vagrant ssh -c "ncat -nklv 0.0.0.0 2000" &
#docker exec host2 ncat -e '/usr/bin/echo host2' -nkluv 0.0.0.0 3000 &
#vagrant ssh -c "ncat -e '/usr/bin/echo host3' -nkluv 0.0.0.0 3000" &
set +x
echo
echo " *** Testing ping from lighthouse1"
echo
set -x
docker exec lighthouse1 ping -c1 192.168.100.2
docker exec lighthouse1 ping -c1 192.168.100.3
set +x
echo
echo " *** Testing ping from host2"
echo
set -x
docker exec host2 ping -c1 192.168.100.1
# Should fail because not allowed by host3 inbound firewall
! docker exec host2 ping -c1 192.168.100.3 -w5 || exit 1
#set +x
#echo
#echo " *** Testing ncat from host2"
#echo
#set -x
# Should fail because not allowed by host3 inbound firewall
#! docker exec host2 ncat -nzv -w5 192.168.100.3 2000 || exit 1
#! docker exec host2 ncat -nzuv -w5 192.168.100.3 3000 | grep -q host3 || exit 1
set +x
echo
echo " *** Testing ping from host3"
echo
set -x
vagrant ssh -c "ping -c1 192.168.100.1" -- -T
vagrant ssh -c "ping -c1 192.168.100.2" -- -T
#set +x
#echo
#echo " *** Testing ncat from host3"
#echo
#set -x
#vagrant ssh -c "ncat -nzv -w5 192.168.100.2 2000"
#vagrant ssh -c "ncat -nzuv -w5 192.168.100.2 3000" | grep -q host2
vagrant ssh -c "sudo xargs kill </nebula/pid" -- -T
docker exec host2 sh -c 'kill 1'
docker exec lighthouse1 sh -c 'kill 1'
sleep 1
if [ "$(jobs -r)" ]
then
echo "nebula still running after SIGTERM sent" >&2
exit 1
fi


@@ -1,17 +1,55 @@
#!/bin/sh
#!/bin/bash
set -e -x
docker run --name lighthouse1 --rm nebula:smoke -config lighthouse1.yml -test
docker run --name host2 --rm nebula:smoke -config host2.yml -test
docker run --name host3 --rm nebula:smoke -config host3.yml -test
set -o pipefail
docker run --name lighthouse1 --device /dev/net/tun:/dev/net/tun --cap-add NET_ADMIN --rm nebula:smoke -config lighthouse1.yml &
mkdir -p logs
cleanup() {
echo
echo " *** cleanup"
echo
set +e
if [ "$(jobs -r)" ]
then
docker kill lighthouse1 host2 host3 host4
fi
}
trap cleanup EXIT
CONTAINER="nebula:${NAME:-smoke}"
docker run --name lighthouse1 --rm "$CONTAINER" -config lighthouse1.yml -test
docker run --name host2 --rm "$CONTAINER" -config host2.yml -test
docker run --name host3 --rm "$CONTAINER" -config host3.yml -test
docker run --name host4 --rm "$CONTAINER" -config host4.yml -test
docker run --name lighthouse1 --device /dev/net/tun:/dev/net/tun --cap-add NET_ADMIN --rm "$CONTAINER" -config lighthouse1.yml 2>&1 | tee logs/lighthouse1 | sed -u 's/^/ [lighthouse1] /' &
sleep 1
docker run --name host2 --device /dev/net/tun:/dev/net/tun --cap-add NET_ADMIN --rm nebula:smoke -config host2.yml &
docker run --name host2 --device /dev/net/tun:/dev/net/tun --cap-add NET_ADMIN --rm "$CONTAINER" -config host2.yml 2>&1 | tee logs/host2 | sed -u 's/^/ [host2] /' &
sleep 1
docker run --name host3 --device /dev/net/tun:/dev/net/tun --cap-add NET_ADMIN --rm nebula:smoke -config host3.yml &
docker run --name host3 --device /dev/net/tun:/dev/net/tun --cap-add NET_ADMIN --rm "$CONTAINER" -config host3.yml 2>&1 | tee logs/host3 | sed -u 's/^/ [host3] /' &
sleep 1
docker run --name host4 --device /dev/net/tun:/dev/net/tun --cap-add NET_ADMIN --rm "$CONTAINER" -config host4.yml 2>&1 | tee logs/host4 | sed -u 's/^/ [host4] /' &
sleep 1
# grab tcpdump pcaps for debugging
docker exec lighthouse1 tcpdump -i nebula1 -q -w - -U 2>logs/lighthouse1.inside.log >logs/lighthouse1.inside.pcap &
docker exec lighthouse1 tcpdump -i eth0 -q -w - -U 2>logs/lighthouse1.outside.log >logs/lighthouse1.outside.pcap &
docker exec host2 tcpdump -i nebula1 -q -w - -U 2>logs/host2.inside.log >logs/host2.inside.pcap &
docker exec host2 tcpdump -i eth0 -q -w - -U 2>logs/host2.outside.log >logs/host2.outside.pcap &
docker exec host3 tcpdump -i nebula1 -q -w - -U 2>logs/host3.inside.log >logs/host3.inside.pcap &
docker exec host3 tcpdump -i eth0 -q -w - -U 2>logs/host3.outside.log >logs/host3.outside.pcap &
docker exec host4 tcpdump -i nebula1 -q -w - -U 2>logs/host4.inside.log >logs/host4.inside.pcap &
docker exec host4 tcpdump -i eth0 -q -w - -U 2>logs/host4.outside.log >logs/host4.outside.pcap &
docker exec host2 ncat -nklv 0.0.0.0 2000 &
docker exec host3 ncat -nklv 0.0.0.0 2000 &
docker exec host2 ncat -e '/usr/bin/echo host2' -nkluv 0.0.0.0 3000 &
docker exec host3 ncat -e '/usr/bin/echo host3' -nkluv 0.0.0.0 3000 &
set +x
echo
@@ -27,7 +65,17 @@ echo " *** Testing ping from host2"
echo
set -x
docker exec host2 ping -c1 192.168.100.1
docker exec host2 ping -c1 192.168.100.3
# Should fail because not allowed by host3 inbound firewall
! docker exec host2 ping -c1 192.168.100.3 -w5 || exit 1
set +x
echo
echo " *** Testing ncat from host2"
echo
set -x
# Should fail because not allowed by host3 inbound firewall
! docker exec host2 ncat -nzv -w5 192.168.100.3 2000 || exit 1
! docker exec host2 ncat -nzuv -w5 192.168.100.3 3000 | grep -q host3 || exit 1
set +x
echo
@@ -36,3 +84,55 @@ echo
set -x
docker exec host3 ping -c1 192.168.100.1
docker exec host3 ping -c1 192.168.100.2
set +x
echo
echo " *** Testing ncat from host3"
echo
set -x
docker exec host3 ncat -nzv -w5 192.168.100.2 2000
docker exec host3 ncat -nzuv -w5 192.168.100.2 3000 | grep -q host2
set +x
echo
echo " *** Testing ping from host4"
echo
set -x
docker exec host4 ping -c1 192.168.100.1
# Should fail because not allowed by host4 outbound firewall
! docker exec host4 ping -c1 192.168.100.2 -w5 || exit 1
! docker exec host4 ping -c1 192.168.100.3 -w5 || exit 1
set +x
echo
echo " *** Testing ncat from host4"
echo
set -x
# Should fail because not allowed by host4 outbound firewall
! docker exec host4 ncat -nzv -w5 192.168.100.2 2000 || exit 1
! docker exec host4 ncat -nzv -w5 192.168.100.3 2000 || exit 1
! docker exec host4 ncat -nzuv -w5 192.168.100.2 3000 | grep -q host2 || exit 1
! docker exec host4 ncat -nzuv -w5 192.168.100.3 3000 | grep -q host3 || exit 1
set +x
echo
echo " *** Testing conntrack"
echo
set -x
# host2 can ping host3 now that host3 pinged it first
docker exec host2 ping -c1 192.168.100.3
# host4 can ping host2 once conntrack established
docker exec host2 ping -c1 192.168.100.4
docker exec host4 ping -c1 192.168.100.2
docker exec host4 sh -c 'kill 1'
docker exec host3 sh -c 'kill 1'
docker exec host2 sh -c 'kill 1'
docker exec lighthouse1 sh -c 'kill 1'
sleep 5
if [ "$(jobs -r)" ]
then
echo "nebula still running after SIGTERM sent" >&2
exit 1
fi


@@ -0,0 +1,7 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
config.vm.box = "generic/freebsd14"
config.vm.synced_folder "../build", "/nebula", type: "rsync"
end


@@ -0,0 +1,7 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
config.vm.box = "ubuntu/xenial32"
config.vm.synced_folder "../build", "/nebula"
end


@@ -0,0 +1,16 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
config.vm.box = "ubuntu/jammy64"
config.vm.synced_folder "../build", "/nebula"
config.vm.provision :shell do |shell|
shell.inline = <<-EOF
sed -i 's/GRUB_CMDLINE_LINUX=""/GRUB_CMDLINE_LINUX="ipv6.disable=1"/' /etc/default/grub
update-grub
EOF
shell.privileged = true
shell.reboot = true
end
end


@@ -0,0 +1,7 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
config.vm.box = "generic/netbsd9"
config.vm.synced_folder "../build", "/nebula", type: "rsync"
end


@@ -0,0 +1,7 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
config.vm.box = "generic/openbsd7"
config.vm.synced_folder "../build", "/nebula", type: "rsync"
end


@@ -18,51 +18,92 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Set up Go 1.14
uses: actions/setup-go@v1
with:
go-version: 1.14
id: go
- uses: actions/checkout@v4
- name: Check out code into the Go module directory
uses: actions/checkout@v1
- uses: actions/cache@v1
- uses: actions/setup-go@v5
with:
path: ~/go/pkg/mod
key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-
go-version: '1.24'
check-latest: true
- name: Build
run: make all
- name: Vet
run: make vet
- name: golangci-lint
uses: golangci/golangci-lint-action@v8
with:
version: v2.1
- name: Test
run: make test
- name: End 2 end
run: make e2evv
- name: Build test mobile
run: make build-test-mobile
- uses: actions/upload-artifact@v4
with:
name: e2e packet flow linux-latest
path: e2e/mermaid/linux-latest
if-no-files-found: warn
test-linux-boringcrypto:
name: Build and test on linux with boringcrypto
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-go@v5
with:
go-version: '1.24'
check-latest: true
- name: Build
run: make bin-boringcrypto
- name: Test
run: make test-boringcrypto
- name: End 2 end
run: make e2e GOEXPERIMENT=boringcrypto CGO_ENABLED=1 TEST_ENV="TEST_LOGS=1" TEST_FLAGS="-v -ldflags -checklinkname=0"
test-linux-pkcs11:
name: Build and test on linux with pkcs11
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-go@v5
with:
go-version: '1.22'
check-latest: true
- name: Build
run: make bin-pkcs11
- name: Test
run: make test-pkcs11
test:
name: Build and test on ${{ matrix.os }}
runs-on: ${{ matrix.os }}
strategy:
matrix:
os: [windows-latest, macOS-latest]
os: [windows-latest, macos-latest]
steps:
- name: Set up Go 1.14
uses: actions/setup-go@v1
with:
go-version: 1.14
id: go
- uses: actions/checkout@v4
- name: Check out code into the Go module directory
uses: actions/checkout@v1
- uses: actions/cache@v1
- uses: actions/setup-go@v5
with:
path: ~/go/pkg/mod
key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-
go-version: '1.24'
check-latest: true
- name: Build nebula
run: go build ./cmd/nebula
@@ -70,5 +111,22 @@ jobs:
- name: Build nebula-cert
run: go build ./cmd/nebula-cert
- name: Vet
run: make vet
- name: golangci-lint
uses: golangci/golangci-lint-action@v8
with:
version: v2.1
- name: Test
run: go test -v ./...
run: make test
- name: End 2 end
run: make e2evv
- uses: actions/upload-artifact@v4
with:
name: e2e packet flow ${{ matrix.os }}
path: e2e/mermaid/${{ matrix.os }}
if-no-files-found: warn

13
.gitignore vendored

@@ -4,9 +4,16 @@
/nebula-arm6
/nebula-darwin
/nebula.exe
/cert/*.crt
/cert/*.key
/coverage.out
/nebula-cert.exe
**/coverage.out
**/cover.out
/cpu.pprof
/build
/*.tar.gz
/e2e/mermaid/
**.crt
**.key
**.pem
**.pub
!/examples/quickstart-vagrant/ansible/roles/nebula/files/vagrant-test-ca.key
!/examples/quickstart-vagrant/ansible/roles/nebula/files/vagrant-test-ca.crt

23
.golangci.yaml Normal file

@@ -0,0 +1,23 @@
version: "2"
linters:
default: none
enable:
- testifylint
exclusions:
generated: lax
presets:
- comments
- common-false-positives
- legacy
- std-error-handling
paths:
- third_party$
- builtin$
- examples$
formatters:
exclusions:
generated: lax
paths:
- third_party$
- builtin$
- examples$


@@ -7,6 +7,559 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
## [Unreleased]
### Changed
- `default_local_cidr_any` now defaults to false, meaning that any firewall rule
intended to target an `unsafe_routes` entry must explicitly declare it via the
`local_cidr` field. This is almost always the intended behavior. This flag is
deprecated and will be removed in a future release.
## [1.9.4] - 2024-09-09
### Added
- Support UDP dialing with gVisor. (#1181)
### Changed
- Make some Nebula state programmatically available via control object. (#1188)
- Switch internal representation of IPs to netip, to prepare for IPv6 support
in the overlay. (#1173)
- Minor build and cleanup changes. (#1171, #1164, #1162)
- Various dependency updates. (#1195, #1190, #1174, #1168, #1167, #1161, #1147, #1146)
### Fixed
- Fix a bug on big endian hosts, like mips. (#1194)
- Fix a rare panic if a local index collision happens. (#1191)
- Fix integer wraparound in the calculation of handshake timeouts on 32-bit targets. (#1185)
## [1.9.3] - 2024-06-06
### Fixed
- Initialize messageCounter to 2 instead of verifying later. (#1156)
## [1.9.2] - 2024-06-03
### Fixed
- Ensure messageCounter is set before handshake is complete. (#1154)
## [1.9.1] - 2024-05-29
### Fixed
- Fixed a potential deadlock in GetOrHandshake. (#1151)
## [1.9.0] - 2024-05-07
### Deprecated
- This release adds a new setting `default_local_cidr_any` that defaults to
true to match previous behavior, but will default to false in the next
release (1.10). When set to false, `local_cidr` is matched correctly for
firewall rules on hosts acting as unsafe routers, and should be set for any
firewall rules you want to allow unsafe route hosts to access. See the issue
and example config for more details. (#1071, #1099)
### Added
- Nebula now has an official Docker image `nebulaoss/nebula` that is
distroless and contains just the `nebula` and `nebula-cert` binaries. You
can find it here: https://hub.docker.com/r/nebulaoss/nebula (#1037)
- Experimental binaries for `loong64` are now provided. (#1003)
- Added example service script for OpenRC. (#711)
- The SSH daemon now supports inlined host keys. (#1054)
- The SSH daemon now supports certificates with `sshd.trusted_cas`. (#1098)
### Changed
- Config setting `tun.unsafe_routes` is now reloadable. (#1083)
- Small documentation and internal improvements. (#1065, #1067, #1069, #1108,
#1109, #1111, #1135)
- Various dependency updates. (#1139, #1138, #1134, #1133, #1126, #1123, #1110,
#1094, #1092, #1087, #1086, #1085, #1072, #1063, #1059, #1055, #1053, #1047,
#1046, #1034, #1022)
### Removed
- Support for the deprecated `local_range` option has been removed. Please
change to `preferred_ranges` (which is also now reloadable). (#1043)
- We are now building with go1.22, which means that for Windows you need at
least Windows 10 or Windows Server 2016. This is because support for earlier
versions was removed in Go 1.21. See https://go.dev/doc/go1.21#windows (#981)
- Removed vagrant example, as it was unmaintained. (#1129)
- Removed Fedora and Arch nebula.service files, as they are maintained in the
upstream repos. (#1128, #1132)
- Remove the TCP round trip tracking metrics, as they never had correct data
and were an experiment to begin with. (#1114)
### Fixed
- Fixed a potential deadlock introduced in 1.8.1. (#1112)
- Fixed support for Linux when IPv6 has been disabled at the OS level. (#787)
- DNS will return NXDOMAIN now when there are no results. (#845)
- Allow `::` in `lighthouse.dns.host`. (#1115)
- Capitalization of `NotAfter` fixed in DNS TXT response. (#1127)
- Don't log invalid certificates. It is untrusted data and can cause a large
volume of logs. (#1116)
## [1.8.2] - 2024-01-08
### Fixed
- Fix multiple routines when listen.port is zero. This was a regression
introduced in v1.6.0. (#1057)
### Changed
- Small dependency update for Noise. (#1038)
## [1.8.1] - 2023-12-19
### Security
- Update `golang.org/x/crypto`, which includes a fix for CVE-2023-48795. (#1048)
### Fixed
- Fix a deadlock introduced in v1.8.0 that could occur during handshakes. (#1044)
- Fix mobile builds. (#1035)
## [1.8.0] - 2023-12-06
### Deprecated
- The next minor release of Nebula, 1.9.0, will require at least Windows 10 or
Windows Server 2016. This is because support for earlier versions was removed
in Go 1.21. See https://go.dev/doc/go1.21#windows
### Added
- Linux: Notify systemd of service readiness. This should resolve timing issues
with services that depend on Nebula being active. For an example of how to
enable this, see: `examples/service_scripts/nebula.service`. (#929)
- Windows: Use Registered IO (RIO) when possible. Testing on a Windows 11
machine shows ~50x improvement in throughput. (#905)
- NetBSD, OpenBSD: Added rudimentary support. (#916, #812)
- FreeBSD: Add support for naming tun devices. (#903)
### Changed
- `pki.disconnect_invalid` will now default to true. This means that once a
certificate expires, the tunnel will be disconnected. If you use SIGHUP to
reload certificates without restarting Nebula, you should ensure all of your
clients are on 1.7.0 or newer before you enable this feature. (#859)
- Limit how often a busy tunnel can requery the lighthouse. The new config
option `timers.requery_wait_duration` defaults to `60s`. (#940)
- The internal structures for hostmaps were refactored to reduce memory usage
and the potential for subtle bugs. (#843, #938, #953, #954, #955)
- Lots of dependency updates.
### Fixed
- Windows: Retry wintun device creation if it fails the first time. (#985)
- Fix issues with firewall reject packets that could cause panics. (#957)
- Fix relay migration during re-handshakes. (#964)
- Various other refactors and fixes. (#935, #952, #972, #961, #996, #1002,
#987, #1004, #1030, #1032, ...)
## [1.7.2] - 2023-06-01
### Fixed
- Fix a freeze during config reload if the `static_host_map` config was changed. (#886)
## [1.7.1] - 2023-05-18
### Fixed
- Fix IPv4 addresses returned by `static_host_map` DNS lookup queries being
treated as IPv6 addresses. (#877)
## [1.7.0] - 2023-05-17
### Added
- `nebula-cert ca` now supports encrypting the CA's private key with a
passphrase. Pass `-encrypt` in order to be prompted for a passphrase.
Encryption is performed using AES-256-GCM and Argon2id for KDF. KDF
parameters default to RFC recommendations, but can be overridden via CLI
flags `-argon-memory`, `-argon-parallelism`, and `-argon-iterations`. (#386)
- Support for curve P256 and BoringCrypto has been added. See README section
"Curve P256 and BoringCrypto" for more details. (#865, #861, #769, #856, #803)
- New firewall rule `local_cidr`. This could be used to filter destinations
when using `unsafe_routes`. (#507)
- Add `unsafe_route` option `install`. This controls whether the route is
installed in the systems routing table. (#831)
- Add `tun.use_system_route_table` option. Set to true to manage unsafe routes
directly on the system route table with gateway routes instead of in Nebula
configuration files. This is only supported on Linux. (#839)
- The metric `certificate.ttl_seconds` is now exposed via stats. (#782)
- Add `punchy.respond_delay` option. This allows you to change the delay
before attempting punchy.respond. Default is 5 seconds. (#721)
- Added SSH commands to allow the capture of a mutex profile. (#737)
- You can now set `lighthouse.calculated_remotes` to make it possible to do
handshakes without a lighthouse in certain configurations. (#759)
- The firewall can be configured to send REJECT replies instead of the default
DROP behavior. (#738)
- For macOS, an example launchd configuration file is now provided. (#762)
### Changed
- Lighthouses and other `static_host_map` entries that use DNS names will now
be automatically refreshed to detect when the IP address changes. (#796)
- Lighthouses send ACK replies back to clients so that they do not fall into
connection testing as often by clients. (#851, #408)
- Allow the `listen.host` option to contain a hostname. (#825)
- When Nebula switches to a new certificate (such as via SIGHUP), we now
rehandshake with all existing tunnels. This allows firewall groups to be
updated and `pki.disconnect_invalid` to know about the new certificate
expiration time. (#838, #857, #842, #840, #835, #828, #820, #807)
### Fixed
- Always disconnect blocklisted hosts, even if `pki.disconnect_invalid` is
not set. (#858)
- Dependencies updated and go1.20 required. (#780, #824, #855, #854)
- Fix possible race condition with relays. (#827)
- FreeBSD: Fix connection to the localhost's own Nebula IP. (#808)
- Normalize and document some common log field values. (#837, #811)
- Fix crash if you set unlucky values for the firewall timeout configuration
options. (#802)
- Make DNS queries case insensitive. (#793)
- Update example systemd configurations to want `nss-lookup`. (#791)
- Errors with SSH commands now go to the SSH tunnel instead of stderr. (#757)
- Fix a hang when shutting down Android. (#772)
## [1.6.1] - 2022-09-26
### Fixed
- Refuse to process underlay packets received from overlay IPs. This prevents
confusion on hosts that have unsafe routes configured. (#741)
- The ssh `reload` command did not work on Windows, since it relied on sending
a SIGHUP signal internally. This has been fixed. (#725)
- A regression in v1.5.2 that broke unsafe routes on Mobile clients has been
fixed. (#729)
## [1.6.0] - 2022-06-30
### Added
- Experimental: nebula clients can be configured to act as relays for other nebula clients.
Primarily useful when stubborn NATs make a direct tunnel impossible. (#678)
- Configuration option to report manually specified `ip:port`s to lighthouses. (#650)
- Windows arm64 build. (#638)
- `punchy` and most `lighthouse` config options now support hot reloading. (#649)
### Changed
- Build against go 1.18. (#656)
- Promoted `routines` config from experimental to supported feature. (#702)
- Dependencies updated. (#664)
### Fixed
- Packets destined for the same host that sent it will be returned on MacOS.
This matches the default behavior of other operating systems. (#501)
- `unsafe_route` configuration will no longer crash on Windows. (#648)
- A few panics that were introduced in 1.5.x. (#657, #658, #675)
### Security
- You can set `listen.send_recv_error` to control the conditions in which
`recv_error` messages are sent. Sending these messages can expose the fact
that Nebula is running on a host, but it speeds up re-handshaking. (#670)
### Removed
- `x509` config stanza support has been removed. (#685)
## [1.5.2] - 2021-12-14
### Added
- Warn when a non lighthouse node does not have lighthouse hosts configured. (#587)
### Changed
- No longer fatals if expired CA certificates are present in `pki.ca`, as long as 1 valid CA is present. (#599)
- `nebula-cert` will now enforce ipv4 addresses. (#604)
- Warn on macOS if an unsafe route cannot be created due to a collision with an
existing route. (#610)
- Warn if you set a route MTU on platforms where we don't support it. (#611)
### Fixed
- Rare race condition when tearing down a tunnel due to `recv_error` and sending packets on another thread. (#590)
- Bug in `routes` and `unsafe_routes` handling that was introduced in 1.5.0. (#595)
- `-test` mode no longer results in a crash. (#602)
### Removed
- `x509.ca` config alias for `pki.ca`. (#604)
### Security
- Upgraded `golang.org/x/crypto` to address an issue which allowed unauthenticated clients to cause a panic in SSH
servers. (#603)
## 1.5.1 - 2021-12-13
(This release was skipped due to discovering #610 and #611 after the tag was
created.)
## [1.5.0] - 2021-11-11
### Added
- SSH `print-cert` has a new `-raw` flag to get the PEM representation of a certificate. (#483)
- New build architecture: Linux `riscv64`. (#542)
- New experimental config option `remote_allow_ranges`. (#540)
- New config option `pki.disconnect_invalid` that will tear down tunnels when they become invalid (through expiry or
removal of root trust). Default is `false`. Note, this will not currently recognize if a remote has changed
certificates since the last handshake. (#370)
- New config option `unsafe_routes.<route>.metric` will set a metric for a specific unsafe route. It's useful if you have
more than one identical route and want to prefer one against the other. (#353)
### Changed
- Build against go 1.17. (#553)
- Build with `CGO_ENABLED=0` set, to create more portable binaries. This could
have an effect on DNS resolution if you rely on anything non-standard. (#421)
- Windows now uses the [wintun](https://www.wintun.net/) driver which does not require installation. This driver
is a large improvement over the TAP driver that was used in previous versions. If you had a previous version
of `nebula` running, you will want to disable the tap driver in Control Panel, or uninstall the `tap0901` driver
before running this version. (#289)
- Darwin binaries are now universal (works on both amd64 and arm64), signed, and shipped in a notarized zip file.
`nebula-darwin.zip` will be the only darwin release artifact. (#571)
- Darwin uses syscalls and AF_ROUTE to configure the routing table, instead of
using `/sbin/route`. Setting `tun.dev` is now allowed on Darwin as well, it
must be in the format `utun[0-9]+` or it will be ignored. (#163)
### Deprecated
- The `preferred_ranges` option has been supported as a replacement for
`local_range` since v1.0.0. It has now been documented and `local_range`
has been officially deprecated. (#541)
### Fixed
- Valid recv_error packets were incorrectly marked as "spoofing" and ignored. (#482)
- SSH server handles single `exec` requests correctly. (#483)
- Signing a certificate with `nebula-cert sign` now verifies that the supplied
ca-key matches the ca-crt. (#503)
- If `preferred_ranges` (or the deprecated `local_range`) is configured, we
will immediately switch to a preferred remote address after the reception of
a handshake packet (instead of waiting until 1,000 packets have been sent).
(#532)
- A race condition when `punchy.respond` is enabled and ensures the correct
vpn ip is sent a punch back response in highly queried node. (#566)
- Fix a rare crash during handshake due to a race condition. (#535)
## [1.4.0] - 2021-05-11
### Added
- Ability to output qr code images in `print`, `ca`, and `sign` modes for `nebula-cert`.
This is useful when configuring mobile clients. (#297)
- Experimental: Nebula can now do work on more than 2 cpu cores in send and receive paths via
the new `routines` config option. (#382, #391, #395)
- ICMP ping requests can be responded to when `tun.disabled` is `true`.
This is useful so that you can "ping" a lighthouse running in this mode. (#342)
- Run smoke tests via `make smoke-docker`. (#287)
- More reported stats: udp memory use on linux, build version (when using Prometheus), firewall,
handshake, and cached packet stats. (#390, #405, #450, #453)
- IPv6 support for the underlay network. (#369)
- End to end testing, run with `make e2e`. (#425, #427, #428)
### Changed
- Darwin will now log stdout/stderr to a file when using `-service` mode. (#303)
- Example systemd unit file now has a better-arranged startup order when using `sshd`,
along with other fixes. (#317, #412, #438)
- Reduced memory utilization/garbage collection. (#320, #323, #340)
- Reduced CPU utilization. (#329)
- Build against go 1.16. (#381)
- Refactored handshakes to improve performance and correctness. (#401, #402, #404, #416, #451)
- Improved roaming support for mobile clients. (#394, #457)
- Lighthouse performance and correctness improvements. (#406, #418, #429, #433, #437, #442, #449)
- Better ordered startup to enable `sshd`, `stats`, and `dns` subsystems to listen on
the nebula interface. (#375)
### Fixed
- No longer report handshake packets as `lost` in stats. (#331)
- Error handling in the `cert` package. (#339, #373)
- Orphaned pending hostmap entries are cleaned up. (#344)
- Most known data races are now resolved. (#396, #400, #424)
- Refuse to run a lighthouse on an ephemeral port. (#399)
- Removed the global references. (#423, #426, #446)
- Reloading via ssh command avoids a panic. (#447)
- Shutdown is now performed in a cleaner way. (#448)
- Logs will now find their way to Windows event viewer when running under `-service` mode
in Windows. (#443)
## [1.3.0] - 2020-09-22
### Added
- You can emit statistics about non-message packets by setting the option
`stats.message_metrics`. You can similarly emit detailed statistics about
lighthouse packets by setting the option `stats.lighthouse_metrics`. See
the example config for more details. (#230)
- We now support freebsd/amd64. This is experimental, please give us feedback.
(#103)
- We now release a binary for `linux/mips-softfloat` which has also been
stripped to reduce file size and hopefully have a better chance of running on
small mips devices. (#231)
- You can set `tun.disabled` to true to run a standalone lighthouse without a
tun device (and thus, without root). (#269)
- You can set `logging.disable_timestamp` to remove timestamps from log lines,
which is useful when output is redirected to a logging system that already
adds timestamps. (#288)
### Changed
- Handshakes should now trigger faster, as we try to be proactive with sending
them instead of waiting for the next timer tick in most cases. (#246, #265)
- Previously, we would drop the conntrack table whenever firewall rules were
changed during a SIGHUP. Now, we will maintain the table and just validate
that an entry still matches with the new rule set. (#233)
- Debug logs for firewall drops now include the reason. (#220, #239)
- Logs for handshakes now include the fingerprint of the remote host. (#262)
- Config item `pki.blacklist` is now `pki.blocklist`. (#272)
- Better support for older Linux kernels. We now only set `SO_REUSEPORT` if
`tun.routines` is greater than 1 (default is 1). We also only use the
`recvmmsg` syscall if `listen.batch` is greater than 1 (default is 64).
(#275)
- It is possible to run Nebula as a library inside of another process now.
Note that this is still experimental and the internal APIs around this might
change in minor version releases. (#279)
### Deprecated
- `pki.blacklist` is deprecated in favor of `pki.blocklist` with the same
functionality. Existing configs will continue to load for this release to
allow for migrations. (#272)
### Fixed
- `advmss` is now set correctly for each route table entry when `tun.routes`
is configured to have some routes with higher MTU. (#245)
- Packets that arrive on the tun device with an unroutable destination IP are
now dropped correctly, instead of wasting time making queries to the
lighthouses for IP `0.0.0.0`. (#267)
## [1.2.0] - 2020-04-08
### Added
@@ -118,7 +671,24 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- Initial public release.
[Unreleased]: https://github.com/slackhq/nebula/compare/v1.2.0...HEAD
[Unreleased]: https://github.com/slackhq/nebula/compare/v1.9.4...HEAD
[1.9.4]: https://github.com/slackhq/nebula/releases/tag/v1.9.4
[1.9.3]: https://github.com/slackhq/nebula/releases/tag/v1.9.3
[1.9.2]: https://github.com/slackhq/nebula/releases/tag/v1.9.2
[1.9.1]: https://github.com/slackhq/nebula/releases/tag/v1.9.1
[1.9.0]: https://github.com/slackhq/nebula/releases/tag/v1.9.0
[1.8.2]: https://github.com/slackhq/nebula/releases/tag/v1.8.2
[1.8.1]: https://github.com/slackhq/nebula/releases/tag/v1.8.1
[1.8.0]: https://github.com/slackhq/nebula/releases/tag/v1.8.0
[1.7.2]: https://github.com/slackhq/nebula/releases/tag/v1.7.2
[1.7.1]: https://github.com/slackhq/nebula/releases/tag/v1.7.1
[1.7.0]: https://github.com/slackhq/nebula/releases/tag/v1.7.0
[1.6.1]: https://github.com/slackhq/nebula/releases/tag/v1.6.1
[1.6.0]: https://github.com/slackhq/nebula/releases/tag/v1.6.0
[1.5.2]: https://github.com/slackhq/nebula/releases/tag/v1.5.2
[1.5.0]: https://github.com/slackhq/nebula/releases/tag/v1.5.0
[1.4.0]: https://github.com/slackhq/nebula/releases/tag/v1.4.0
[1.3.0]: https://github.com/slackhq/nebula/releases/tag/v1.3.0
[1.2.0]: https://github.com/slackhq/nebula/releases/tag/v1.2.0
[1.1.0]: https://github.com/slackhq/nebula/releases/tag/v1.1.0
[1.0.0]: https://github.com/slackhq/nebula/releases/tag/v1.0.0

LOGGING.md Normal file

@@ -0,0 +1,37 @@
### Logging conventions
A log message (the string/format passed to `Info`, `Error`, `Debug`, etc., as well as their `Sprintf` counterparts) should
be a descriptive message about the event and may contain specific identifying characteristics. Regardless of the
level of detail in the message, identifying characteristics should always be included via `WithField`, `WithFields`, or
`WithError`.
If an error is being logged, use `l.WithError(err)` so that there is better discoverability about the event as well
as the specific error condition.
#### Common fields
- `cert` - a `cert.NebulaCertificate` object; do not `.String()` this manually, `logrus` will marshal objects properly
for the formatter it is using.
- `fingerprint` - a single `NebulaCertificate` hex encoded fingerprint
- `fingerprints` - an array of `NebulaCertificate` hex encoded fingerprints
- `fwPacket` - a FirewallPacket object
- `handshake` - an object containing:
- `stage` - the current stage counter
- `style` - noise handshake style `ix_psk0`, `xx`, etc
- `header` - a nebula header object
- `udpAddr` - a `net.UDPAddr` object
- `udpIp` - a udp ip address
- `vpnIp` - vpn ip of the host (remote or local)
- `relay` - the vpnIp of the relay host that is or should be handling the relay packet
- `relayFrom` - The vpnIp of the initial sender of the relayed packet
- `relayTo` - The vpnIp of the final destination of a relayed packet
#### Example:
```
l.WithError(err).
WithField("vpnIp", IntIp(hostinfo.hostId)).
WithField("udpAddr", addr).
WithField("handshake", m{"stage": 1, "style": "ix"}).
Info("Invalid certificate from host")
```
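For relay events, the same convention applies using the `relay`, `relayFrom`, and `relayTo` fields listed above. A second sketch in the same style (the surrounding variable names are illustrative, not actual codebase identifiers):
```
l.WithError(err).
    WithField("relay", relayVpnIp).
    WithField("relayFrom", senderVpnIp).
    WithField("relayTo", targetVpnIp).
    Error("Failed to forward relayed packet")
```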

Makefile

@@ -1,7 +1,31 @@
NEBULA_CMD_PATH = "./cmd/nebula"
BUILD_NUMBER ?= dev+$(shell date -u '+%Y%m%d%H%M%S')
GO111MODULE = on
export GO111MODULE
CGO_ENABLED = 0
export CGO_ENABLED
# Set up OS specific bits
ifeq ($(OS),Windows_NT)
NEBULA_CMD_SUFFIX = .exe
NULL_FILE = nul
# RIO on windows does pointer stuff that makes go vet angry
VET_FLAGS = -unsafeptr=false
else
NEBULA_CMD_SUFFIX =
NULL_FILE = /dev/null
endif
# Only define the build number if we haven't already
ifndef BUILD_NUMBER
ifeq ($(shell git describe --exact-match 2>$(NULL_FILE)),)
BUILD_NUMBER = $(shell git describe --abbrev=0 --match "v*" | cut -dv -f2)-$(shell git branch --show-current)-$(shell git describe --long --dirty | cut -d- -f2-)
else
BUILD_NUMBER = $(shell git describe --exact-match --dirty | cut -dv -f2)
endif
endif
DOCKER_IMAGE_REPO ?= nebulaoss/nebula
DOCKER_IMAGE_TAG ?= latest
LDFLAGS = -X main.Build=$(BUILD_NUMBER)
ALL_LINUX = linux-amd64 \
linux-386 \
@@ -13,43 +37,118 @@ ALL_LINUX = linux-amd64 \
linux-mips \
linux-mipsle \
linux-mips64 \
linux-mips64le
linux-mips64le \
linux-mips-softfloat \
linux-riscv64 \
linux-loong64
ALL_FREEBSD = freebsd-amd64 \
freebsd-arm64
ALL_OPENBSD = openbsd-amd64 \
openbsd-arm64
ALL_NETBSD = netbsd-amd64 \
netbsd-arm64
ALL = $(ALL_LINUX) \
$(ALL_FREEBSD) \
$(ALL_OPENBSD) \
$(ALL_NETBSD) \
darwin-amd64 \
windows-amd64
darwin-arm64 \
windows-amd64 \
windows-arm64
e2e:
$(TEST_ENV) go test -tags=e2e_testing -count=1 $(TEST_FLAGS) ./e2e
e2ev: TEST_FLAGS += -v
e2ev: e2e
e2evv: TEST_ENV += TEST_LOGS=1
e2evv: e2ev
e2evvv: TEST_ENV += TEST_LOGS=2
e2evvv: e2ev
e2evvvv: TEST_ENV += TEST_LOGS=3
e2evvvv: e2ev
e2e-bench: TEST_FLAGS = -bench=. -benchmem -run=^$
e2e-bench: e2e
DOCKER_BIN = build/linux-amd64/nebula build/linux-amd64/nebula-cert
all: $(ALL:%=build/%/nebula) $(ALL:%=build/%/nebula-cert)
docker: docker/linux-$(shell go env GOARCH)
release: $(ALL:%=build/nebula-%.tar.gz)
release-linux: $(ALL_LINUX:%=build/nebula-%.tar.gz)
release-freebsd: $(ALL_FREEBSD:%=build/nebula-%.tar.gz)
release-openbsd: $(ALL_OPENBSD:%=build/nebula-%.tar.gz)
release-netbsd: $(ALL_NETBSD:%=build/nebula-%.tar.gz)
release-boringcrypto: build/nebula-linux-$(shell go env GOARCH)-boringcrypto.tar.gz
BUILD_ARGS += -trimpath
bin-windows: build/windows-amd64/nebula.exe build/windows-amd64/nebula-cert.exe
mv $? .
bin-windows-arm64: build/windows-arm64/nebula.exe build/windows-arm64/nebula-cert.exe
mv $? .
bin-darwin: build/darwin-amd64/nebula build/darwin-amd64/nebula-cert
mv $? .
bin-freebsd: build/freebsd-amd64/nebula build/freebsd-amd64/nebula-cert
mv $? .
bin-freebsd-arm64: build/freebsd-arm64/nebula build/freebsd-arm64/nebula-cert
mv $? .
bin-boringcrypto: build/linux-$(shell go env GOARCH)-boringcrypto/nebula build/linux-$(shell go env GOARCH)-boringcrypto/nebula-cert
mv $? .
bin-pkcs11: BUILD_ARGS += -tags pkcs11
bin-pkcs11: CGO_ENABLED = 1
bin-pkcs11: bin
bin:
go build -trimpath -ldflags "-X main.Build=$(BUILD_NUMBER)" -o ./nebula ${NEBULA_CMD_PATH}
go build -trimpath -ldflags "-X main.Build=$(BUILD_NUMBER)" -o ./nebula-cert ./cmd/nebula-cert
$(GOENV) go build $(BUILD_ARGS) -ldflags "$(LDFLAGS)" -o ./nebula${NEBULA_CMD_SUFFIX} ${NEBULA_CMD_PATH}
$(GOENV) go build $(BUILD_ARGS) -ldflags "$(LDFLAGS)" -o ./nebula-cert${NEBULA_CMD_SUFFIX} ./cmd/nebula-cert
install:
go install -trimpath -ldflags "-X main.Build=$(BUILD_NUMBER)" ${NEBULA_CMD_PATH}
go install -trimpath -ldflags "-X main.Build=$(BUILD_NUMBER)" ./cmd/nebula-cert
$(GOENV) go install $(BUILD_ARGS) -ldflags "$(LDFLAGS)" ${NEBULA_CMD_PATH}
$(GOENV) go install $(BUILD_ARGS) -ldflags "$(LDFLAGS)" ./cmd/nebula-cert
build/linux-arm-%: GOENV += GOARM=$(word 3, $(subst -, ,$*))
build/linux-mips-%: GOENV += GOMIPS=$(word 3, $(subst -, ,$*))
# Build an extra small binary for mips-softfloat
build/linux-mips-softfloat/%: LDFLAGS += -s -w
# boringcrypto
build/linux-amd64-boringcrypto/%: GOENV += GOEXPERIMENT=boringcrypto CGO_ENABLED=1
build/linux-arm64-boringcrypto/%: GOENV += GOEXPERIMENT=boringcrypto CGO_ENABLED=1
build/linux-amd64-boringcrypto/%: LDFLAGS += -checklinkname=0
build/linux-arm64-boringcrypto/%: LDFLAGS += -checklinkname=0
build/%/nebula: .FORCE
GOOS=$(firstword $(subst -, , $*)) \
GOARCH=$(word 2, $(subst -, ,$*)) \
GOARM=$(word 3, $(subst -, ,$*)) \
go build -trimpath -o $@ -ldflags "-X main.Build=$(BUILD_NUMBER)" ${NEBULA_CMD_PATH}
GOARCH=$(word 2, $(subst -, ,$*)) $(GOENV) \
go build $(BUILD_ARGS) -o $@ -ldflags "$(LDFLAGS)" ${NEBULA_CMD_PATH}
build/%/nebula-cert: .FORCE
GOOS=$(firstword $(subst -, , $*)) \
GOARCH=$(word 2, $(subst -, ,$*)) \
GOARM=$(word 3, $(subst -, ,$*)) \
go build -trimpath -o $@ -ldflags "-X main.Build=$(BUILD_NUMBER)" ./cmd/nebula-cert
GOARCH=$(word 2, $(subst -, ,$*)) $(GOENV) \
go build $(BUILD_ARGS) -o $@ -ldflags "$(LDFLAGS)" ./cmd/nebula-cert
build/%/nebula.exe: build/%/nebula
mv $< $@
@@ -63,16 +162,31 @@ build/nebula-%.tar.gz: build/%/nebula build/%/nebula-cert
build/nebula-%.zip: build/%/nebula.exe build/%/nebula-cert.exe
cd build/$* && zip ../nebula-$*.zip nebula.exe nebula-cert.exe
docker/%: build/%/nebula build/%/nebula-cert
docker build . $(DOCKER_BUILD_ARGS) -f docker/Dockerfile --platform "$(subst -,/,$*)" --tag "${DOCKER_IMAGE_REPO}:${DOCKER_IMAGE_TAG}" --tag "${DOCKER_IMAGE_REPO}:$(BUILD_NUMBER)"
vet:
go vet -v ./...
go vet $(VET_FLAGS) -v ./...
test:
go test -v ./...
test-boringcrypto:
GOEXPERIMENT=boringcrypto CGO_ENABLED=1 go test -ldflags "-checklinkname=0" -v ./...
test-pkcs11:
CGO_ENABLED=1 go test -v -tags pkcs11 ./...
test-cov-html:
go test -coverprofile=coverage.out
go tool cover -html=coverage.out
build-test-mobile:
GOARCH=amd64 GOOS=ios go build $(shell go list ./... | grep -v '/cmd/\|/examples/')
GOARCH=arm64 GOOS=ios go build $(shell go list ./... | grep -v '/cmd/\|/examples/')
GOARCH=amd64 GOOS=android go build $(shell go list ./... | grep -v '/cmd/\|/examples/')
GOARCH=arm64 GOOS=android go build $(shell go list ./... | grep -v '/cmd/\|/examples/')
bench:
go test -bench=.
@@ -84,23 +198,51 @@ bench-cpu-long:
go test -bench=. -benchtime=60s -cpuprofile=cpu.pprof
go tool pprof go-audit.test cpu.pprof
proto: nebula.pb.go cert/cert.pb.go
proto: nebula.pb.go cert/cert_v1.pb.go
nebula.pb.go: nebula.proto .FORCE
go build github.com/golang/protobuf/protoc-gen-go
PATH="$(PWD):$(PATH)" protoc --go_out=. $<
rm protoc-gen-go
go build github.com/gogo/protobuf/protoc-gen-gogofaster
PATH="$(CURDIR):$(PATH)" protoc --gogofaster_out=paths=source_relative:. $<
rm protoc-gen-gogofaster
cert/cert.pb.go: cert/cert.proto .FORCE
$(MAKE) -C cert cert.pb.go
service:
@echo > /dev/null
@echo > $(NULL_FILE)
$(eval NEBULA_CMD_PATH := "./cmd/nebula-service")
ifeq ($(words $(MAKECMDGOALS)),1)
$(MAKE) service ${.DEFAULT_GOAL} --no-print-directory
@$(MAKE) service ${.DEFAULT_GOAL} --no-print-directory
endif
fips140:
@echo > $(NULL_FILE)
$(eval GOENV += GOFIPS140=v1.0.0)
$(eval LDFLAGS += -checklinkname=0)
ifeq ($(words $(MAKECMDGOALS)),1)
@$(MAKE) fips140 ${.DEFAULT_GOAL} --no-print-directory
endif
bin-docker: bin build/linux-amd64/nebula build/linux-amd64/nebula-cert
smoke-docker: bin-docker
cd .github/workflows/smoke/ && ./build.sh
cd .github/workflows/smoke/ && ./smoke.sh
cd .github/workflows/smoke/ && NAME="smoke-p256" CURVE="P256" ./build.sh
cd .github/workflows/smoke/ && NAME="smoke-p256" ./smoke.sh
smoke-relay-docker: bin-docker
cd .github/workflows/smoke/ && ./build-relay.sh
cd .github/workflows/smoke/ && ./smoke-relay.sh
smoke-docker-race: BUILD_ARGS = -race
smoke-docker-race: CGO_ENABLED = 1
smoke-docker-race: smoke-docker
smoke-vagrant/%: bin-docker build/%/nebula
cd .github/workflows/smoke/ && ./build.sh $*
cd .github/workflows/smoke/ && ./smoke-vagrant.sh $*
.FORCE:
.PHONY: test test-cov-html bench bench-cpu bench-cpu-long bin proto release service
.PHONY: bench bench-cpu bench-cpu-long bin build-test-mobile e2e e2ev e2evv e2evvv e2evvvv fips140 proto release service smoke-docker smoke-docker-race test test-cov-html smoke-vagrant/%
.DEFAULT_GOAL := bin
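The `-X main.Build=$(BUILD_NUMBER)` portion of `LDFLAGS` relies on the Go linker overwriting a package-level string at link time. A minimal, standalone sketch of that mechanism (not the actual nebula `main` package):
```go
package main

import "fmt"

// Build is replaced at link time, e.g.:
//   go build -ldflags "-X main.Build=1.2.3" .
var Build = "dev"

func main() {
	fmt.Println("build:", Build)
}
```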

README.md

@@ -1,60 +1,115 @@
## What is Nebula?
## What is Nebula?
Nebula is a scalable overlay networking tool with a focus on performance, simplicity and security.
It lets you seamlessly connect computers anywhere in the world. Nebula is portable, and runs on Linux, OSX, and Windows.
(Also: keep this quiet, but we have an early prototype running on iOS).
It lets you seamlessly connect computers anywhere in the world. Nebula is portable, and runs on Linux, OSX, Windows, iOS, and Android.
It can be used to connect a small number of computers, but is also able to connect tens of thousands of computers.
Nebula incorporates a number of existing concepts like encryption, security groups, certificates,
and tunneling, and each of those individual pieces existed before Nebula in various forms.
and tunneling.
What makes Nebula different to existing offerings is that it brings all of these ideas together,
resulting in a sum that is greater than its individual parts.
Further documentation can be found [here](https://nebula.defined.net/docs/).
You can read more about Nebula [here](https://medium.com/p/884110a5579).
You can also join the NebulaOSS Slack group [here](https://join.slack.com/t/nebulaoss/shared_invite/enQtOTA5MDI4NDg3MTg4LTkwY2EwNTI4NzQyMzc0M2ZlODBjNWI3NTY1MzhiOThiMmZlZjVkMTI0NGY4YTMyNjUwMWEyNzNkZTJmYzQxOGU)
You can also join the NebulaOSS Slack group [here](https://join.slack.com/t/nebulaoss/shared_invite/zt-39pk4xopc-CUKlGcb5Z39dQ0cK1v7ehA).
## Supported Platforms
#### Desktop and Server
Check the [releases](https://github.com/slackhq/nebula/releases/latest) page for downloads or see the [Distribution Packages](https://github.com/slackhq/nebula#distribution-packages) section.
- Linux - 64 and 32 bit, arm, and others
- Windows
- MacOS
- Freebsd
#### Distribution Packages
- [Arch Linux](https://archlinux.org/packages/extra/x86_64/nebula/)
```sh
sudo pacman -S nebula
```
- [Fedora Linux](https://src.fedoraproject.org/rpms/nebula)
```sh
sudo dnf install nebula
```
- [Debian Linux](https://packages.debian.org/source/stable/nebula)
```sh
sudo apt install nebula
```
- [Alpine Linux](https://pkgs.alpinelinux.org/packages?name=nebula)
```sh
sudo apk add nebula
```
- [macOS Homebrew](https://github.com/Homebrew/homebrew-core/blob/HEAD/Formula/n/nebula.rb)
```sh
brew install nebula
```
- [Docker](https://hub.docker.com/r/nebulaoss/nebula)
```sh
docker pull nebulaoss/nebula
```
#### Mobile
- [iOS](https://apps.apple.com/us/app/mobile-nebula/id1509587936?itsct=apps_box&amp;itscg=30200)
- [Android](https://play.google.com/store/apps/details?id=net.defined.mobile_nebula&pcampaignid=pcampaignidMKT-Other-global-all-co-prtnr-py-PartBadge-Mar2515-1)
## Technical Overview
Nebula is a mutually authenticated peer-to-peer software defined network based on the [Noise Protocol Framework](https://noiseprotocol.org/).
Nebula is a mutually authenticated peer-to-peer software-defined network based on the [Noise Protocol Framework](https://noiseprotocol.org/).
Nebula uses certificates to assert a node's IP address, name, and membership within user-defined groups.
Nebula's user-defined groups allow for provider agnostic traffic filtering between nodes.
Discovery nodes allow individual peers to find each other and optionally use UDP hole punching to establish connections from behind most firewalls or NATs.
Discovery nodes (aka lighthouses) allow individual peers to find each other and optionally use UDP hole punching to establish connections from behind most firewalls or NATs.
Users can move data between nodes in any number of cloud service providers, datacenters, and endpoints, without needing to maintain a particular addressing scheme.
Nebula uses elliptic curve Diffie-Hellman key exchange, and AES-256-GCM in its default configuration.
Nebula uses Elliptic-curve Diffie-Hellman (`ECDH`) key exchange and `AES-256-GCM` in its default configuration.
Nebula was created to provide a mechanism for groups hosts to communicate securely, even across the internet, while enabling expressive firewall definitions similar in style to cloud security groups.
Nebula was created to provide a mechanism for groups of hosts to communicate securely, even across the internet, while enabling expressive firewall definitions similar in style to cloud security groups.
## Getting started (quickly)
To set up a Nebula network, you'll need:
#### 1. The [Nebula binaries](https://github.com/slackhq/nebula/releases) for your specific platform. Specifically you'll need `nebula-cert` and the specific nebula binary for each platform you use.
#### 1. The [Nebula binaries](https://github.com/slackhq/nebula/releases) or [Distribution Packages](https://github.com/slackhq/nebula#distribution-packages) for your specific platform. Specifically you'll need `nebula-cert` and the specific nebula binary for each platform you use.
#### 2. (Optional, but you really should..) At least one discovery node with a routable IP address, which we call a lighthouse.
Nebula lighthouses allow nodes to find each other, anywhere in the world. A lighthouse is the only node in a Nebula network whose IP should not change. Running a lighthouse requires very few compute resources, and you can easily use the least expensive option from a cloud hosting provider. If you're not sure which provider to use, a number of us have used $5/mo [DigitalOcean](https://digitalocean.com) droplets as lighthouses.
Once you have launched an instance, ensure that Nebula udp traffic (default port udp/4242) can reach it over the internet.
Nebula lighthouses allow nodes to find each other, anywhere in the world. A lighthouse is the only node in a Nebula network whose IP should not change. Running a lighthouse requires very few compute resources, and you can easily use the least expensive option from a cloud hosting provider. If you're not sure which provider to use, a number of us have used $6/mo [DigitalOcean](https://digitalocean.com) droplets as lighthouses.
Once you have launched an instance, ensure that Nebula udp traffic (default port udp/4242) can reach it over the internet.
#### 3. A Nebula certificate authority, which will be the root of trust for a particular Nebula network.
```
./nebula-cert ca -name "Myorganization, Inc"
```
This will create files named `ca.key` and `ca.cert` in the current directory. The `ca.key` file is the most sensitive file you'll create, because it is the key used to sign the certificates for individual nebula nodes/hosts. Please store this file somewhere safe, preferably with strong encryption.
```sh
./nebula-cert ca -name "Myorganization, Inc"
```
This will create files named `ca.key` and `ca.cert` in the current directory. The `ca.key` file is the most sensitive file you'll create, because it is the key used to sign the certificates for individual nebula nodes/hosts. Please store this file somewhere safe, preferably with strong encryption.
**Be aware!** By default, certificate authorities have a 1-year lifetime before expiration. See [this guide](https://nebula.defined.net/docs/guides/rotating-certificate-authority/) for details on rotating a CA.
#### 4. Nebula host keys and certificates generated from that certificate authority
This assumes you have four nodes, named lighthouse1, laptop, server1, host3. You can name the nodes any way you'd like, including FQDN. You'll also need to choose IP addresses and the associated subnet. In this example, we are creating a nebula network that will use 192.168.100.x/24 as its network range. This example also demonstrates nebula groups, which can later be used to define traffic rules in a nebula network.
```
```sh
./nebula-cert sign -name "lighthouse1" -ip "192.168.100.1/24"
./nebula-cert sign -name "laptop" -ip "192.168.100.2/24" -groups "laptop,home,ssh"
./nebula-cert sign -name "server1" -ip "192.168.100.9/24" -groups "servers"
./nebula-cert sign -name "host3" -ip "192.168.100.10/24"
```
By default, host certificates will expire 1 second before the CA expires. Use the `-duration` flag to specify a shorter lifetime.
#### 5. Configuration files for each host
Download a copy of the nebula [example configuration](https://github.com/slackhq/nebula/blob/master/examples/config.yml).
* On the lighthouse node, you'll need to ensure `am_lighthouse: true` is set.
@@ -64,18 +119,21 @@ Download a copy of the nebula [example configuration](https://github.com/slackhq
#### 6. Copy nebula credentials, configuration, and binaries to each host
For each host, copy the nebula binary to the host, along with `config.yaml` from step 5, and the files `ca.crt`, `{host}.crt`, and `{host}.key` from step 4.
For each host, copy the nebula binary to the host, along with `config.yml` from step 5, and the files `ca.crt`, `{host}.crt`, and `{host}.key` from step 4.
**DO NOT COPY `ca.key` TO INDIVIDUAL NODES.**
#### 7. Run nebula on each host
```sh
./nebula -config /path/to/config.yml
```
./nebula -config /path/to/config.yaml
```
For more detailed instructions, [find the full documentation here](https://nebula.defined.net/docs/).
## Building Nebula from source
Download go and clone this repo. Change to the nebula directory.
Make sure you have [go](https://go.dev/doc/install) installed and clone this repo. Change to the nebula directory.
To build nebula for all platforms:
`make all`
@@ -85,9 +143,27 @@ To build nebula for a specific platform (ex, Windows):
See the [Makefile](Makefile) for more details on build targets
## Curve P256, BoringCrypto and FIPS 140-3 mode
The default curve used for cryptographic handshakes and signatures is Curve25519. This is the recommended setting for most users. If your deployment has certain compliance requirements, you have the option of creating your CA using `nebula-cert ca -curve P256` to use NIST Curve P256. The CA will then sign certificates using ECDSA P256, and any hosts using these certificates will use P256 for ECDH handshakes.
Nebula can be built using the [BoringCrypto GOEXPERIMENT](https://github.com/golang/go/blob/go1.20/src/crypto/internal/boring/README.md) by running either of the following make targets:
```sh
make bin-boringcrypto
make release-boringcrypto
```
Nebula can also be built using the [FIPS 140-3](https://go.dev/doc/security/fips140) mode of Go by running either of the following make targets:
```sh
make fips140
make fips140 release
```
This is not the recommended default deployment, but may be useful based on your compliance requirements.
## Credits
Nebula was created at Slack Technologies, Inc by Nate Brown and Ryan Huber, with contributions from Oliver Fross, Alan Lam, Wade Simmons, and Lining Wang.

SECURITY.md Normal file

@@ -0,0 +1,12 @@
Security Policy
===============
Reporting a Vulnerability
-------------------------
If you believe you have found a security vulnerability with Nebula, please let
us know right away. We will investigate all reports and do our best to quickly
fix valid issues.
You can submit your report on [HackerOne](https://hackerone.com/slack) and our
security team will respond as soon as possible.


@@ -2,12 +2,28 @@ package nebula
import (
"fmt"
"net/netip"
"regexp"
"github.com/gaissmai/bart"
"github.com/slackhq/nebula/config"
)
type AllowList struct {
// The values of this cidrTree are `bool`, signifying allow/deny
cidrTree *CIDRTree
cidrTree *bart.Table[bool]
}
type RemoteAllowList struct {
AllowList *AllowList
// Inside Range Specific, keys of this tree are inside CIDRs and values
// are *AllowList
insideAllowLists *bart.Table[*AllowList]
}
type LocalAllowList struct {
AllowList *AllowList
// To avoid ambiguity, all rules must be true, or all rules must be false.
nameRules []AllowListNameRule
@@ -18,21 +34,225 @@ type AllowListNameRule struct {
Allow bool
}
func (al *AllowList) Allow(ip uint32) bool {
func NewLocalAllowListFromConfig(c *config.C, k string) (*LocalAllowList, error) {
var nameRules []AllowListNameRule
handleKey := func(key string, value any) (bool, error) {
if key == "interfaces" {
var err error
nameRules, err = getAllowListInterfaces(k, value)
if err != nil {
return false, err
}
return true, nil
}
return false, nil
}
al, err := newAllowListFromConfig(c, k, handleKey)
if err != nil {
return nil, err
}
return &LocalAllowList{AllowList: al, nameRules: nameRules}, nil
}
func NewRemoteAllowListFromConfig(c *config.C, k, rangesKey string) (*RemoteAllowList, error) {
al, err := newAllowListFromConfig(c, k, nil)
if err != nil {
return nil, err
}
remoteAllowRanges, err := getRemoteAllowRanges(c, rangesKey)
if err != nil {
return nil, err
}
return &RemoteAllowList{AllowList: al, insideAllowLists: remoteAllowRanges}, nil
}
// If the handleKey func returns true, the rest of the parsing is skipped
// for this key. This allows parsing of special values like `interfaces`.
func newAllowListFromConfig(c *config.C, k string, handleKey func(key string, value any) (bool, error)) (*AllowList, error) {
r := c.Get(k)
if r == nil {
return nil, nil
}
return newAllowList(k, r, handleKey)
}
// If the handleKey func returns true, the rest of the parsing is skipped
// for this key. This allows parsing of special values like `interfaces`.
func newAllowList(k string, raw any, handleKey func(key string, value any) (bool, error)) (*AllowList, error) {
rawMap, ok := raw.(map[string]any)
if !ok {
return nil, fmt.Errorf("config `%s` has invalid type: %T", k, raw)
}
tree := new(bart.Table[bool])
// Keep track of the rules we have added for both ipv4 and ipv6
type allowListRules struct {
firstValue bool
allValuesMatch bool
defaultSet bool
allValues bool
}
rules4 := allowListRules{firstValue: true, allValuesMatch: true, defaultSet: false}
rules6 := allowListRules{firstValue: true, allValuesMatch: true, defaultSet: false}
for rawCIDR, rawValue := range rawMap {
if handleKey != nil {
handled, err := handleKey(rawCIDR, rawValue)
if err != nil {
return nil, err
}
if handled {
continue
}
}
value, ok := config.AsBool(rawValue)
if !ok {
return nil, fmt.Errorf("config `%s` has invalid value (type %T): %v", k, rawValue, rawValue)
}
ipNet, err := netip.ParsePrefix(rawCIDR)
if err != nil {
return nil, fmt.Errorf("config `%s` has invalid CIDR: %s. %w", k, rawCIDR, err)
}
ipNet = netip.PrefixFrom(ipNet.Addr().Unmap(), ipNet.Bits())
tree.Insert(ipNet, value)
maskBits := ipNet.Bits()
var rules *allowListRules
if ipNet.Addr().Is4() {
rules = &rules4
} else {
rules = &rules6
}
if rules.firstValue {
rules.allValues = value
rules.firstValue = false
} else {
if value != rules.allValues {
rules.allValuesMatch = false
}
}
// Check if this is 0.0.0.0/0 or ::/0
if maskBits == 0 {
rules.defaultSet = true
}
}
if !rules4.defaultSet {
if rules4.allValuesMatch {
tree.Insert(netip.PrefixFrom(netip.IPv4Unspecified(), 0), !rules4.allValues)
} else {
return nil, fmt.Errorf("config `%s` contains both true and false rules, but no default set for 0.0.0.0/0", k)
}
}
if !rules6.defaultSet {
if rules6.allValuesMatch {
tree.Insert(netip.PrefixFrom(netip.IPv6Unspecified(), 0), !rules6.allValues)
} else {
return nil, fmt.Errorf("config `%s` contains both true and false rules, but no default set for ::/0", k)
}
}
return &AllowList{cidrTree: tree}, nil
}
func getAllowListInterfaces(k string, v any) ([]AllowListNameRule, error) {
var nameRules []AllowListNameRule
rawRules, ok := v.(map[string]any)
if !ok {
return nil, fmt.Errorf("config `%s.interfaces` is invalid (type %T): %v", k, v, v)
}
firstEntry := true
var allValues bool
for name, rawAllow := range rawRules {
allow, ok := config.AsBool(rawAllow)
if !ok {
return nil, fmt.Errorf("config `%s.interfaces` has invalid value (type %T): %v", k, rawAllow, rawAllow)
}
nameRE, err := regexp.Compile("^" + name + "$")
if err != nil {
return nil, fmt.Errorf("config `%s.interfaces` has invalid key: %s: %v", k, name, err)
}
nameRules = append(nameRules, AllowListNameRule{
Name: nameRE,
Allow: allow,
})
if firstEntry {
allValues = allow
firstEntry = false
} else {
if allow != allValues {
return nil, fmt.Errorf("config `%s.interfaces` values must all be the same true/false value", k)
}
}
}
return nameRules, nil
}
func getRemoteAllowRanges(c *config.C, k string) (*bart.Table[*AllowList], error) {
value := c.Get(k)
if value == nil {
return nil, nil
}
remoteAllowRanges := new(bart.Table[*AllowList])
rawMap, ok := value.(map[string]any)
if !ok {
return nil, fmt.Errorf("config `%s` has invalid type: %T", k, value)
}
for rawCIDR, rawValue := range rawMap {
allowList, err := newAllowList(fmt.Sprintf("%s.%s", k, rawCIDR), rawValue, nil)
if err != nil {
return nil, err
}
ipNet, err := netip.ParsePrefix(rawCIDR)
if err != nil {
return nil, fmt.Errorf("config `%s` has invalid CIDR: %s. %w", k, rawCIDR, err)
}
remoteAllowRanges.Insert(netip.PrefixFrom(ipNet.Addr().Unmap(), ipNet.Bits()), allowList)
}
return remoteAllowRanges, nil
}
func (al *AllowList) Allow(addr netip.Addr) bool {
if al == nil {
return true
}
result := al.cidrTree.MostSpecificContains(ip)
switch v := result.(type) {
case bool:
return v
default:
panic(fmt.Errorf("invalid state, allowlist returned: %T %v", result, result))
}
result, _ := al.cidrTree.Lookup(addr)
return result
}
func (al *AllowList) AllowName(name string) bool {
func (al *LocalAllowList) Allow(udpAddr netip.Addr) bool {
if al == nil {
return true
}
return al.AllowList.Allow(udpAddr)
}
func (al *LocalAllowList) AllowName(name string) bool {
if al == nil || len(al.nameRules) == 0 {
return true
}
@@ -46,3 +266,41 @@ func (al *AllowList) AllowName(name string) bool {
// If no rules match, return the default, which is the inverse of the rules
return !al.nameRules[0].Allow
}
func (al *RemoteAllowList) AllowUnknownVpnAddr(vpnAddr netip.Addr) bool {
if al == nil {
return true
}
return al.AllowList.Allow(vpnAddr)
}
func (al *RemoteAllowList) Allow(vpnAddr netip.Addr, udpAddr netip.Addr) bool {
if !al.getInsideAllowList(vpnAddr).Allow(udpAddr) {
return false
}
return al.AllowList.Allow(udpAddr)
}
func (al *RemoteAllowList) AllowAll(vpnAddrs []netip.Addr, udpAddr netip.Addr) bool {
if !al.AllowList.Allow(udpAddr) {
return false
}
for _, vpnAddr := range vpnAddrs {
if !al.getInsideAllowList(vpnAddr).Allow(udpAddr) {
return false
}
}
return true
}
func (al *RemoteAllowList) getInsideAllowList(vpnAddr netip.Addr) *AllowList {
if al.insideAllowLists != nil {
inside, ok := al.insideAllowLists.Lookup(vpnAddr)
if ok {
return inside
}
}
return nil
}
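To make the default-route behavior above concrete, here is a minimal in-package sketch of building and querying an `AllowList` from a raw rule map (the map literal stands in for parsed YAML config; error handling trimmed):
```go
package nebula

import (
	"fmt"
	"net/netip"
)

func exampleAllowList() {
	// Mixed true/false v4 rules require an explicit 0.0.0.0/0 entry.
	// No v6 rules are given, so newAllowList inserts a ::/0 entry itself
	// (allowed, in this case).
	raw := map[string]any{
		"0.0.0.0/0":  true,
		"10.0.0.0/8": false,
	}

	al, err := newAllowList("example_allowlist", raw, nil)
	if err != nil {
		panic(err)
	}

	fmt.Println(al.Allow(netip.MustParseAddr("1.1.1.1")))  // true  (matches 0.0.0.0/0)
	fmt.Println(al.Allow(netip.MustParseAddr("10.0.0.4"))) // false (matches 10.0.0.0/8)
}
```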


@@ -1,47 +1,146 @@
package nebula
import (
"net"
"net/netip"
"regexp"
"testing"
"github.com/gaissmai/bart"
"github.com/slackhq/nebula/config"
"github.com/slackhq/nebula/test"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestAllowList_Allow(t *testing.T) {
assert.Equal(t, true, ((*AllowList)(nil)).Allow(ip2int(net.ParseIP("1.1.1.1"))))
func TestNewAllowListFromConfig(t *testing.T) {
l := test.NewLogger()
c := config.NewC(l)
c.Settings["allowlist"] = map[string]any{
"192.168.0.0": true,
}
r, err := newAllowListFromConfig(c, "allowlist", nil)
require.EqualError(t, err, "config `allowlist` has invalid CIDR: 192.168.0.0. netip.ParsePrefix(\"192.168.0.0\"): no '/'")
assert.Nil(t, r)
tree := NewCIDRTree()
tree.AddCIDR(getCIDR("0.0.0.0/0"), true)
tree.AddCIDR(getCIDR("10.0.0.0/8"), false)
tree.AddCIDR(getCIDR("10.42.42.0/24"), true)
al := &AllowList{cidrTree: tree}
c.Settings["allowlist"] = map[string]any{
"192.168.0.0/16": "abc",
}
r, err = newAllowListFromConfig(c, "allowlist", nil)
require.EqualError(t, err, "config `allowlist` has invalid value (type string): abc")
assert.Equal(t, true, al.Allow(ip2int(net.ParseIP("1.1.1.1"))))
assert.Equal(t, false, al.Allow(ip2int(net.ParseIP("10.0.0.4"))))
assert.Equal(t, true, al.Allow(ip2int(net.ParseIP("10.42.42.42"))))
c.Settings["allowlist"] = map[string]any{
"192.168.0.0/16": true,
"10.0.0.0/8": false,
}
r, err = newAllowListFromConfig(c, "allowlist", nil)
require.EqualError(t, err, "config `allowlist` contains both true and false rules, but no default set for 0.0.0.0/0")
c.Settings["allowlist"] = map[string]any{
"0.0.0.0/0": true,
"10.0.0.0/8": false,
"10.42.42.0/24": true,
"fd00::/8": true,
"fd00:fd00::/16": false,
}
r, err = newAllowListFromConfig(c, "allowlist", nil)
require.EqualError(t, err, "config `allowlist` contains both true and false rules, but no default set for ::/0")
c.Settings["allowlist"] = map[string]any{
"0.0.0.0/0": true,
"10.0.0.0/8": false,
"10.42.42.0/24": true,
}
r, err = newAllowListFromConfig(c, "allowlist", nil)
if assert.NoError(t, err) {
assert.NotNil(t, r)
}
c.Settings["allowlist"] = map[string]any{
"0.0.0.0/0": true,
"10.0.0.0/8": false,
"10.42.42.0/24": true,
"::/0": false,
"fd00::/8": true,
"fd00:fd00::/16": false,
}
r, err = newAllowListFromConfig(c, "allowlist", nil)
if assert.NoError(t, err) {
assert.NotNil(t, r)
}
// Test interface names
c.Settings["allowlist"] = map[string]any{
"interfaces": map[string]any{
`docker.*`: "foo",
},
}
lr, err := NewLocalAllowListFromConfig(c, "allowlist")
require.EqualError(t, err, "config `allowlist.interfaces` has invalid value (type string): foo")
c.Settings["allowlist"] = map[string]any{
"interfaces": map[string]any{
`docker.*`: false,
`eth.*`: true,
},
}
lr, err = NewLocalAllowListFromConfig(c, "allowlist")
require.EqualError(t, err, "config `allowlist.interfaces` values must all be the same true/false value")
c.Settings["allowlist"] = map[string]any{
"interfaces": map[string]any{
`docker.*`: false,
},
}
lr, err = NewLocalAllowListFromConfig(c, "allowlist")
if assert.NoError(t, err) {
assert.NotNil(t, lr)
}
}
func TestAllowList_AllowName(t *testing.T) {
assert.Equal(t, true, ((*AllowList)(nil)).AllowName("docker0"))
func TestAllowList_Allow(t *testing.T) {
assert.True(t, ((*AllowList)(nil)).Allow(netip.MustParseAddr("1.1.1.1")))
tree := new(bart.Table[bool])
tree.Insert(netip.MustParsePrefix("0.0.0.0/0"), true)
tree.Insert(netip.MustParsePrefix("10.0.0.0/8"), false)
tree.Insert(netip.MustParsePrefix("10.42.42.42/32"), true)
tree.Insert(netip.MustParsePrefix("10.42.0.0/16"), true)
tree.Insert(netip.MustParsePrefix("10.42.42.0/24"), true)
tree.Insert(netip.MustParsePrefix("10.42.42.0/24"), false)
tree.Insert(netip.MustParsePrefix("::1/128"), true)
tree.Insert(netip.MustParsePrefix("::2/128"), false)
al := &AllowList{cidrTree: tree}
assert.True(t, al.Allow(netip.MustParseAddr("1.1.1.1")))
assert.False(t, al.Allow(netip.MustParseAddr("10.0.0.4")))
assert.True(t, al.Allow(netip.MustParseAddr("10.42.42.42")))
assert.False(t, al.Allow(netip.MustParseAddr("10.42.42.41")))
assert.True(t, al.Allow(netip.MustParseAddr("10.42.0.1")))
assert.True(t, al.Allow(netip.MustParseAddr("::1")))
assert.False(t, al.Allow(netip.MustParseAddr("::2")))
}
func TestLocalAllowList_AllowName(t *testing.T) {
assert.True(t, ((*LocalAllowList)(nil)).AllowName("docker0"))
rules := []AllowListNameRule{
{Name: regexp.MustCompile("^docker.*$"), Allow: false},
{Name: regexp.MustCompile("^tun.*$"), Allow: false},
}
al := &AllowList{nameRules: rules}
al := &LocalAllowList{nameRules: rules}
assert.Equal(t, false, al.AllowName("docker0"))
assert.Equal(t, false, al.AllowName("tun0"))
assert.Equal(t, true, al.AllowName("eth0"))
assert.False(t, al.AllowName("docker0"))
assert.False(t, al.AllowName("tun0"))
assert.True(t, al.AllowName("eth0"))
rules = []AllowListNameRule{
{Name: regexp.MustCompile("^eth.*$"), Allow: true},
{Name: regexp.MustCompile("^ens.*$"), Allow: true},
}
al = &AllowList{nameRules: rules}
al = &LocalAllowList{nameRules: rules}
assert.Equal(t, false, al.AllowName("docker0"))
assert.Equal(t, true, al.AllowName("eth0"))
assert.Equal(t, true, al.AllowName("ens5"))
assert.False(t, al.AllowName("docker0"))
assert.True(t, al.AllowName("eth0"))
assert.True(t, al.AllowName("ens5"))
}


@@ -26,7 +26,7 @@ func NewBits(bits uint64) *Bits {
}
}
func (b *Bits) Check(i uint64) bool {
func (b *Bits) Check(l logrus.FieldLogger, i uint64) bool {
// If i is the next number, return true.
if i > b.current || (i == 0 && b.firstSeen == false && b.current < b.length) {
return true
@@ -47,7 +47,7 @@ func (b *Bits) Check(i uint64) bool {
return false
}
func (b *Bits) Update(i uint64) bool {
func (b *Bits) Update(l *logrus.Logger, i uint64) bool {
// If i is the next number, return true and update current.
if i == b.current+1 {
// Report missed packets, we can only understand what was missed after the first window has been gone through


@@ -3,10 +3,12 @@ package nebula
import (
"testing"
"github.com/slackhq/nebula/test"
"github.com/stretchr/testify/assert"
)
func TestBits(t *testing.T) {
l := test.NewLogger()
b := NewBits(10)
// make sure it is the right size
@@ -14,46 +16,46 @@ func TestBits(t *testing.T) {
// This is initialized to zero - receive one. This should work.
assert.True(t, b.Check(1))
u := b.Update(1)
assert.True(t, b.Check(l, 1))
u := b.Update(l, 1)
assert.True(t, u)
assert.EqualValues(t, 1, b.current)
g := []bool{false, true, false, false, false, false, false, false, false, false}
assert.Equal(t, g, b.bits)
// Receive two
assert.True(t, b.Check(2))
u = b.Update(2)
assert.True(t, b.Check(l, 2))
u = b.Update(l, 2)
assert.True(t, u)
assert.EqualValues(t, 2, b.current)
g = []bool{false, true, true, false, false, false, false, false, false, false}
assert.Equal(t, g, b.bits)
// Receive two again - it will fail
assert.False(t, b.Check(2))
u = b.Update(2)
assert.False(t, b.Check(l, 2))
u = b.Update(l, 2)
assert.False(t, u)
assert.EqualValues(t, 2, b.current)
// Jump ahead to 15, which should clear everything and set the 6th element
assert.True(t, b.Check(15))
u = b.Update(15)
assert.True(t, b.Check(l, 15))
u = b.Update(l, 15)
assert.True(t, u)
assert.EqualValues(t, 15, b.current)
g = []bool{false, false, false, false, false, true, false, false, false, false}
assert.Equal(t, g, b.bits)
// Mark 14, which is allowed because it is in the window
assert.True(t, b.Check(14))
u = b.Update(14)
assert.True(t, b.Check(l, 14))
u = b.Update(l, 14)
assert.True(t, u)
assert.EqualValues(t, 15, b.current)
g = []bool{false, false, false, false, true, true, false, false, false, false}
assert.Equal(t, g, b.bits)
// Mark 5, which is not allowed because it is not in the window
assert.False(t, b.Check(5))
u = b.Update(5)
assert.False(t, b.Check(l, 5))
u = b.Update(l, 5)
assert.False(t, u)
assert.EqualValues(t, 15, b.current)
g = []bool{false, false, false, false, true, true, false, false, false, false}
@@ -61,63 +63,65 @@ func TestBits(t *testing.T) {
// make sure we handle wrapping around once to the current position
b = NewBits(10)
assert.True(t, b.Update(1))
assert.True(t, b.Update(11))
assert.True(t, b.Update(l, 1))
assert.True(t, b.Update(l, 11))
assert.Equal(t, []bool{false, true, false, false, false, false, false, false, false, false}, b.bits)
// Walk through a few windows in order
b = NewBits(10)
for i := uint64(0); i <= 100; i++ {
assert.True(t, b.Check(i), "Error while checking %v", i)
assert.True(t, b.Update(i), "Error while updating %v", i)
assert.True(t, b.Check(l, i), "Error while checking %v", i)
assert.True(t, b.Update(l, i), "Error while updating %v", i)
}
}
func TestBitsDupeCounter(t *testing.T) {
l := test.NewLogger()
b := NewBits(10)
b.lostCounter.Clear()
b.dupeCounter.Clear()
b.outOfWindowCounter.Clear()
assert.True(t, b.Update(1))
assert.True(t, b.Update(l, 1))
assert.Equal(t, int64(0), b.dupeCounter.Count())
assert.False(t, b.Update(1))
assert.False(t, b.Update(l, 1))
assert.Equal(t, int64(1), b.dupeCounter.Count())
assert.True(t, b.Update(2))
assert.True(t, b.Update(l, 2))
assert.Equal(t, int64(1), b.dupeCounter.Count())
assert.True(t, b.Update(3))
assert.True(t, b.Update(l, 3))
assert.Equal(t, int64(1), b.dupeCounter.Count())
assert.False(t, b.Update(1))
assert.False(t, b.Update(l, 1))
assert.Equal(t, int64(0), b.lostCounter.Count())
assert.Equal(t, int64(2), b.dupeCounter.Count())
assert.Equal(t, int64(0), b.outOfWindowCounter.Count())
}
func TestBitsOutOfWindowCounter(t *testing.T) {
l := test.NewLogger()
b := NewBits(10)
b.lostCounter.Clear()
b.dupeCounter.Clear()
b.outOfWindowCounter.Clear()
assert.True(t, b.Update(20))
assert.True(t, b.Update(l, 20))
assert.Equal(t, int64(0), b.outOfWindowCounter.Count())
assert.True(t, b.Update(21))
assert.True(t, b.Update(22))
assert.True(t, b.Update(23))
assert.True(t, b.Update(24))
assert.True(t, b.Update(25))
assert.True(t, b.Update(26))
assert.True(t, b.Update(27))
assert.True(t, b.Update(28))
assert.True(t, b.Update(29))
assert.True(t, b.Update(l, 21))
assert.True(t, b.Update(l, 22))
assert.True(t, b.Update(l, 23))
assert.True(t, b.Update(l, 24))
assert.True(t, b.Update(l, 25))
assert.True(t, b.Update(l, 26))
assert.True(t, b.Update(l, 27))
assert.True(t, b.Update(l, 28))
assert.True(t, b.Update(l, 29))
assert.Equal(t, int64(0), b.outOfWindowCounter.Count())
assert.False(t, b.Update(0))
assert.False(t, b.Update(l, 0))
assert.Equal(t, int64(1), b.outOfWindowCounter.Count())
//TODO: make sure lostCounter doesn't increase in orderly increment
@@ -127,23 +131,24 @@ func TestBitsOutOfWindowCounter(t *testing.T) {
}
func TestBitsLostCounter(t *testing.T) {
l := test.NewLogger()
b := NewBits(10)
b.lostCounter.Clear()
b.dupeCounter.Clear()
b.outOfWindowCounter.Clear()
//assert.True(t, b.Update(0))
assert.True(t, b.Update(0))
assert.True(t, b.Update(20))
assert.True(t, b.Update(21))
assert.True(t, b.Update(22))
assert.True(t, b.Update(23))
assert.True(t, b.Update(24))
assert.True(t, b.Update(25))
assert.True(t, b.Update(26))
assert.True(t, b.Update(27))
assert.True(t, b.Update(28))
assert.True(t, b.Update(29))
assert.True(t, b.Update(l, 0))
assert.True(t, b.Update(l, 20))
assert.True(t, b.Update(l, 21))
assert.True(t, b.Update(l, 22))
assert.True(t, b.Update(l, 23))
assert.True(t, b.Update(l, 24))
assert.True(t, b.Update(l, 25))
assert.True(t, b.Update(l, 26))
assert.True(t, b.Update(l, 27))
assert.True(t, b.Update(l, 28))
assert.True(t, b.Update(l, 29))
assert.Equal(t, int64(20), b.lostCounter.Count())
assert.Equal(t, int64(0), b.dupeCounter.Count())
assert.Equal(t, int64(0), b.outOfWindowCounter.Count())
@@ -153,56 +158,56 @@ func TestBitsLostCounter(t *testing.T) {
b.dupeCounter.Clear()
b.outOfWindowCounter.Clear()
assert.True(t, b.Update(0))
assert.True(t, b.Update(l, 0))
assert.Equal(t, int64(0), b.lostCounter.Count())
assert.True(t, b.Update(9))
assert.True(t, b.Update(l, 9))
assert.Equal(t, int64(0), b.lostCounter.Count())
// 10 will set 0 index, 0 was already set, no lost packets
assert.True(t, b.Update(10))
assert.True(t, b.Update(l, 10))
assert.Equal(t, int64(0), b.lostCounter.Count())
// 11 will set 1 index, 1 was missed, we should see 1 packet lost
assert.True(t, b.Update(11))
assert.True(t, b.Update(l, 11))
assert.Equal(t, int64(1), b.lostCounter.Count())
// Now let's fill in the window, should end up with 8 lost packets
assert.True(t, b.Update(12))
assert.True(t, b.Update(13))
assert.True(t, b.Update(14))
assert.True(t, b.Update(15))
assert.True(t, b.Update(16))
assert.True(t, b.Update(17))
assert.True(t, b.Update(18))
assert.True(t, b.Update(19))
assert.True(t, b.Update(l, 12))
assert.True(t, b.Update(l, 13))
assert.True(t, b.Update(l, 14))
assert.True(t, b.Update(l, 15))
assert.True(t, b.Update(l, 16))
assert.True(t, b.Update(l, 17))
assert.True(t, b.Update(l, 18))
assert.True(t, b.Update(l, 19))
assert.Equal(t, int64(8), b.lostCounter.Count())
// Jump ahead by a window size
assert.True(t, b.Update(29))
assert.True(t, b.Update(l, 29))
assert.Equal(t, int64(8), b.lostCounter.Count())
// Now lets walk ahead normally through the window, the missed packets should fill in
assert.True(t, b.Update(30))
assert.True(t, b.Update(31))
assert.True(t, b.Update(32))
assert.True(t, b.Update(33))
assert.True(t, b.Update(34))
assert.True(t, b.Update(35))
assert.True(t, b.Update(36))
assert.True(t, b.Update(37))
assert.True(t, b.Update(38))
assert.True(t, b.Update(l, 30))
assert.True(t, b.Update(l, 31))
assert.True(t, b.Update(l, 32))
assert.True(t, b.Update(l, 33))
assert.True(t, b.Update(l, 34))
assert.True(t, b.Update(l, 35))
assert.True(t, b.Update(l, 36))
assert.True(t, b.Update(l, 37))
assert.True(t, b.Update(l, 38))
// 39 packets tracked, 22 seen, 17 lost
assert.Equal(t, int64(17), b.lostCounter.Count())
// Jump ahead by 2 windows, should have recording 1 full window missing
assert.True(t, b.Update(58))
assert.True(t, b.Update(l, 58))
assert.Equal(t, int64(27), b.lostCounter.Count())
// Now lets walk ahead normally through the window, the missed packets should fill in from this window
assert.True(t, b.Update(59))
assert.True(t, b.Update(60))
assert.True(t, b.Update(61))
assert.True(t, b.Update(62))
assert.True(t, b.Update(63))
assert.True(t, b.Update(64))
assert.True(t, b.Update(65))
assert.True(t, b.Update(66))
assert.True(t, b.Update(67))
assert.True(t, b.Update(l, 59))
assert.True(t, b.Update(l, 60))
assert.True(t, b.Update(l, 61))
assert.True(t, b.Update(l, 62))
assert.True(t, b.Update(l, 63))
assert.True(t, b.Update(l, 64))
assert.True(t, b.Update(l, 65))
assert.True(t, b.Update(l, 66))
assert.True(t, b.Update(l, 67))
// 68 packets tracked, 32 seen, 36 missed
assert.Equal(t, int64(36), b.lostCounter.Count())
assert.Equal(t, int64(0), b.dupeCounter.Count())
@@ -212,10 +217,10 @@ func TestBitsLostCounter(t *testing.T) {
func BenchmarkBits(b *testing.B) {
z := NewBits(10)
for n := 0; n < b.N; n++ {
for i, _ := range z.bits {
for i := range z.bits {
z.bits[i] = true
}
for i, _ := range z.bits {
for i := range z.bits {
z.bits[i] = false
}
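The tests above all follow the same two-step pattern: `Check` is a non-mutating test of whether a counter would be accepted, and `Update` commits it to the window. A minimal in-package sketch of that flow (the helper name and the point at which a real caller would authenticate the packet are illustrative assumptions):
```go
package nebula

import "github.com/sirupsen/logrus"

// acceptCounter is a hypothetical helper: Check rejects duplicates and
// counters outside the replay window without changing state; Update then
// records the counter once the packet is otherwise acceptable.
func acceptCounter(l *logrus.Logger, b *Bits, counter uint64) bool {
	if !b.Check(l, counter) {
		return false // duplicate or out-of-window
	}
	// ... a real caller would authenticate/decrypt the packet here ...
	return b.Update(l, counter)
}
```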

boring.go Normal file

@@ -0,0 +1,8 @@
//go:build boringcrypto
// +build boringcrypto
package nebula
import "crypto/boring"
var boringEnabled = boring.Enabled

calculated_remote.go Normal file

@@ -0,0 +1,168 @@
package nebula
import (
"encoding/binary"
"fmt"
"math"
"net"
"net/netip"
"strconv"
"github.com/gaissmai/bart"
"github.com/slackhq/nebula/config"
)
// This allows us to "guess" what the remote might be for a host while we wait
// for the lighthouse response. See "lighthouse.calculated_remotes" in the
// example config file.
type calculatedRemote struct {
ipNet netip.Prefix
mask netip.Prefix
port uint32
}
func newCalculatedRemote(cidr, maskCidr netip.Prefix, port int) (*calculatedRemote, error) {
if maskCidr.Addr().BitLen() != cidr.Addr().BitLen() {
return nil, fmt.Errorf("invalid mask: %s for cidr: %s", maskCidr, cidr)
}
masked := maskCidr.Masked()
if port < 0 || port > math.MaxUint16 {
return nil, fmt.Errorf("invalid port: %d", port)
}
return &calculatedRemote{
ipNet: maskCidr,
mask: masked,
port: uint32(port),
}, nil
}
func (c *calculatedRemote) String() string {
return fmt.Sprintf("CalculatedRemote(mask=%v port=%d)", c.ipNet, c.port)
}
func (c *calculatedRemote) ApplyV4(addr netip.Addr) *V4AddrPort {
// Combine the masked bytes of the "mask" IP with the unmasked bytes of the overlay IP
maskb := net.CIDRMask(c.mask.Bits(), c.mask.Addr().BitLen())
mask := binary.BigEndian.Uint32(maskb[:])
b := c.mask.Addr().As4()
maskAddr := binary.BigEndian.Uint32(b[:])
b = addr.As4()
intAddr := binary.BigEndian.Uint32(b[:])
return &V4AddrPort{(maskAddr & mask) | (intAddr & ^mask), c.port}
}
func (c *calculatedRemote) ApplyV6(addr netip.Addr) *V6AddrPort {
mask := net.CIDRMask(c.mask.Bits(), c.mask.Addr().BitLen())
maskAddr := c.mask.Addr().As16()
calcAddr := addr.As16()
ap := V6AddrPort{Port: c.port}
maskb := binary.BigEndian.Uint64(mask[:8])
maskAddrb := binary.BigEndian.Uint64(maskAddr[:8])
calcAddrb := binary.BigEndian.Uint64(calcAddr[:8])
ap.Hi = (maskAddrb & maskb) | (calcAddrb & ^maskb)
maskb = binary.BigEndian.Uint64(mask[8:])
maskAddrb = binary.BigEndian.Uint64(maskAddr[8:])
calcAddrb = binary.BigEndian.Uint64(calcAddr[8:])
ap.Lo = (maskAddrb & maskb) | (calcAddrb & ^maskb)
return &ap
}
func NewCalculatedRemotesFromConfig(c *config.C, k string) (*bart.Table[[]*calculatedRemote], error) {
value := c.Get(k)
if value == nil {
return nil, nil
}
calculatedRemotes := new(bart.Table[[]*calculatedRemote])
rawMap, ok := value.(map[any]any)
if !ok {
return nil, fmt.Errorf("config `%s` has invalid type: %T", k, value)
}
for rawKey, rawValue := range rawMap {
rawCIDR, ok := rawKey.(string)
if !ok {
return nil, fmt.Errorf("config `%s` has invalid key (type %T): %v", k, rawKey, rawKey)
}
cidr, err := netip.ParsePrefix(rawCIDR)
if err != nil {
return nil, fmt.Errorf("config `%s` has invalid CIDR: %s", k, rawCIDR)
}
entry, err := newCalculatedRemotesListFromConfig(cidr, rawValue)
if err != nil {
return nil, fmt.Errorf("config '%s.%s': %w", k, rawCIDR, err)
}
calculatedRemotes.Insert(cidr, entry)
}
return calculatedRemotes, nil
}
func newCalculatedRemotesListFromConfig(cidr netip.Prefix, raw any) ([]*calculatedRemote, error) {
rawList, ok := raw.([]any)
if !ok {
return nil, fmt.Errorf("calculated_remotes entry has invalid type: %T", raw)
}
var l []*calculatedRemote
for _, e := range rawList {
c, err := newCalculatedRemotesEntryFromConfig(cidr, e)
if err != nil {
return nil, fmt.Errorf("calculated_remotes entry: %w", err)
}
l = append(l, c)
}
return l, nil
}
func newCalculatedRemotesEntryFromConfig(cidr netip.Prefix, raw any) (*calculatedRemote, error) {
rawMap, ok := raw.(map[any]any)
if !ok {
return nil, fmt.Errorf("invalid type: %T", raw)
}
rawValue := rawMap["mask"]
if rawValue == nil {
return nil, fmt.Errorf("missing mask: %v", rawMap)
}
rawMask, ok := rawValue.(string)
if !ok {
return nil, fmt.Errorf("invalid mask (type %T): %v", rawValue, rawValue)
}
maskCidr, err := netip.ParsePrefix(rawMask)
if err != nil {
return nil, fmt.Errorf("invalid mask: %s", rawMask)
}
var port int
rawValue = rawMap["port"]
if rawValue == nil {
return nil, fmt.Errorf("missing port: %v", rawMap)
}
switch v := rawValue.(type) {
case int:
port = v
case string:
port, err = strconv.Atoi(v)
if err != nil {
return nil, fmt.Errorf("invalid port: %s: %w", v, err)
}
default:
return nil, fmt.Errorf("invalid port (type %T): %v", rawValue, rawValue)
}
return newCalculatedRemote(cidr, maskCidr, port)
}
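The masking in `ApplyV4` is easier to follow with concrete numbers. A standalone sketch that reproduces the arithmetic for the `192.168.1.0/24` case exercised by the tests below (it inlines the bit math rather than calling the unexported type):
```go
package main

import (
	"encoding/binary"
	"fmt"
	"net"
	"net/netip"
)

func main() {
	// Keep the top 24 bits of the mask address and the bottom 8 bits
	// of the overlay address.
	maskPrefix := netip.MustParsePrefix("192.168.1.0/24")
	overlay := netip.MustParseAddr("10.0.10.182")

	mask := binary.BigEndian.Uint32(net.CIDRMask(maskPrefix.Bits(), 32))

	m := maskPrefix.Addr().As4()
	o := overlay.As4()
	combined := (binary.BigEndian.Uint32(m[:]) & mask) |
		(binary.BigEndian.Uint32(o[:]) &^ mask)

	var out [4]byte
	binary.BigEndian.PutUint32(out[:], combined)
	fmt.Println(netip.AddrFrom4(out)) // 192.168.1.182
}
```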

calculated_remote_test.go Normal file

@@ -0,0 +1,81 @@
package nebula
import (
"net/netip"
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestCalculatedRemoteApply(t *testing.T) {
// Test v4 addresses
ipNet := netip.MustParsePrefix("192.168.1.0/24")
c, err := newCalculatedRemote(ipNet, ipNet, 4242)
require.NoError(t, err)
input, err := netip.ParseAddr("10.0.10.182")
require.NoError(t, err)
expected, err := netip.ParseAddr("192.168.1.182")
require.NoError(t, err)
assert.Equal(t, netAddrToProtoV4AddrPort(expected, 4242), c.ApplyV4(input))
// Test v6 addresses
ipNet = netip.MustParsePrefix("ffff:ffff:ffff:ffff::0/64")
c, err = newCalculatedRemote(ipNet, ipNet, 4242)
require.NoError(t, err)
input, err = netip.ParseAddr("beef:beef:beef:beef:beef:beef:beef:beef")
require.NoError(t, err)
expected, err = netip.ParseAddr("ffff:ffff:ffff:ffff:beef:beef:beef:beef")
require.NoError(t, err)
assert.Equal(t, netAddrToProtoV6AddrPort(expected, 4242), c.ApplyV6(input))
// Test v6 addresses part 2
ipNet = netip.MustParsePrefix("ffff:ffff:ffff:ffff:ffff::0/80")
c, err = newCalculatedRemote(ipNet, ipNet, 4242)
require.NoError(t, err)
input, err = netip.ParseAddr("beef:beef:beef:beef:beef:beef:beef:beef")
require.NoError(t, err)
expected, err = netip.ParseAddr("ffff:ffff:ffff:ffff:ffff:beef:beef:beef")
require.NoError(t, err)
assert.Equal(t, netAddrToProtoV6AddrPort(expected, 4242), c.ApplyV6(input))
// Test v6 addresses part 2
ipNet = netip.MustParsePrefix("ffff:ffff:ffff::0/48")
c, err = newCalculatedRemote(ipNet, ipNet, 4242)
require.NoError(t, err)
input, err = netip.ParseAddr("beef:beef:beef:beef:beef:beef:beef:beef")
require.NoError(t, err)
expected, err = netip.ParseAddr("ffff:ffff:ffff:beef:beef:beef:beef:beef")
require.NoError(t, err)
assert.Equal(t, netAddrToProtoV6AddrPort(expected, 4242), c.ApplyV6(input))
}
func Test_newCalculatedRemote(t *testing.T) {
c, err := newCalculatedRemote(netip.MustParsePrefix("1::1/128"), netip.MustParsePrefix("1.0.0.0/32"), 4242)
require.EqualError(t, err, "invalid mask: 1.0.0.0/32 for cidr: 1::1/128")
require.Nil(t, c)
c, err = newCalculatedRemote(netip.MustParsePrefix("1.0.0.0/32"), netip.MustParsePrefix("1::1/128"), 4242)
require.EqualError(t, err, "invalid mask: 1::1/128 for cidr: 1.0.0.0/32")
require.Nil(t, c)
c, err = newCalculatedRemote(netip.MustParsePrefix("1.0.0.0/32"), netip.MustParsePrefix("1.0.0.0/32"), 4242)
require.NoError(t, err)
require.NotNil(t, c)
c, err = newCalculatedRemote(netip.MustParsePrefix("1::1/128"), netip.MustParsePrefix("1::1/128"), 4242)
require.NoError(t, err)
require.NotNil(t, c)
}

cert.go

@@ -1,159 +0,0 @@
package nebula
import (
"errors"
"fmt"
"io/ioutil"
"strings"
"time"
"github.com/slackhq/nebula/cert"
)
var trustedCAs *cert.NebulaCAPool
type CertState struct {
certificate *cert.NebulaCertificate
rawCertificate []byte
rawCertificateNoKey []byte
publicKey []byte
privateKey []byte
}
func NewCertState(certificate *cert.NebulaCertificate, privateKey []byte) (*CertState, error) {
// Marshal the certificate to ensure it is valid
rawCertificate, err := certificate.Marshal()
if err != nil {
return nil, fmt.Errorf("invalid nebula certificate on interface: %s", err)
}
publicKey := certificate.Details.PublicKey
cs := &CertState{
rawCertificate: rawCertificate,
certificate: certificate, // PublicKey has been set to nil above
privateKey: privateKey,
publicKey: publicKey,
}
cs.certificate.Details.PublicKey = nil
rawCertNoKey, err := cs.certificate.Marshal()
if err != nil {
return nil, fmt.Errorf("error marshalling certificate no key: %s", err)
}
cs.rawCertificateNoKey = rawCertNoKey
// put public key back
cs.certificate.Details.PublicKey = cs.publicKey
return cs, nil
}
func NewCertStateFromConfig(c *Config) (*CertState, error) {
var pemPrivateKey []byte
var err error
privPathOrPEM := c.GetString("pki.key", "")
if privPathOrPEM == "" {
// Support backwards compat with the old x509
//TODO: remove after this is rolled out everywhere - NB 2018/02/23
privPathOrPEM = c.GetString("x509.key", "")
}
if privPathOrPEM == "" {
return nil, errors.New("no pki.key path or PEM data provided")
}
if strings.Contains(privPathOrPEM, "-----BEGIN") {
pemPrivateKey = []byte(privPathOrPEM)
privPathOrPEM = "<inline>"
} else {
pemPrivateKey, err = ioutil.ReadFile(privPathOrPEM)
if err != nil {
return nil, fmt.Errorf("unable to read pki.key file %s: %s", privPathOrPEM, err)
}
}
rawKey, _, err := cert.UnmarshalX25519PrivateKey(pemPrivateKey)
if err != nil {
return nil, fmt.Errorf("error while unmarshaling pki.key %s: %s", privPathOrPEM, err)
}
var rawCert []byte
pubPathOrPEM := c.GetString("pki.cert", "")
if pubPathOrPEM == "" {
// Support backwards compat with the old x509
//TODO: remove after this is rolled out everywhere - NB 2018/02/23
pubPathOrPEM = c.GetString("x509.cert", "")
}
if pubPathOrPEM == "" {
return nil, errors.New("no pki.cert path or PEM data provided")
}
if strings.Contains(pubPathOrPEM, "-----BEGIN") {
rawCert = []byte(pubPathOrPEM)
pubPathOrPEM = "<inline>"
} else {
rawCert, err = ioutil.ReadFile(pubPathOrPEM)
if err != nil {
return nil, fmt.Errorf("unable to read pki.cert file %s: %s", pubPathOrPEM, err)
}
}
nebulaCert, _, err := cert.UnmarshalNebulaCertificateFromPEM(rawCert)
if err != nil {
return nil, fmt.Errorf("error while unmarshaling pki.cert %s: %s", pubPathOrPEM, err)
}
if nebulaCert.Expired(time.Now()) {
return nil, fmt.Errorf("nebula certificate for this host is expired")
}
if len(nebulaCert.Details.Ips) == 0 {
return nil, fmt.Errorf("no IPs encoded in certificate")
}
if err = nebulaCert.VerifyPrivateKey(rawKey); err != nil {
return nil, fmt.Errorf("private key is not a pair with public key in nebula cert")
}
return NewCertState(nebulaCert, rawKey)
}
func loadCAFromConfig(c *Config) (*cert.NebulaCAPool, error) {
var rawCA []byte
var err error
caPathOrPEM := c.GetString("pki.ca", "")
if caPathOrPEM == "" {
// Support backwards compat with the old x509
//TODO: remove after this is rolled out everywhere - NB 2018/02/23
caPathOrPEM = c.GetString("x509.ca", "")
}
if caPathOrPEM == "" {
return nil, errors.New("no pki.ca path or PEM data provided")
}
if strings.Contains(caPathOrPEM, "-----BEGIN") {
rawCA = []byte(caPathOrPEM)
caPathOrPEM = "<inline>"
} else {
rawCA, err = ioutil.ReadFile(caPathOrPEM)
if err != nil {
return nil, fmt.Errorf("unable to read pki.ca file %s: %s", caPathOrPEM, err)
}
}
CAs, err := cert.NewCAPoolFromBytes(rawCA)
if err != nil {
return nil, fmt.Errorf("error while adding CA certificate to CA trust store: %s", err)
}
// pki.blacklist entered the scene at about the same time we aliased x509 to pki, not supporting backwards compat
for _, fp := range c.GetStringSlice("pki.blacklist", []string{}) {
l.WithField("fingerprint", fp).Infof("Blacklisting cert")
CAs.BlacklistFingerprint(fp)
}
return CAs, nil
}


@@ -1,9 +1,9 @@
GO111MODULE = on
export GO111MODULE
cert.pb.go: cert.proto .FORCE
go build github.com/golang/protobuf/protoc-gen-go
PATH="$(PWD):$(PATH)" protoc --go_out=. $<
cert_v1.pb.go: cert_v1.proto .FORCE
go build google.golang.org/protobuf/cmd/protoc-gen-go
PATH="$(CURDIR):$(PATH)" protoc --go_out=. --go_opt=paths=source_relative $<
rm protoc-gen-go
.FORCE:


@@ -2,14 +2,25 @@
This is a library for interacting with `nebula` style certificates and authorities.
A `protobuf` definition of the certificate format is also included
There are now 2 versions of `nebula` certificates:
### Compiling the protobuf definition
## v1
Make sure you have `protoc` installed.
This version is deprecated.
A `protobuf` definition of the certificate format is included at `cert_v1.proto`
To compile the definition you will need `protoc` installed.
To compile for `go` with the same version of protobuf specified in go.mod:
```bash
make
make proto
```
## v2
This is the latest version. It uses asn.1 DER encoding, supports both ipv4 and ipv6, and tolerates
future certificate changes better than v1.
`cert_v2.asn1` defines the wire format and can be used to compile marshalers.
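For orientation, here is a minimal, hedged sketch of reading a certificate with the version-agnostic API introduced later in this diff (`UnmarshalCertificateFromPEM` and the `Certificate` interface). The `ca.crt` path is a placeholder, not something defined by this change.

```go
package main

import (
	"fmt"
	"os"

	"github.com/slackhq/nebula/cert"
)

func main() {
	raw, err := os.ReadFile("ca.crt") // placeholder path
	if err != nil {
		panic(err)
	}

	// UnmarshalCertificateFromPEM consumes the first PEM block and returns the remainder.
	c, rest, err := cert.UnmarshalCertificateFromPEM(raw)
	if err != nil {
		panic(err)
	}

	// Version() reports whether this is a v1 (protobuf) or v2 (asn.1 DER) certificate;
	// the other accessors behave the same either way.
	fmt.Println(c.Version(), c.Name(), c.IsCA(), c.Networks(), len(rest))
}
```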

cert/asn1.go Normal file

@@ -0,0 +1,52 @@
package cert
import (
"golang.org/x/crypto/cryptobyte"
"golang.org/x/crypto/cryptobyte/asn1"
)
// readOptionalASN1Boolean reads an asn.1 boolean stored directly under a specific tag, instead of an asn.1 tag wrapping a BOOLEAN value
// https://github.com/golang/go/issues/64811#issuecomment-1944446920
func readOptionalASN1Boolean(b *cryptobyte.String, out *bool, tag asn1.Tag, defaultValue bool) bool {
var present bool
var child cryptobyte.String
if !b.ReadOptionalASN1(&child, &present, tag) {
return false
}
if !present {
*out = defaultValue
return true
}
// Ensure we have 1 byte
if len(child) == 1 {
*out = child[0] > 0
return true
}
return false
}
// readOptionalASN1Byte reads an asn.1 uint8 stored directly under a specific tag, instead of an asn.1 tag wrapping a uint8 value
// Similar issue as with readOptionalASN1Boolean
func readOptionalASN1Byte(b *cryptobyte.String, out *byte, tag asn1.Tag, defaultValue byte) bool {
var present bool
var child cryptobyte.String
if !b.ReadOptionalASN1(&child, &present, tag) {
return false
}
if !present {
*out = defaultValue
return true
}
// Ensure we have 1 byte
if len(child) == 1 {
*out = child[0]
return true
}
return false
}
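As a self-contained illustration of the pattern these helpers handle (an optional, context-specific element whose single content byte is the value, rather than a tag wrapping a full ASN.1 BOOLEAN), the following sketch encodes and decodes such an element with `cryptobyte` directly. It is an example, not code from this change.

```go
package main

import (
	"fmt"

	"golang.org/x/crypto/cryptobyte"
	"golang.org/x/crypto/cryptobyte/asn1"
)

func main() {
	tag := asn1.Tag(1).ContextSpecific()

	// Encode a context-specific [1] element whose content is the single byte 0x01.
	var b cryptobyte.Builder
	b.AddASN1(tag, func(child *cryptobyte.Builder) {
		child.AddUint8(1)
	})
	encoded, err := b.Bytes()
	if err != nil {
		panic(err)
	}

	// Decode it the same way readOptionalASN1Boolean does: optional element,
	// fall back to a default when absent, and require exactly one content byte.
	s := cryptobyte.String(encoded)
	var child cryptobyte.String
	var present bool
	val := false // defaultValue
	if s.ReadOptionalASN1(&child, &present, tag) && present && len(child) == 1 {
		val = child[0] > 0
	}
	fmt.Println(val) // true
}
```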


@@ -1,120 +0,0 @@
package cert
import (
"fmt"
"strings"
"time"
)
type NebulaCAPool struct {
CAs map[string]*NebulaCertificate
certBlacklist map[string]struct{}
}
// NewCAPool creates a CAPool
func NewCAPool() *NebulaCAPool {
ca := NebulaCAPool{
CAs: make(map[string]*NebulaCertificate),
certBlacklist: make(map[string]struct{}),
}
return &ca
}
func NewCAPoolFromBytes(caPEMs []byte) (*NebulaCAPool, error) {
pool := NewCAPool()
var err error
for {
caPEMs, err = pool.AddCACertificate(caPEMs)
if err != nil {
return nil, err
}
if caPEMs == nil || len(caPEMs) == 0 || strings.TrimSpace(string(caPEMs)) == "" {
break
}
}
return pool, nil
}
// AddCACertificate verifies a Nebula CA certificate and adds it to the pool
// Only the first pem encoded object will be consumed, any remaining bytes are returned.
// Parsed certificates will be verified and must be a CA
func (ncp *NebulaCAPool) AddCACertificate(pemBytes []byte) ([]byte, error) {
c, pemBytes, err := UnmarshalNebulaCertificateFromPEM(pemBytes)
if err != nil {
return pemBytes, err
}
if !c.Details.IsCA {
return pemBytes, fmt.Errorf("provided certificate was not a CA; %s", c.Details.Name)
}
if !c.CheckSignature(c.Details.PublicKey) {
return pemBytes, fmt.Errorf("provided certificate was not self signed; %s", c.Details.Name)
}
if c.Expired(time.Now()) {
return pemBytes, fmt.Errorf("provided CA certificate is expired; %s", c.Details.Name)
}
sum, err := c.Sha256Sum()
if err != nil {
return pemBytes, fmt.Errorf("could not calculate shasum for provided CA; error: %s; %s", err, c.Details.Name)
}
ncp.CAs[sum] = c
return pemBytes, nil
}
// BlacklistFingerprint adds a cert fingerprint to the blacklist
func (ncp *NebulaCAPool) BlacklistFingerprint(f string) {
ncp.certBlacklist[f] = struct{}{}
}
// ResetCertBlacklist removes all previously blacklisted cert fingerprints
func (ncp *NebulaCAPool) ResetCertBlacklist() {
ncp.certBlacklist = make(map[string]struct{})
}
// IsBlacklisted returns true if the fingerprint fails to generate or has been explicitly blacklisted
func (ncp *NebulaCAPool) IsBlacklisted(c *NebulaCertificate) bool {
h, err := c.Sha256Sum()
if err != nil {
return true
}
if _, ok := ncp.certBlacklist[h]; ok {
return true
}
return false
}
// GetCAForCert attempts to return the signing certificate for the provided certificate.
// No signature validation is performed
func (ncp *NebulaCAPool) GetCAForCert(c *NebulaCertificate) (*NebulaCertificate, error) {
if c.Details.Issuer == "" {
return nil, fmt.Errorf("no issuer in certificate")
}
signer, ok := ncp.CAs[c.Details.Issuer]
if ok {
return signer, nil
}
return nil, fmt.Errorf("could not find ca for the certificate")
}
// GetFingerprints returns an array of trusted CA fingerprints
func (ncp *NebulaCAPool) GetFingerprints() []string {
fp := make([]string, len(ncp.CAs))
i := 0
for k := range ncp.CAs {
fp[i] = k
i++
}
return fp
}

cert/ca_pool.go Normal file

@@ -0,0 +1,296 @@
package cert
import (
"errors"
"fmt"
"net/netip"
"slices"
"strings"
"time"
)
type CAPool struct {
CAs map[string]*CachedCertificate
certBlocklist map[string]struct{}
}
// NewCAPool creates an empty CAPool
func NewCAPool() *CAPool {
ca := CAPool{
CAs: make(map[string]*CachedCertificate),
certBlocklist: make(map[string]struct{}),
}
return &ca
}
// NewCAPoolFromPEM will create a new CA pool from the provided
// input bytes, which must be a PEM-encoded set of nebula certificates.
// If the pool contains any expired certificates, an ErrExpired will be
// returned along with the pool. The caller must handle any such errors.
func NewCAPoolFromPEM(caPEMs []byte) (*CAPool, error) {
pool := NewCAPool()
var err error
var expired bool
for {
caPEMs, err = pool.AddCAFromPEM(caPEMs)
if errors.Is(err, ErrExpired) {
expired = true
err = nil
}
if err != nil {
return nil, err
}
if len(caPEMs) == 0 || strings.TrimSpace(string(caPEMs)) == "" {
break
}
}
if expired {
return pool, ErrExpired
}
return pool, nil
}
// AddCAFromPEM verifies a Nebula CA certificate and adds it to the pool.
// Only the first pem encoded object will be consumed, any remaining bytes are returned.
// Parsed certificates will be verified and must be a CA
func (ncp *CAPool) AddCAFromPEM(pemBytes []byte) ([]byte, error) {
c, pemBytes, err := UnmarshalCertificateFromPEM(pemBytes)
if err != nil {
return pemBytes, err
}
err = ncp.AddCA(c)
if err != nil {
return pemBytes, err
}
return pemBytes, nil
}
// AddCA verifies a Nebula CA certificate and adds it to the pool.
func (ncp *CAPool) AddCA(c Certificate) error {
if !c.IsCA() {
return fmt.Errorf("%s: %w", c.Name(), ErrNotCA)
}
if !c.CheckSignature(c.PublicKey()) {
return fmt.Errorf("%s: %w", c.Name(), ErrNotSelfSigned)
}
sum, err := c.Fingerprint()
if err != nil {
return fmt.Errorf("could not calculate fingerprint for provided CA; error: %w; %s", err, c.Name())
}
cc := &CachedCertificate{
Certificate: c,
Fingerprint: sum,
InvertedGroups: make(map[string]struct{}),
}
for _, g := range c.Groups() {
cc.InvertedGroups[g] = struct{}{}
}
ncp.CAs[sum] = cc
if c.Expired(time.Now()) {
return fmt.Errorf("%s: %w", c.Name(), ErrExpired)
}
return nil
}
// BlocklistFingerprint adds a cert fingerprint to the blocklist
func (ncp *CAPool) BlocklistFingerprint(f string) {
ncp.certBlocklist[f] = struct{}{}
}
// ResetCertBlocklist removes all previously blocklisted cert fingerprints
func (ncp *CAPool) ResetCertBlocklist() {
ncp.certBlocklist = make(map[string]struct{})
}
// IsBlocklisted tests the provided fingerprint against the pool's blocklist.
// Returns true if the fingerprint is blocked.
func (ncp *CAPool) IsBlocklisted(fingerprint string) bool {
if _, ok := ncp.certBlocklist[fingerprint]; ok {
return true
}
return false
}
// VerifyCertificate verifies the certificate is valid and is signed by a trusted CA in the pool.
// If the certificate is valid then the returned CachedCertificate can be used in subsequent verification attempts
// to increase performance.
func (ncp *CAPool) VerifyCertificate(now time.Time, c Certificate) (*CachedCertificate, error) {
if c == nil {
return nil, fmt.Errorf("no certificate")
}
fp, err := c.Fingerprint()
if err != nil {
return nil, fmt.Errorf("could not calculate fingerprint to verify: %w", err)
}
signer, err := ncp.verify(c, now, fp, "")
if err != nil {
return nil, err
}
cc := CachedCertificate{
Certificate: c,
InvertedGroups: make(map[string]struct{}),
Fingerprint: fp,
signerFingerprint: signer.Fingerprint,
}
for _, g := range c.Groups() {
cc.InvertedGroups[g] = struct{}{}
}
return &cc, nil
}
// VerifyCachedCertificate is the same as VerifyCertificate, except that it operates on a pre-verified structure and
// is a cheaper operation to perform as a result.
func (ncp *CAPool) VerifyCachedCertificate(now time.Time, c *CachedCertificate) error {
_, err := ncp.verify(c.Certificate, now, c.Fingerprint, c.signerFingerprint)
return err
}
func (ncp *CAPool) verify(c Certificate, now time.Time, certFp string, signerFp string) (*CachedCertificate, error) {
if ncp.IsBlocklisted(certFp) {
return nil, ErrBlockListed
}
signer, err := ncp.GetCAForCert(c)
if err != nil {
return nil, err
}
if signer.Certificate.Expired(now) {
return nil, ErrRootExpired
}
if c.Expired(now) {
return nil, ErrExpired
}
// If we are checking a cached certificate then we can bail early here
// Either the root is no longer trusted or everything is fine
if len(signerFp) > 0 {
if signerFp != signer.Fingerprint {
return nil, ErrFingerprintMismatch
}
return signer, nil
}
if !c.CheckSignature(signer.Certificate.PublicKey()) {
return nil, ErrSignatureMismatch
}
err = CheckCAConstraints(signer.Certificate, c)
if err != nil {
return nil, err
}
return signer, nil
}
// GetCAForCert attempts to return the signing certificate for the provided certificate.
// No signature validation is performed
func (ncp *CAPool) GetCAForCert(c Certificate) (*CachedCertificate, error) {
issuer := c.Issuer()
if issuer == "" {
return nil, fmt.Errorf("no issuer in certificate")
}
signer, ok := ncp.CAs[issuer]
if ok {
return signer, nil
}
return nil, ErrCaNotFound
}
// GetFingerprints returns an array of trusted CA fingerprints
func (ncp *CAPool) GetFingerprints() []string {
fp := make([]string, len(ncp.CAs))
i := 0
for k := range ncp.CAs {
fp[i] = k
i++
}
return fp
}
// CheckCAConstraints returns an error if the sub certificate violates constraints present in the signer certificate.
func CheckCAConstraints(signer Certificate, sub Certificate) error {
return checkCAConstraints(signer, sub.NotBefore(), sub.NotAfter(), sub.Groups(), sub.Networks(), sub.UnsafeNetworks())
}
// checkCAConstraints is a very generic function allowing both Certificates and TBSCertificates to be tested.
func checkCAConstraints(signer Certificate, notBefore, notAfter time.Time, groups []string, networks, unsafeNetworks []netip.Prefix) error {
// Make sure this cert isn't valid after the root
if notAfter.After(signer.NotAfter()) {
return fmt.Errorf("certificate expires after signing certificate")
}
// Make sure this cert wasn't valid before the root
if notBefore.Before(signer.NotBefore()) {
return fmt.Errorf("certificate is valid before the signing certificate")
}
// If the signer has a limited set of groups make sure the cert only contains a subset
signerGroups := signer.Groups()
if len(signerGroups) > 0 {
for _, g := range groups {
if !slices.Contains(signerGroups, g) {
return fmt.Errorf("certificate contained a group not present on the signing ca: %s", g)
}
}
}
// If the signer has a limited set of ip ranges to issue from make sure the cert only contains a subset
signingNetworks := signer.Networks()
if len(signingNetworks) > 0 {
for _, certNetwork := range networks {
found := false
for _, signingNetwork := range signingNetworks {
if signingNetwork.Contains(certNetwork.Addr()) && signingNetwork.Bits() <= certNetwork.Bits() {
found = true
break
}
}
if !found {
return fmt.Errorf("certificate contained a network assignment outside the limitations of the signing ca: %s", certNetwork.String())
}
}
}
// If the signer has a limited set of subnet ranges to issue from make sure the cert only contains a subset
signingUnsafeNetworks := signer.UnsafeNetworks()
if len(signingUnsafeNetworks) > 0 {
for _, certUnsafeNetwork := range unsafeNetworks {
found := false
for _, caNetwork := range signingUnsafeNetworks {
if caNetwork.Contains(certUnsafeNetwork.Addr()) && caNetwork.Bits() <= certUnsafeNetwork.Bits() {
found = true
break
}
}
if !found {
return fmt.Errorf("certificate contained an unsafe network assignment outside the limitations of the signing ca: %s", certUnsafeNetwork.String())
}
}
}
return nil
}
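Taken together, the flow above is: build a pool from PEM (tolerating, but surfacing, expired CAs), then verify host certificates against it. A hedged end-to-end sketch, with placeholder file paths and abbreviated error handling:

```go
package main

import (
	"errors"
	"fmt"
	"os"
	"time"

	"github.com/slackhq/nebula/cert"
)

func main() {
	caPEM, err := os.ReadFile("ca.crt") // placeholder path
	if err != nil {
		panic(err)
	}

	pool, err := cert.NewCAPoolFromPEM(caPEM)
	if errors.Is(err, cert.ErrExpired) {
		// The pool is still returned; the caller decides whether an expired CA is fatal.
		fmt.Println("warning: CA pool contains an expired certificate")
	} else if err != nil {
		panic(err)
	}

	hostPEM, err := os.ReadFile("host.crt") // placeholder path
	if err != nil {
		panic(err)
	}
	hostCert, _, err := cert.UnmarshalCertificateFromPEM(hostPEM)
	if err != nil {
		panic(err)
	}

	// VerifyCertificate checks the blocklist, expiry, signature, and CA constraints.
	cached, err := pool.VerifyCertificate(time.Now(), hostCert)
	if err != nil {
		panic(err)
	}
	fmt.Println("verified:", cached.Certificate.Name(), "fingerprint:", cached.Fingerprint)
}
```

The returned CachedCertificate can then be re-checked with VerifyCachedCertificate on later passes to skip the signature and constraint work.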

cert/ca_pool_test.go Normal file

@@ -0,0 +1,560 @@
package cert
import (
"net/netip"
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestNewCAPoolFromBytes(t *testing.T) {
noNewLines := `
# Current provisional, Remove once everything moves over to the real root.
-----BEGIN NEBULA CERTIFICATE-----
Cj4KDm5lYnVsYSByb290IGNhKM0cMM24zPCvBzogV24YEw5YiqeI/oYo8XXFsoo+
PBmiOafNJhLacf9rsspAARJAz9OAnh8TKAUKix1kKVMyQU4iM3LsFfZRf6ODWXIf
2qWMpB6fpd3PSoVYziPoOt2bIHIFLlgRLPJz3I3xBEdBCQ==
-----END NEBULA CERTIFICATE-----
# root-ca01
-----BEGIN NEBULA CERTIFICATE-----
CkEKEW5lYnVsYSByb290IGNhIDAxKM0cMM24zPCvBzogPzbWTxt8ZgXPQEwup7Br
BrtIt1O0q5AuTRT3+t2x1VJAARJAZ+2ib23qBXjdy49oU1YysrwuKkWWKrtJ7Jye
rFBQpDXikOukhQD/mfkloFwJ+Yjsfru7IpTN4ZfjXL+kN/2sCA==
-----END NEBULA CERTIFICATE-----
`
withNewLines := `
# Current provisional, Remove once everything moves over to the real root.
-----BEGIN NEBULA CERTIFICATE-----
Cj4KDm5lYnVsYSByb290IGNhKM0cMM24zPCvBzogV24YEw5YiqeI/oYo8XXFsoo+
PBmiOafNJhLacf9rsspAARJAz9OAnh8TKAUKix1kKVMyQU4iM3LsFfZRf6ODWXIf
2qWMpB6fpd3PSoVYziPoOt2bIHIFLlgRLPJz3I3xBEdBCQ==
-----END NEBULA CERTIFICATE-----
# root-ca01
-----BEGIN NEBULA CERTIFICATE-----
CkEKEW5lYnVsYSByb290IGNhIDAxKM0cMM24zPCvBzogPzbWTxt8ZgXPQEwup7Br
BrtIt1O0q5AuTRT3+t2x1VJAARJAZ+2ib23qBXjdy49oU1YysrwuKkWWKrtJ7Jye
rFBQpDXikOukhQD/mfkloFwJ+Yjsfru7IpTN4ZfjXL+kN/2sCA==
-----END NEBULA CERTIFICATE-----
`
expired := `
# expired certificate
-----BEGIN NEBULA CERTIFICATE-----
CjMKB2V4cGlyZWQozRwwzRw6ICJSG94CqX8wn5I65Pwn25V6HftVfWeIySVtp2DA
7TY/QAESQMaAk5iJT5EnQwK524ZaaHGEJLUqqbh5yyOHhboIGiVTWkFeH3HccTW8
Tq5a8AyWDQdfXbtEZ1FwabeHfH5Asw0=
-----END NEBULA CERTIFICATE-----
`
p256 := `
# p256 certificate
-----BEGIN NEBULA CERTIFICATE-----
CmQKEG5lYnVsYSBQMjU2IHRlc3QozRwwzbjM8K8HOkEEdrmmg40zQp44AkMq6DZp
k+coOv04r+zh33ISyhbsafnYduN17p2eD7CmHvHuerguXD9f32gcxo/KsFCKEjMe
+0ABoAYBEkcwRQIgVoTg38L7uWku9xQgsr06kxZ/viQLOO/w1Qj1vFUEnhcCIQCq
75SjTiV92kv/1GcbT3wWpAZQQDBiUHVMVmh1822szA==
-----END NEBULA CERTIFICATE-----
`
rootCA := certificateV1{
details: detailsV1{
name: "nebula root ca",
},
}
rootCA01 := certificateV1{
details: detailsV1{
name: "nebula root ca 01",
},
}
rootCAP256 := certificateV1{
details: detailsV1{
name: "nebula P256 test",
},
}
p, err := NewCAPoolFromPEM([]byte(noNewLines))
require.NoError(t, err)
assert.Equal(t, p.CAs["ce4e6c7a596996eb0d82a8875f0f0137a4b53ce22d2421c9fd7150e7a26f6300"].Certificate.Name(), rootCA.details.name)
assert.Equal(t, p.CAs["04c585fcd9a49b276df956a22b7ebea3bf23f1fca5a17c0b56ce2e626631969e"].Certificate.Name(), rootCA01.details.name)
pp, err := NewCAPoolFromPEM([]byte(withNewLines))
require.NoError(t, err)
assert.Equal(t, pp.CAs["ce4e6c7a596996eb0d82a8875f0f0137a4b53ce22d2421c9fd7150e7a26f6300"].Certificate.Name(), rootCA.details.name)
assert.Equal(t, pp.CAs["04c585fcd9a49b276df956a22b7ebea3bf23f1fca5a17c0b56ce2e626631969e"].Certificate.Name(), rootCA01.details.name)
// expired cert, no valid certs
ppp, err := NewCAPoolFromPEM([]byte(expired))
assert.Equal(t, ErrExpired, err)
assert.Equal(t, "expired", ppp.CAs["c39b35a0e8f246203fe4f32b9aa8bfd155f1ae6a6be9d78370641e43397f48f5"].Certificate.Name())
// expired cert, with valid certs
pppp, err := NewCAPoolFromPEM(append([]byte(expired), noNewLines...))
assert.Equal(t, ErrExpired, err)
assert.Equal(t, pppp.CAs["ce4e6c7a596996eb0d82a8875f0f0137a4b53ce22d2421c9fd7150e7a26f6300"].Certificate.Name(), rootCA.details.name)
assert.Equal(t, pppp.CAs["04c585fcd9a49b276df956a22b7ebea3bf23f1fca5a17c0b56ce2e626631969e"].Certificate.Name(), rootCA01.details.name)
assert.Equal(t, "expired", pppp.CAs["c39b35a0e8f246203fe4f32b9aa8bfd155f1ae6a6be9d78370641e43397f48f5"].Certificate.Name())
assert.Len(t, pppp.CAs, 3)
ppppp, err := NewCAPoolFromPEM([]byte(p256))
require.NoError(t, err)
assert.Equal(t, ppppp.CAs["552bf7d99bec1fc775a0e4c324bf6d8f789b3078f1919c7960d2e5e0c351ee97"].Certificate.Name(), rootCAP256.details.name)
assert.Len(t, ppppp.CAs, 1)
}
func TestCertificateV1_Verify(t *testing.T) {
ca, _, caKey, _ := NewTestCaCert(Version1, Curve_CURVE25519, time.Now(), time.Now().Add(10*time.Minute), nil, nil, nil)
c, _, _, _ := NewTestCert(Version1, Curve_CURVE25519, ca, caKey, "test cert", time.Now(), time.Now().Add(5*time.Minute), nil, nil, nil)
caPool := NewCAPool()
require.NoError(t, caPool.AddCA(ca))
f, err := c.Fingerprint()
require.NoError(t, err)
caPool.BlocklistFingerprint(f)
_, err = caPool.VerifyCertificate(time.Now(), c)
require.EqualError(t, err, "certificate is in the block list")
caPool.ResetCertBlocklist()
_, err = caPool.VerifyCertificate(time.Now(), c)
require.NoError(t, err)
_, err = caPool.VerifyCertificate(time.Now().Add(time.Hour*1000), c)
require.EqualError(t, err, "root certificate is expired")
assert.PanicsWithError(t, "certificate is valid before the signing certificate", func() {
NewTestCert(Version1, Curve_CURVE25519, ca, caKey, "test cert2", time.Time{}, time.Time{}, nil, nil, nil)
})
// Test group assertion
ca, _, caKey, _ = NewTestCaCert(Version1, Curve_CURVE25519, time.Now(), time.Now().Add(10*time.Minute), nil, nil, []string{"test1", "test2"})
caPem, err := ca.MarshalPEM()
require.NoError(t, err)
caPool = NewCAPool()
b, err := caPool.AddCAFromPEM(caPem)
require.NoError(t, err)
assert.Empty(t, b)
assert.PanicsWithError(t, "certificate contained a group not present on the signing ca: bad", func() {
NewTestCert(Version1, Curve_CURVE25519, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), nil, nil, []string{"test1", "bad"})
})
c, _, _, _ = NewTestCert(Version1, Curve_CURVE25519, ca, caKey, "test2", time.Now(), time.Now().Add(5*time.Minute), nil, nil, []string{"test1"})
require.NoError(t, err)
_, err = caPool.VerifyCertificate(time.Now(), c)
require.NoError(t, err)
}
func TestCertificateV1_VerifyP256(t *testing.T) {
ca, _, caKey, _ := NewTestCaCert(Version1, Curve_P256, time.Now(), time.Now().Add(10*time.Minute), nil, nil, nil)
c, _, _, _ := NewTestCert(Version1, Curve_P256, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), nil, nil, nil)
caPool := NewCAPool()
require.NoError(t, caPool.AddCA(ca))
f, err := c.Fingerprint()
require.NoError(t, err)
caPool.BlocklistFingerprint(f)
_, err = caPool.VerifyCertificate(time.Now(), c)
require.EqualError(t, err, "certificate is in the block list")
caPool.ResetCertBlocklist()
_, err = caPool.VerifyCertificate(time.Now(), c)
require.NoError(t, err)
_, err = caPool.VerifyCertificate(time.Now().Add(time.Hour*1000), c)
require.EqualError(t, err, "root certificate is expired")
assert.PanicsWithError(t, "certificate is valid before the signing certificate", func() {
NewTestCert(Version1, Curve_P256, ca, caKey, "test", time.Time{}, time.Time{}, nil, nil, nil)
})
// Test group assertion
ca, _, caKey, _ = NewTestCaCert(Version1, Curve_P256, time.Now(), time.Now().Add(10*time.Minute), nil, nil, []string{"test1", "test2"})
caPem, err := ca.MarshalPEM()
require.NoError(t, err)
caPool = NewCAPool()
b, err := caPool.AddCAFromPEM(caPem)
require.NoError(t, err)
assert.Empty(t, b)
assert.PanicsWithError(t, "certificate contained a group not present on the signing ca: bad", func() {
NewTestCert(Version1, Curve_P256, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), nil, nil, []string{"test1", "bad"})
})
c, _, _, _ = NewTestCert(Version1, Curve_P256, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), nil, nil, []string{"test1"})
_, err = caPool.VerifyCertificate(time.Now(), c)
require.NoError(t, err)
}
func TestCertificateV1_Verify_IPs(t *testing.T) {
caIp1 := mustParsePrefixUnmapped("10.0.0.0/16")
caIp2 := mustParsePrefixUnmapped("192.168.0.0/24")
ca, _, caKey, _ := NewTestCaCert(Version1, Curve_CURVE25519, time.Now(), time.Now().Add(10*time.Minute), []netip.Prefix{caIp1, caIp2}, nil, []string{"test"})
caPem, err := ca.MarshalPEM()
require.NoError(t, err)
caPool := NewCAPool()
b, err := caPool.AddCAFromPEM(caPem)
require.NoError(t, err)
assert.Empty(t, b)
// ip is outside the network
cIp1 := mustParsePrefixUnmapped("10.1.0.0/24")
cIp2 := mustParsePrefixUnmapped("192.168.0.1/16")
assert.PanicsWithError(t, "certificate contained a network assignment outside the limitations of the signing ca: 10.1.0.0/24", func() {
NewTestCert(Version1, Curve_CURVE25519, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), []netip.Prefix{cIp1, cIp2}, nil, []string{"test"})
})
// ip is outside the network reversed order of above
cIp1 = mustParsePrefixUnmapped("192.168.0.1/24")
cIp2 = mustParsePrefixUnmapped("10.1.0.0/24")
assert.PanicsWithError(t, "certificate contained a network assignment outside the limitations of the signing ca: 10.1.0.0/24", func() {
NewTestCert(Version1, Curve_CURVE25519, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), []netip.Prefix{cIp1, cIp2}, nil, []string{"test"})
})
// ip is within the network but mask is outside
cIp1 = mustParsePrefixUnmapped("10.0.1.0/15")
cIp2 = mustParsePrefixUnmapped("192.168.0.1/24")
assert.PanicsWithError(t, "certificate contained a network assignment outside the limitations of the signing ca: 10.0.1.0/15", func() {
NewTestCert(Version1, Curve_CURVE25519, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), []netip.Prefix{cIp1, cIp2}, nil, []string{"test"})
})
// ip is within the network but mask is outside reversed order of above
cIp1 = mustParsePrefixUnmapped("192.168.0.1/24")
cIp2 = mustParsePrefixUnmapped("10.0.1.0/15")
assert.PanicsWithError(t, "certificate contained a network assignment outside the limitations of the signing ca: 10.0.1.0/15", func() {
NewTestCert(Version1, Curve_CURVE25519, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), []netip.Prefix{cIp1, cIp2}, nil, []string{"test"})
})
// ip and mask are within the network
cIp1 = mustParsePrefixUnmapped("10.0.1.0/16")
cIp2 = mustParsePrefixUnmapped("192.168.0.1/25")
c, _, _, _ := NewTestCert(Version1, Curve_CURVE25519, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), []netip.Prefix{cIp1, cIp2}, nil, []string{"test"})
_, err = caPool.VerifyCertificate(time.Now(), c)
require.NoError(t, err)
// Exact matches
c, _, _, _ = NewTestCert(Version1, Curve_CURVE25519, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), []netip.Prefix{caIp1, caIp2}, nil, []string{"test"})
require.NoError(t, err)
_, err = caPool.VerifyCertificate(time.Now(), c)
require.NoError(t, err)
// Exact matches reversed
c, _, _, _ = NewTestCert(Version1, Curve_CURVE25519, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), []netip.Prefix{caIp2, caIp1}, nil, []string{"test"})
require.NoError(t, err)
_, err = caPool.VerifyCertificate(time.Now(), c)
require.NoError(t, err)
// Exact matches reversed with just 1
c, _, _, _ = NewTestCert(Version1, Curve_CURVE25519, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), []netip.Prefix{caIp1}, nil, []string{"test"})
require.NoError(t, err)
_, err = caPool.VerifyCertificate(time.Now(), c)
require.NoError(t, err)
}
func TestCertificateV1_Verify_Subnets(t *testing.T) {
caIp1 := mustParsePrefixUnmapped("10.0.0.0/16")
caIp2 := mustParsePrefixUnmapped("192.168.0.0/24")
ca, _, caKey, _ := NewTestCaCert(Version1, Curve_CURVE25519, time.Now(), time.Now().Add(10*time.Minute), nil, []netip.Prefix{caIp1, caIp2}, []string{"test"})
caPem, err := ca.MarshalPEM()
require.NoError(t, err)
caPool := NewCAPool()
b, err := caPool.AddCAFromPEM(caPem)
require.NoError(t, err)
assert.Empty(t, b)
// ip is outside the network
cIp1 := mustParsePrefixUnmapped("10.1.0.0/24")
cIp2 := mustParsePrefixUnmapped("192.168.0.1/16")
assert.PanicsWithError(t, "certificate contained an unsafe network assignment outside the limitations of the signing ca: 10.1.0.0/24", func() {
NewTestCert(Version1, Curve_CURVE25519, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), nil, []netip.Prefix{cIp1, cIp2}, []string{"test"})
})
// ip is outside the network reversed order of above
cIp1 = mustParsePrefixUnmapped("192.168.0.1/24")
cIp2 = mustParsePrefixUnmapped("10.1.0.0/24")
assert.PanicsWithError(t, "certificate contained an unsafe network assignment outside the limitations of the signing ca: 10.1.0.0/24", func() {
NewTestCert(Version1, Curve_CURVE25519, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), nil, []netip.Prefix{cIp1, cIp2}, []string{"test"})
})
// ip is within the network but mask is outside
cIp1 = mustParsePrefixUnmapped("10.0.1.0/15")
cIp2 = mustParsePrefixUnmapped("192.168.0.1/24")
assert.PanicsWithError(t, "certificate contained an unsafe network assignment outside the limitations of the signing ca: 10.0.1.0/15", func() {
NewTestCert(Version1, Curve_CURVE25519, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), nil, []netip.Prefix{cIp1, cIp2}, []string{"test"})
})
// ip is within the network but mask is outside reversed order of above
cIp1 = mustParsePrefixUnmapped("192.168.0.1/24")
cIp2 = mustParsePrefixUnmapped("10.0.1.0/15")
assert.PanicsWithError(t, "certificate contained an unsafe network assignment outside the limitations of the signing ca: 10.0.1.0/15", func() {
NewTestCert(Version1, Curve_CURVE25519, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), nil, []netip.Prefix{cIp1, cIp2}, []string{"test"})
})
// ip and mask are within the network
cIp1 = mustParsePrefixUnmapped("10.0.1.0/16")
cIp2 = mustParsePrefixUnmapped("192.168.0.1/25")
c, _, _, _ := NewTestCert(Version1, Curve_CURVE25519, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), nil, []netip.Prefix{cIp1, cIp2}, []string{"test"})
require.NoError(t, err)
_, err = caPool.VerifyCertificate(time.Now(), c)
require.NoError(t, err)
// Exact matches
c, _, _, _ = NewTestCert(Version1, Curve_CURVE25519, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), nil, []netip.Prefix{caIp1, caIp2}, []string{"test"})
require.NoError(t, err)
_, err = caPool.VerifyCertificate(time.Now(), c)
require.NoError(t, err)
// Exact matches reversed
c, _, _, _ = NewTestCert(Version1, Curve_CURVE25519, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), nil, []netip.Prefix{caIp2, caIp1}, []string{"test"})
require.NoError(t, err)
_, err = caPool.VerifyCertificate(time.Now(), c)
require.NoError(t, err)
// Exact matches reversed with just 1
c, _, _, _ = NewTestCert(Version1, Curve_CURVE25519, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), nil, []netip.Prefix{caIp1}, []string{"test"})
require.NoError(t, err)
_, err = caPool.VerifyCertificate(time.Now(), c)
require.NoError(t, err)
}
func TestCertificateV2_Verify(t *testing.T) {
ca, _, caKey, _ := NewTestCaCert(Version2, Curve_CURVE25519, time.Now(), time.Now().Add(10*time.Minute), nil, nil, nil)
c, _, _, _ := NewTestCert(Version2, Curve_CURVE25519, ca, caKey, "test cert", time.Now(), time.Now().Add(5*time.Minute), nil, nil, nil)
caPool := NewCAPool()
require.NoError(t, caPool.AddCA(ca))
f, err := c.Fingerprint()
require.NoError(t, err)
caPool.BlocklistFingerprint(f)
_, err = caPool.VerifyCertificate(time.Now(), c)
require.EqualError(t, err, "certificate is in the block list")
caPool.ResetCertBlocklist()
_, err = caPool.VerifyCertificate(time.Now(), c)
require.NoError(t, err)
_, err = caPool.VerifyCertificate(time.Now().Add(time.Hour*1000), c)
require.EqualError(t, err, "root certificate is expired")
assert.PanicsWithError(t, "certificate is valid before the signing certificate", func() {
NewTestCert(Version2, Curve_CURVE25519, ca, caKey, "test cert2", time.Time{}, time.Time{}, nil, nil, nil)
})
// Test group assertion
ca, _, caKey, _ = NewTestCaCert(Version2, Curve_CURVE25519, time.Now(), time.Now().Add(10*time.Minute), nil, nil, []string{"test1", "test2"})
caPem, err := ca.MarshalPEM()
require.NoError(t, err)
caPool = NewCAPool()
b, err := caPool.AddCAFromPEM(caPem)
require.NoError(t, err)
assert.Empty(t, b)
assert.PanicsWithError(t, "certificate contained a group not present on the signing ca: bad", func() {
NewTestCert(Version2, Curve_CURVE25519, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), nil, nil, []string{"test1", "bad"})
})
c, _, _, _ = NewTestCert(Version2, Curve_CURVE25519, ca, caKey, "test2", time.Now(), time.Now().Add(5*time.Minute), nil, nil, []string{"test1"})
require.NoError(t, err)
_, err = caPool.VerifyCertificate(time.Now(), c)
require.NoError(t, err)
}
func TestCertificateV2_VerifyP256(t *testing.T) {
ca, _, caKey, _ := NewTestCaCert(Version2, Curve_P256, time.Now(), time.Now().Add(10*time.Minute), nil, nil, nil)
c, _, _, _ := NewTestCert(Version2, Curve_P256, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), nil, nil, nil)
caPool := NewCAPool()
require.NoError(t, caPool.AddCA(ca))
f, err := c.Fingerprint()
require.NoError(t, err)
caPool.BlocklistFingerprint(f)
_, err = caPool.VerifyCertificate(time.Now(), c)
require.EqualError(t, err, "certificate is in the block list")
caPool.ResetCertBlocklist()
_, err = caPool.VerifyCertificate(time.Now(), c)
require.NoError(t, err)
_, err = caPool.VerifyCertificate(time.Now().Add(time.Hour*1000), c)
require.EqualError(t, err, "root certificate is expired")
assert.PanicsWithError(t, "certificate is valid before the signing certificate", func() {
NewTestCert(Version2, Curve_P256, ca, caKey, "test", time.Time{}, time.Time{}, nil, nil, nil)
})
// Test group assertion
ca, _, caKey, _ = NewTestCaCert(Version2, Curve_P256, time.Now(), time.Now().Add(10*time.Minute), nil, nil, []string{"test1", "test2"})
caPem, err := ca.MarshalPEM()
require.NoError(t, err)
caPool = NewCAPool()
b, err := caPool.AddCAFromPEM(caPem)
require.NoError(t, err)
assert.Empty(t, b)
assert.PanicsWithError(t, "certificate contained a group not present on the signing ca: bad", func() {
NewTestCert(Version2, Curve_P256, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), nil, nil, []string{"test1", "bad"})
})
c, _, _, _ = NewTestCert(Version2, Curve_P256, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), nil, nil, []string{"test1"})
_, err = caPool.VerifyCertificate(time.Now(), c)
require.NoError(t, err)
}
func TestCertificateV2_Verify_IPs(t *testing.T) {
caIp1 := mustParsePrefixUnmapped("10.0.0.0/16")
caIp2 := mustParsePrefixUnmapped("192.168.0.0/24")
ca, _, caKey, _ := NewTestCaCert(Version2, Curve_CURVE25519, time.Now(), time.Now().Add(10*time.Minute), []netip.Prefix{caIp1, caIp2}, nil, []string{"test"})
caPem, err := ca.MarshalPEM()
require.NoError(t, err)
caPool := NewCAPool()
b, err := caPool.AddCAFromPEM(caPem)
require.NoError(t, err)
assert.Empty(t, b)
// ip is outside the network
cIp1 := mustParsePrefixUnmapped("10.1.0.0/24")
cIp2 := mustParsePrefixUnmapped("192.168.0.1/16")
assert.PanicsWithError(t, "certificate contained a network assignment outside the limitations of the signing ca: 10.1.0.0/24", func() {
NewTestCert(Version2, Curve_CURVE25519, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), []netip.Prefix{cIp1, cIp2}, nil, []string{"test"})
})
// ip is outside the network reversed order of above
cIp1 = mustParsePrefixUnmapped("192.168.0.1/24")
cIp2 = mustParsePrefixUnmapped("10.1.0.0/24")
assert.PanicsWithError(t, "certificate contained a network assignment outside the limitations of the signing ca: 10.1.0.0/24", func() {
NewTestCert(Version2, Curve_CURVE25519, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), []netip.Prefix{cIp1, cIp2}, nil, []string{"test"})
})
// ip is within the network but mask is outside
cIp1 = mustParsePrefixUnmapped("10.0.1.0/15")
cIp2 = mustParsePrefixUnmapped("192.168.0.1/24")
assert.PanicsWithError(t, "certificate contained a network assignment outside the limitations of the signing ca: 10.0.1.0/15", func() {
NewTestCert(Version2, Curve_CURVE25519, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), []netip.Prefix{cIp1, cIp2}, nil, []string{"test"})
})
// ip is within the network but mask is outside reversed order of above
cIp1 = mustParsePrefixUnmapped("192.168.0.1/24")
cIp2 = mustParsePrefixUnmapped("10.0.1.0/15")
assert.PanicsWithError(t, "certificate contained a network assignment outside the limitations of the signing ca: 10.0.1.0/15", func() {
NewTestCert(Version2, Curve_CURVE25519, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), []netip.Prefix{cIp1, cIp2}, nil, []string{"test"})
})
// ip and mask are within the network
cIp1 = mustParsePrefixUnmapped("10.0.1.0/16")
cIp2 = mustParsePrefixUnmapped("192.168.0.1/25")
c, _, _, _ := NewTestCert(Version2, Curve_CURVE25519, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), []netip.Prefix{cIp1, cIp2}, nil, []string{"test"})
_, err = caPool.VerifyCertificate(time.Now(), c)
require.NoError(t, err)
// Exact matches
c, _, _, _ = NewTestCert(Version2, Curve_CURVE25519, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), []netip.Prefix{caIp1, caIp2}, nil, []string{"test"})
require.NoError(t, err)
_, err = caPool.VerifyCertificate(time.Now(), c)
require.NoError(t, err)
// Exact matches reversed
c, _, _, _ = NewTestCert(Version2, Curve_CURVE25519, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), []netip.Prefix{caIp2, caIp1}, nil, []string{"test"})
require.NoError(t, err)
_, err = caPool.VerifyCertificate(time.Now(), c)
require.NoError(t, err)
// Exact matches reversed with just 1
c, _, _, _ = NewTestCert(Version2, Curve_CURVE25519, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), []netip.Prefix{caIp1}, nil, []string{"test"})
require.NoError(t, err)
_, err = caPool.VerifyCertificate(time.Now(), c)
require.NoError(t, err)
}
func TestCertificateV2_Verify_Subnets(t *testing.T) {
caIp1 := mustParsePrefixUnmapped("10.0.0.0/16")
caIp2 := mustParsePrefixUnmapped("192.168.0.0/24")
ca, _, caKey, _ := NewTestCaCert(Version2, Curve_CURVE25519, time.Now(), time.Now().Add(10*time.Minute), nil, []netip.Prefix{caIp1, caIp2}, []string{"test"})
caPem, err := ca.MarshalPEM()
require.NoError(t, err)
caPool := NewCAPool()
b, err := caPool.AddCAFromPEM(caPem)
require.NoError(t, err)
assert.Empty(t, b)
// ip is outside the network
cIp1 := mustParsePrefixUnmapped("10.1.0.0/24")
cIp2 := mustParsePrefixUnmapped("192.168.0.1/16")
assert.PanicsWithError(t, "certificate contained an unsafe network assignment outside the limitations of the signing ca: 10.1.0.0/24", func() {
NewTestCert(Version2, Curve_CURVE25519, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), nil, []netip.Prefix{cIp1, cIp2}, []string{"test"})
})
// ip is outside the network reversed order of above
cIp1 = mustParsePrefixUnmapped("192.168.0.1/24")
cIp2 = mustParsePrefixUnmapped("10.1.0.0/24")
assert.PanicsWithError(t, "certificate contained an unsafe network assignment outside the limitations of the signing ca: 10.1.0.0/24", func() {
NewTestCert(Version2, Curve_CURVE25519, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), nil, []netip.Prefix{cIp1, cIp2}, []string{"test"})
})
// ip is within the network but mask is outside
cIp1 = mustParsePrefixUnmapped("10.0.1.0/15")
cIp2 = mustParsePrefixUnmapped("192.168.0.1/24")
assert.PanicsWithError(t, "certificate contained an unsafe network assignment outside the limitations of the signing ca: 10.0.1.0/15", func() {
NewTestCert(Version2, Curve_CURVE25519, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), nil, []netip.Prefix{cIp1, cIp2}, []string{"test"})
})
// ip is within the network but mask is outside reversed order of above
cIp1 = mustParsePrefixUnmapped("192.168.0.1/24")
cIp2 = mustParsePrefixUnmapped("10.0.1.0/15")
assert.PanicsWithError(t, "certificate contained an unsafe network assignment outside the limitations of the signing ca: 10.0.1.0/15", func() {
NewTestCert(Version2, Curve_CURVE25519, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), nil, []netip.Prefix{cIp1, cIp2}, []string{"test"})
})
// ip and mask are within the network
cIp1 = mustParsePrefixUnmapped("10.0.1.0/16")
cIp2 = mustParsePrefixUnmapped("192.168.0.1/25")
c, _, _, _ := NewTestCert(Version2, Curve_CURVE25519, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), nil, []netip.Prefix{cIp1, cIp2}, []string{"test"})
require.NoError(t, err)
_, err = caPool.VerifyCertificate(time.Now(), c)
require.NoError(t, err)
// Exact matches
c, _, _, _ = NewTestCert(Version2, Curve_CURVE25519, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), nil, []netip.Prefix{caIp1, caIp2}, []string{"test"})
require.NoError(t, err)
_, err = caPool.VerifyCertificate(time.Now(), c)
require.NoError(t, err)
// Exact matches reversed
c, _, _, _ = NewTestCert(Version2, Curve_CURVE25519, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), nil, []netip.Prefix{caIp2, caIp1}, []string{"test"})
require.NoError(t, err)
_, err = caPool.VerifyCertificate(time.Now(), c)
require.NoError(t, err)
// Exact matches reversed with just 1
c, _, _, _ = NewTestCert(Version2, Curve_CURVE25519, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), nil, []netip.Prefix{caIp1}, []string{"test"})
require.NoError(t, err)
_, err = caPool.VerifyCertificate(time.Now(), c)
require.NoError(t, err)
}


@@ -1,531 +1,151 @@
package cert
import (
"crypto"
"crypto/rand"
"crypto/sha256"
"encoding/binary"
"encoding/hex"
"encoding/pem"
"fmt"
"net"
"net/netip"
"time"
"bytes"
"encoding/json"
"github.com/golang/protobuf/proto"
"golang.org/x/crypto/curve25519"
"golang.org/x/crypto/ed25519"
)
const publicKeyLen = 32
type Version uint8
const (
CertBanner = "NEBULA CERTIFICATE"
X25519PrivateKeyBanner = "NEBULA X25519 PRIVATE KEY"
X25519PublicKeyBanner = "NEBULA X25519 PUBLIC KEY"
Ed25519PrivateKeyBanner = "NEBULA ED25519 PRIVATE KEY"
Ed25519PublicKeyBanner = "NEBULA ED25519 PUBLIC KEY"
VersionPre1 Version = 0
Version1 Version = 1
Version2 Version = 2
)
type NebulaCertificate struct {
Details NebulaCertificateDetails
Signature []byte
type Certificate interface {
// Version defines the underlying certificate structure and wire protocol version
// Version1 certificates are ipv4 only and use protobuf serialization
// Version2 certificates are ipv4 or ipv6 and use asn.1 serialization
Version() Version
// Name is the human-readable name that identifies this certificate.
Name() string
// Networks is a list of ip addresses and network sizes assigned to this certificate.
// If IsCA is true then certificates signed by this CA can only have ip addresses and
// networks that are contained by an entry in this list.
Networks() []netip.Prefix
// UnsafeNetworks is a list of networks that this host can act as an unsafe router for.
// If IsCA is true then certificates signed by this CA can only have networks that are
// contained by an entry in this list.
UnsafeNetworks() []netip.Prefix
// Groups is a list of identities that can be used to write more general firewall rule
// definitions.
// If IsCA is true then certificates signed by this CA can only use groups that are
// in this list.
Groups() []string
// IsCA signifies if this is a certificate authority (true) or a host certificate (false).
// It is invalid to use a CA certificate as a host certificate.
IsCA() bool
// NotBefore is the time at which this certificate becomes valid.
// If IsCA is true then certificates signed by this CA can not have a time before this.
NotBefore() time.Time
// NotAfter is the time at which this certificate becomes invalid.
// If IsCA is true then certificate signed by this CA can not have a time after this.
NotAfter() time.Time
// Issuer is the fingerprint of the CA that signed this certificate.
// If IsCA is true then this will be empty.
Issuer() string
// PublicKey is the raw bytes to be used in asymmetric cryptographic operations.
PublicKey() []byte
// Curve identifies which curve was used for the PublicKey and Signature.
Curve() Curve
// Signature is the cryptographic seal for all the details of this certificate.
// CheckSignature can be used to verify that the details of this certificate are valid.
Signature() []byte
// CheckSignature will check that the certificate Signature() matches the
// computed signature. A true result means this certificate has not been tampered with.
CheckSignature(signingPublicKey []byte) bool
// Fingerprint returns the hex encoded sha256 sum of the certificate.
// This acts as a unique fingerprint and can be used to blocklist certificates.
Fingerprint() (string, error)
// Expired tests if the certificate is valid for the provided time.
Expired(t time.Time) bool
// VerifyPrivateKey returns an error if the private key is not a pair with the certificates public key.
VerifyPrivateKey(curve Curve, privateKey []byte) error
// Marshal will return the byte representation of this certificate
// This is primarily the format transmitted on the wire.
Marshal() ([]byte, error)
// MarshalForHandshakes prepares the bytes needed to use directly in a handshake
MarshalForHandshakes() ([]byte, error)
// MarshalPEM will return a PEM encoded representation of this certificate
// This is primarily the format stored on disk
MarshalPEM() ([]byte, error)
// MarshalJSON will return the json representation of this certificate
MarshalJSON() ([]byte, error)
// String will return a human-readable representation of this certificate
String() string
// Copy creates a copy of the certificate
Copy() Certificate
}
type NebulaCertificateDetails struct {
Name string
Ips []*net.IPNet
Subnets []*net.IPNet
Groups []string
NotBefore time.Time
NotAfter time.Time
PublicKey []byte
IsCA bool
Issuer string
// Map of groups for faster lookup
InvertedGroups map[string]struct{}
// CachedCertificate represents a verified certificate with some cached fields to improve
// performance.
type CachedCertificate struct {
Certificate Certificate
InvertedGroups map[string]struct{}
Fingerprint string
signerFingerprint string
}
type m map[string]interface{}
func (cc *CachedCertificate) String() string {
return cc.Certificate.String()
}
// UnmarshalNebulaCertificate will unmarshal a protobuf byte representation of a nebula cert
func UnmarshalNebulaCertificate(b []byte) (*NebulaCertificate, error) {
if len(b) == 0 {
return nil, fmt.Errorf("nil byte array")
// Recombine will attempt to unmarshal a certificate received in a handshake.
// Handshakes save space by placing the peer's public key in a different part of the packet, so we have to
// reassemble the actual certificate structure with that in mind.
func Recombine(v Version, rawCertBytes, publicKey []byte, curve Curve) (Certificate, error) {
if publicKey == nil {
return nil, ErrNoPeerStaticKey
}
var rc RawNebulaCertificate
err := proto.Unmarshal(b, &rc)
if rawCertBytes == nil {
return nil, ErrNoPayload
}
var c Certificate
var err error
switch v {
// Implementations must ensure the result is a valid cert!
case VersionPre1, Version1:
c, err = unmarshalCertificateV1(rawCertBytes, publicKey)
case Version2:
c, err = unmarshalCertificateV2(rawCertBytes, publicKey, curve)
default:
//TODO: CERT-V2 make a static var
return nil, fmt.Errorf("unknown certificate version %d", v)
}
if err != nil {
return nil, err
}
if len(rc.Details.Ips)%2 != 0 {
return nil, fmt.Errorf("encoded IPs should be in pairs, an odd number was found")
if c.Curve() != curve {
return nil, fmt.Errorf("certificate curve %s does not match expected %s", c.Curve().String(), curve.String())
}
if len(rc.Details.Subnets)%2 != 0 {
return nil, fmt.Errorf("encoded Subnets should be in pairs, an odd number was found")
}
nc := NebulaCertificate{
Details: NebulaCertificateDetails{
Name: rc.Details.Name,
Groups: make([]string, len(rc.Details.Groups)),
Ips: make([]*net.IPNet, len(rc.Details.Ips)/2),
Subnets: make([]*net.IPNet, len(rc.Details.Subnets)/2),
NotBefore: time.Unix(rc.Details.NotBefore, 0),
NotAfter: time.Unix(rc.Details.NotAfter, 0),
PublicKey: make([]byte, len(rc.Details.PublicKey)),
IsCA: rc.Details.IsCA,
InvertedGroups: make(map[string]struct{}),
},
Signature: make([]byte, len(rc.Signature)),
}
copy(nc.Signature, rc.Signature)
copy(nc.Details.Groups, rc.Details.Groups)
nc.Details.Issuer = hex.EncodeToString(rc.Details.Issuer)
if len(rc.Details.PublicKey) < publicKeyLen {
return nil, fmt.Errorf("Public key was fewer than 32 bytes; %v", len(rc.Details.PublicKey))
}
copy(nc.Details.PublicKey, rc.Details.PublicKey)
for i, rawIp := range rc.Details.Ips {
if i%2 == 0 {
nc.Details.Ips[i/2] = &net.IPNet{IP: int2ip(rawIp)}
} else {
nc.Details.Ips[i/2].Mask = net.IPMask(int2ip(rawIp))
}
}
for i, rawIp := range rc.Details.Subnets {
if i%2 == 0 {
nc.Details.Subnets[i/2] = &net.IPNet{IP: int2ip(rawIp)}
} else {
nc.Details.Subnets[i/2].Mask = net.IPMask(int2ip(rawIp))
}
}
for _, g := range rc.Details.Groups {
nc.Details.InvertedGroups[g] = struct{}{}
}
return &nc, nil
}
// UnmarshalNebulaCertificateFromPEM will unmarshal the first pem block in a byte array, returning any non consumed data
// or an error on failure
func UnmarshalNebulaCertificateFromPEM(b []byte) (*NebulaCertificate, []byte, error) {
p, r := pem.Decode(b)
if p == nil {
return nil, r, fmt.Errorf("input did not contain a valid PEM encoded block")
}
nc, err := UnmarshalNebulaCertificate(p.Bytes)
return nc, r, err
}
// MarshalX25519PrivateKey is a simple helper to PEM encode an X25519 private key
func MarshalX25519PrivateKey(b []byte) []byte {
return pem.EncodeToMemory(&pem.Block{Type: X25519PrivateKeyBanner, Bytes: b})
}
// MarshalEd25519PrivateKey is a simple helper to PEM encode an Ed25519 private key
func MarshalEd25519PrivateKey(key ed25519.PrivateKey) []byte {
return pem.EncodeToMemory(&pem.Block{Type: Ed25519PrivateKeyBanner, Bytes: key})
}
// UnmarshalX25519PrivateKey will try to pem decode an X25519 private key, returning any other bytes b
// or an error on failure
func UnmarshalX25519PrivateKey(b []byte) ([]byte, []byte, error) {
k, r := pem.Decode(b)
if k == nil {
return nil, r, fmt.Errorf("input did not contain a valid PEM encoded block")
}
if k.Type != X25519PrivateKeyBanner {
return nil, r, fmt.Errorf("bytes did not contain a proper nebula X25519 private key banner")
}
if len(k.Bytes) != publicKeyLen {
return nil, r, fmt.Errorf("key was not 32 bytes, is invalid X25519 private key")
}
return k.Bytes, r, nil
}
// UnmarshalEd25519PrivateKey will try to pem decode an Ed25519 private key, returning any other bytes b
// or an error on failure
func UnmarshalEd25519PrivateKey(b []byte) (ed25519.PrivateKey, []byte, error) {
k, r := pem.Decode(b)
if k == nil {
return nil, r, fmt.Errorf("input did not contain a valid PEM encoded block")
}
if k.Type != Ed25519PrivateKeyBanner {
return nil, r, fmt.Errorf("bytes did not contain a proper nebula Ed25519 private key banner")
}
if len(k.Bytes) != ed25519.PrivateKeySize {
return nil, r, fmt.Errorf("key was not 64 bytes, is invalid ed25519 private key")
}
return k.Bytes, r, nil
}
// MarshalX25519PublicKey is a simple helper to PEM encode an X25519 public key
func MarshalX25519PublicKey(b []byte) []byte {
return pem.EncodeToMemory(&pem.Block{Type: X25519PublicKeyBanner, Bytes: b})
}
// MarshalEd25519PublicKey is a simple helper to PEM encode an Ed25519 public key
func MarshalEd25519PublicKey(key ed25519.PublicKey) []byte {
return pem.EncodeToMemory(&pem.Block{Type: Ed25519PublicKeyBanner, Bytes: key})
}
// UnmarshalX25519PublicKey will try to pem decode an X25519 public key, returning any other bytes b
// or an error on failure
func UnmarshalX25519PublicKey(b []byte) ([]byte, []byte, error) {
k, r := pem.Decode(b)
if k == nil {
return nil, r, fmt.Errorf("input did not contain a valid PEM encoded block")
}
if k.Type != X25519PublicKeyBanner {
return nil, r, fmt.Errorf("bytes did not contain a proper nebula X25519 public key banner")
}
if len(k.Bytes) != publicKeyLen {
return nil, r, fmt.Errorf("key was not 32 bytes, is invalid X25519 public key")
}
return k.Bytes, r, nil
}
// UnmarshalEd25519PublicKey will try to pem decode an Ed25519 public key, returning any other bytes b
// or an error on failure
func UnmarshalEd25519PublicKey(b []byte) (ed25519.PublicKey, []byte, error) {
k, r := pem.Decode(b)
if k == nil {
return nil, r, fmt.Errorf("input did not contain a valid PEM encoded block")
}
if k.Type != Ed25519PublicKeyBanner {
return nil, r, fmt.Errorf("bytes did not contain a proper nebula Ed25519 public key banner")
}
if len(k.Bytes) != ed25519.PublicKeySize {
return nil, r, fmt.Errorf("key was not 32 bytes, is invalid ed25519 public key")
}
return k.Bytes, r, nil
}
// Sign signs a nebula cert with the provided private key
func (nc *NebulaCertificate) Sign(key ed25519.PrivateKey) error {
b, err := proto.Marshal(nc.getRawDetails())
if err != nil {
return err
}
sig, err := key.Sign(rand.Reader, b, crypto.Hash(0))
if err != nil {
return err
}
nc.Signature = sig
return nil
}
// CheckSignature verifies the signature against the provided public key
func (nc *NebulaCertificate) CheckSignature(key ed25519.PublicKey) bool {
b, err := proto.Marshal(nc.getRawDetails())
if err != nil {
return false
}
return ed25519.Verify(key, b, nc.Signature)
}
// Expired will return true if the nebula cert is too young or too old compared to the provided time, otherwise false
func (nc *NebulaCertificate) Expired(t time.Time) bool {
return nc.Details.NotBefore.After(t) || nc.Details.NotAfter.Before(t)
}
// Verify will ensure a certificate is good in all respects (expiry, group membership, signature, cert blacklist, etc)
func (nc *NebulaCertificate) Verify(t time.Time, ncp *NebulaCAPool) (bool, error) {
if ncp.IsBlacklisted(nc) {
return false, fmt.Errorf("certificate has been blacklisted")
}
signer, err := ncp.GetCAForCert(nc)
if err != nil {
return false, err
}
if signer.Expired(t) {
return false, fmt.Errorf("root certificate is expired")
}
if nc.Expired(t) {
return false, fmt.Errorf("certificate is expired")
}
if !nc.CheckSignature(signer.Details.PublicKey) {
return false, fmt.Errorf("certificate signature did not match")
}
if err := nc.CheckRootConstrains(signer); err != nil {
return false, err
}
return true, nil
}
// CheckRootConstrains returns an error if the certificate violates constraints set on the root (groups, ips, subnets)
func (nc *NebulaCertificate) CheckRootConstrains(signer *NebulaCertificate) error {
// Make sure this cert wasn't valid before the root
if signer.Details.NotAfter.Before(nc.Details.NotAfter) {
return fmt.Errorf("certificate expires after signing certificate")
}
// Make sure this cert isn't valid after the root
if signer.Details.NotBefore.After(nc.Details.NotBefore) {
return fmt.Errorf("certificate is valid before the signing certificate")
}
// If the signer has a limited set of groups make sure the cert only contains a subset
if len(signer.Details.InvertedGroups) > 0 {
for _, g := range nc.Details.Groups {
if _, ok := signer.Details.InvertedGroups[g]; !ok {
return fmt.Errorf("certificate contained a group not present on the signing ca: %s", g)
}
}
}
// If the signer has a limited set of ip ranges to issue from make sure the cert only contains a subset
if len(signer.Details.Ips) > 0 {
for _, ip := range nc.Details.Ips {
if !netMatch(ip, signer.Details.Ips) {
return fmt.Errorf("certificate contained an ip assignment outside the limitations of the signing ca: %s", ip.String())
}
}
}
// If the signer has a limited set of subnet ranges to issue from make sure the cert only contains a subset
if len(signer.Details.Subnets) > 0 {
for _, subnet := range nc.Details.Subnets {
if !netMatch(subnet, signer.Details.Subnets) {
return fmt.Errorf("certificate contained a subnet assignment outside the limitations of the signing ca: %s", subnet)
}
}
}
return nil
}
// VerifyPrivateKey checks that the public key in the Nebula certificate and a supplied private key match
func (nc *NebulaCertificate) VerifyPrivateKey(key []byte) error {
var dst, key32 [32]byte
copy(key32[:], key)
curve25519.ScalarBaseMult(&dst, &key32)
if !bytes.Equal(dst[:], nc.Details.PublicKey) {
return fmt.Errorf("public key in cert and private key supplied don't match")
}
return nil
}
// String will return a pretty printed representation of a nebula cert
func (nc *NebulaCertificate) String() string {
if nc == nil {
return "NebulaCertificate {}\n"
}
s := "NebulaCertificate {\n"
s += "\tDetails {\n"
s += fmt.Sprintf("\t\tName: %v\n", nc.Details.Name)
if len(nc.Details.Ips) > 0 {
s += "\t\tIps: [\n"
for _, ip := range nc.Details.Ips {
s += fmt.Sprintf("\t\t\t%v\n", ip.String())
}
s += "\t\t]\n"
} else {
s += "\t\tIps: []\n"
}
if len(nc.Details.Subnets) > 0 {
s += "\t\tSubnets: [\n"
for _, ip := range nc.Details.Subnets {
s += fmt.Sprintf("\t\t\t%v\n", ip.String())
}
s += "\t\t]\n"
} else {
s += "\t\tSubnets: []\n"
}
if len(nc.Details.Groups) > 0 {
s += "\t\tGroups: [\n"
for _, g := range nc.Details.Groups {
s += fmt.Sprintf("\t\t\t\"%v\"\n", g)
}
s += "\t\t]\n"
} else {
s += "\t\tGroups: []\n"
}
s += fmt.Sprintf("\t\tNot before: %v\n", nc.Details.NotBefore)
s += fmt.Sprintf("\t\tNot After: %v\n", nc.Details.NotAfter)
s += fmt.Sprintf("\t\tIs CA: %v\n", nc.Details.IsCA)
s += fmt.Sprintf("\t\tIssuer: %s\n", nc.Details.Issuer)
s += fmt.Sprintf("\t\tPublic key: %x\n", nc.Details.PublicKey)
s += "\t}\n"
fp, err := nc.Sha256Sum()
if err == nil {
s += fmt.Sprintf("\tFingerprint: %s\n", fp)
}
s += fmt.Sprintf("\tSignature: %x\n", nc.Signature)
s += "}"
return s
}
// getRawDetails marshals the raw details into protobuf ready struct
func (nc *NebulaCertificate) getRawDetails() *RawNebulaCertificateDetails {
rd := &RawNebulaCertificateDetails{
Name: nc.Details.Name,
Groups: nc.Details.Groups,
NotBefore: nc.Details.NotBefore.Unix(),
NotAfter: nc.Details.NotAfter.Unix(),
PublicKey: make([]byte, len(nc.Details.PublicKey)),
IsCA: nc.Details.IsCA,
}
for _, ipNet := range nc.Details.Ips {
rd.Ips = append(rd.Ips, ip2int(ipNet.IP), ip2int(ipNet.Mask))
}
for _, ipNet := range nc.Details.Subnets {
rd.Subnets = append(rd.Subnets, ip2int(ipNet.IP), ip2int(ipNet.Mask))
}
copy(rd.PublicKey, nc.Details.PublicKey[:])
// I know, this is terrible
rd.Issuer, _ = hex.DecodeString(nc.Details.Issuer)
return rd
}
// Marshal will marshal a nebula cert into a protobuf byte array
func (nc *NebulaCertificate) Marshal() ([]byte, error) {
rc := RawNebulaCertificate{
Details: nc.getRawDetails(),
Signature: nc.Signature,
}
return proto.Marshal(&rc)
}
// MarshalToPEM will marshal a nebula cert into a protobuf byte array and pem encode the result
func (nc *NebulaCertificate) MarshalToPEM() ([]byte, error) {
b, err := nc.Marshal()
if err != nil {
return nil, err
}
return pem.EncodeToMemory(&pem.Block{Type: CertBanner, Bytes: b}), nil
}
// Sha256Sum calculates a sha-256 sum of the marshaled certificate
func (nc *NebulaCertificate) Sha256Sum() (string, error) {
b, err := nc.Marshal()
if err != nil {
return "", err
}
sum := sha256.Sum256(b)
return hex.EncodeToString(sum[:]), nil
}
func (nc *NebulaCertificate) MarshalJSON() ([]byte, error) {
toString := func(ips []*net.IPNet) []string {
s := []string{}
for _, ip := range ips {
s = append(s, ip.String())
}
return s
}
fp, _ := nc.Sha256Sum()
jc := m{
"details": m{
"name": nc.Details.Name,
"ips": toString(nc.Details.Ips),
"subnets": toString(nc.Details.Subnets),
"groups": nc.Details.Groups,
"notBefore": nc.Details.NotBefore,
"notAfter": nc.Details.NotAfter,
"publicKey": fmt.Sprintf("%x", nc.Details.PublicKey),
"isCa": nc.Details.IsCA,
"issuer": nc.Details.Issuer,
},
"fingerprint": fp,
"signature": fmt.Sprintf("%x", nc.Signature),
}
return json.Marshal(jc)
}
func netMatch(certIp *net.IPNet, rootIps []*net.IPNet) bool {
for _, net := range rootIps {
if net.Contains(certIp.IP) && maskContains(net.Mask, certIp.Mask) {
return true
}
}
return false
}
func maskContains(caMask, certMask net.IPMask) bool {
caM := maskTo4(caMask)
cM := maskTo4(certMask)
// Make sure forcing to ipv4 didn't nuke us
if caM == nil || cM == nil {
return false
}
// Make sure the cert mask is not greater than the ca mask
for i := 0; i < len(caM); i++ {
if caM[i] > cM[i] {
return false
}
}
return true
}
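A quick worked illustration of the mask rule (hypothetical helper, not in this diff): a /16 CA range admits a more specific /24 certificate mask but rejects a broader /15 one, matching the Verify_IPs tests further down.
func maskExamples() (bool, bool) {
ok := maskContains(net.CIDRMask(16, 32), net.CIDRMask(24, 32))  // true: /24 is within /16
bad := maskContains(net.CIDRMask(16, 32), net.CIDRMask(15, 32)) // false: /15 is broader than /16
return ok, bad
}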
func maskTo4(ip net.IPMask) net.IPMask {
if len(ip) == net.IPv4len {
return ip
}
if len(ip) == net.IPv6len && isZeros(ip[0:10]) && ip[10] == 0xff && ip[11] == 0xff {
return ip[12:16]
}
return nil
}
func isZeros(b []byte) bool {
for i := 0; i < len(b); i++ {
if b[i] != 0 {
return false
}
}
return true
}
func ip2int(ip []byte) uint32 {
if len(ip) == 16 {
return binary.BigEndian.Uint32(ip[12:16])
}
return binary.BigEndian.Uint32(ip)
}
func int2ip(nn uint32) net.IP {
ip := make(net.IP, net.IPv4len)
binary.BigEndian.PutUint32(ip, nn)
return ip
}
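To make the pair encoding used by getRawDetails concrete, a small illustration (hypothetical helper, written as if it lived in this package so it can reuse ip2int): each network is stored as two big-endian uint32s, the address followed by the mask.
func ipsPairFor(ipNet *net.IPNet) (uint32, uint32) {
// e.g. 10.1.1.0/24 from net.ParseCIDR encodes as 0x0a010100, 0xffffff00
return ip2int(ipNet.IP), ip2int(ipNet.Mask)
}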


@@ -1,202 +0,0 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// source: cert.proto
package cert
import (
fmt "fmt"
proto "github.com/golang/protobuf/proto"
math "math"
)
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf
// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.ProtoPackageIsVersion3 // please upgrade the proto package
type RawNebulaCertificate struct {
Details *RawNebulaCertificateDetails `protobuf:"bytes,1,opt,name=Details,json=details,proto3" json:"Details,omitempty"`
Signature []byte `protobuf:"bytes,2,opt,name=Signature,json=signature,proto3" json:"Signature,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *RawNebulaCertificate) Reset() { *m = RawNebulaCertificate{} }
func (m *RawNebulaCertificate) String() string { return proto.CompactTextString(m) }
func (*RawNebulaCertificate) ProtoMessage() {}
func (*RawNebulaCertificate) Descriptor() ([]byte, []int) {
return fileDescriptor_a142e29cbef9b1cf, []int{0}
}
func (m *RawNebulaCertificate) XXX_Unmarshal(b []byte) error {
return xxx_messageInfo_RawNebulaCertificate.Unmarshal(m, b)
}
func (m *RawNebulaCertificate) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
return xxx_messageInfo_RawNebulaCertificate.Marshal(b, m, deterministic)
}
func (m *RawNebulaCertificate) XXX_Merge(src proto.Message) {
xxx_messageInfo_RawNebulaCertificate.Merge(m, src)
}
func (m *RawNebulaCertificate) XXX_Size() int {
return xxx_messageInfo_RawNebulaCertificate.Size(m)
}
func (m *RawNebulaCertificate) XXX_DiscardUnknown() {
xxx_messageInfo_RawNebulaCertificate.DiscardUnknown(m)
}
var xxx_messageInfo_RawNebulaCertificate proto.InternalMessageInfo
func (m *RawNebulaCertificate) GetDetails() *RawNebulaCertificateDetails {
if m != nil {
return m.Details
}
return nil
}
func (m *RawNebulaCertificate) GetSignature() []byte {
if m != nil {
return m.Signature
}
return nil
}
type RawNebulaCertificateDetails struct {
Name string `protobuf:"bytes,1,opt,name=Name,json=name,proto3" json:"Name,omitempty"`
// Ips and Subnets are in big endian 32 bit pairs, 1st the ip, 2nd the mask
Ips []uint32 `protobuf:"varint,2,rep,packed,name=Ips,json=ips,proto3" json:"Ips,omitempty"`
Subnets []uint32 `protobuf:"varint,3,rep,packed,name=Subnets,json=subnets,proto3" json:"Subnets,omitempty"`
Groups []string `protobuf:"bytes,4,rep,name=Groups,json=groups,proto3" json:"Groups,omitempty"`
NotBefore int64 `protobuf:"varint,5,opt,name=NotBefore,json=notBefore,proto3" json:"NotBefore,omitempty"`
NotAfter int64 `protobuf:"varint,6,opt,name=NotAfter,json=notAfter,proto3" json:"NotAfter,omitempty"`
PublicKey []byte `protobuf:"bytes,7,opt,name=PublicKey,json=publicKey,proto3" json:"PublicKey,omitempty"`
IsCA bool `protobuf:"varint,8,opt,name=IsCA,json=isCA,proto3" json:"IsCA,omitempty"`
// sha-256 of the issuer certificate, if this field is blank the cert is self-signed
Issuer []byte `protobuf:"bytes,9,opt,name=Issuer,json=issuer,proto3" json:"Issuer,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *RawNebulaCertificateDetails) Reset() { *m = RawNebulaCertificateDetails{} }
func (m *RawNebulaCertificateDetails) String() string { return proto.CompactTextString(m) }
func (*RawNebulaCertificateDetails) ProtoMessage() {}
func (*RawNebulaCertificateDetails) Descriptor() ([]byte, []int) {
return fileDescriptor_a142e29cbef9b1cf, []int{1}
}
func (m *RawNebulaCertificateDetails) XXX_Unmarshal(b []byte) error {
return xxx_messageInfo_RawNebulaCertificateDetails.Unmarshal(m, b)
}
func (m *RawNebulaCertificateDetails) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
return xxx_messageInfo_RawNebulaCertificateDetails.Marshal(b, m, deterministic)
}
func (m *RawNebulaCertificateDetails) XXX_Merge(src proto.Message) {
xxx_messageInfo_RawNebulaCertificateDetails.Merge(m, src)
}
func (m *RawNebulaCertificateDetails) XXX_Size() int {
return xxx_messageInfo_RawNebulaCertificateDetails.Size(m)
}
func (m *RawNebulaCertificateDetails) XXX_DiscardUnknown() {
xxx_messageInfo_RawNebulaCertificateDetails.DiscardUnknown(m)
}
var xxx_messageInfo_RawNebulaCertificateDetails proto.InternalMessageInfo
func (m *RawNebulaCertificateDetails) GetName() string {
if m != nil {
return m.Name
}
return ""
}
func (m *RawNebulaCertificateDetails) GetIps() []uint32 {
if m != nil {
return m.Ips
}
return nil
}
func (m *RawNebulaCertificateDetails) GetSubnets() []uint32 {
if m != nil {
return m.Subnets
}
return nil
}
func (m *RawNebulaCertificateDetails) GetGroups() []string {
if m != nil {
return m.Groups
}
return nil
}
func (m *RawNebulaCertificateDetails) GetNotBefore() int64 {
if m != nil {
return m.NotBefore
}
return 0
}
func (m *RawNebulaCertificateDetails) GetNotAfter() int64 {
if m != nil {
return m.NotAfter
}
return 0
}
func (m *RawNebulaCertificateDetails) GetPublicKey() []byte {
if m != nil {
return m.PublicKey
}
return nil
}
func (m *RawNebulaCertificateDetails) GetIsCA() bool {
if m != nil {
return m.IsCA
}
return false
}
func (m *RawNebulaCertificateDetails) GetIssuer() []byte {
if m != nil {
return m.Issuer
}
return nil
}
func init() {
proto.RegisterType((*RawNebulaCertificate)(nil), "cert.RawNebulaCertificate")
proto.RegisterType((*RawNebulaCertificateDetails)(nil), "cert.RawNebulaCertificateDetails")
}
func init() { proto.RegisterFile("cert.proto", fileDescriptor_a142e29cbef9b1cf) }
var fileDescriptor_a142e29cbef9b1cf = []byte{
// 279 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x7c, 0x90, 0xcf, 0x4a, 0xf4, 0x30,
0x14, 0xc5, 0xc9, 0xa4, 0x5f, 0xdb, 0xe4, 0x53, 0x90, 0x20, 0x12, 0xd4, 0x45, 0x9c, 0x55, 0x56,
0xb3, 0xd0, 0xa5, 0xab, 0x71, 0x04, 0x29, 0x42, 0x91, 0xcc, 0x13, 0xa4, 0xf5, 0x76, 0x08, 0x74,
0x9a, 0x9a, 0x3f, 0x88, 0x8f, 0xee, 0x4e, 0x9a, 0x4e, 0x77, 0xe2, 0xee, 0x9e, 0x5f, 0xce, 0x49,
0x4e, 0x2e, 0xa5, 0x2d, 0xb8, 0xb0, 0x19, 0x9d, 0x0d, 0x96, 0x65, 0xd3, 0xbc, 0xfe, 0xa0, 0x97,
0x4a, 0x7f, 0xd6, 0xd0, 0xc4, 0x5e, 0xef, 0xc0, 0x05, 0xd3, 0x99, 0x56, 0x07, 0x60, 0x8f, 0xb4,
0x78, 0x86, 0xa0, 0x4d, 0xef, 0x39, 0x12, 0x48, 0xfe, 0xbf, 0xbf, 0xdb, 0xa4, 0xec, 0x6f, 0xe6,
0x93, 0x51, 0x15, 0xef, 0xf3, 0xc0, 0x6e, 0x29, 0xd9, 0x9b, 0xc3, 0xa0, 0x43, 0x74, 0xc0, 0x57,
0x02, 0xc9, 0x33, 0x45, 0xfc, 0x02, 0xd6, 0xdf, 0x88, 0xde, 0xfc, 0x71, 0x0d, 0x63, 0x34, 0xab,
0xf5, 0x11, 0xd2, 0xbb, 0x44, 0x65, 0x83, 0x3e, 0x02, 0xbb, 0xa0, 0xb8, 0x1a, 0x3d, 0x5f, 0x09,
0x2c, 0xcf, 0x15, 0x36, 0xa3, 0x67, 0x9c, 0x16, 0xfb, 0xd8, 0x0c, 0x10, 0x3c, 0xc7, 0x89, 0x16,
0x7e, 0x96, 0xec, 0x8a, 0xe6, 0x2f, 0xce, 0xc6, 0xd1, 0xf3, 0x4c, 0x60, 0x49, 0x54, 0x7e, 0x48,
0x6a, 0x6a, 0x55, 0xdb, 0xf0, 0x04, 0x9d, 0x75, 0xc0, 0xff, 0x09, 0x24, 0xb1, 0x22, 0xc3, 0x02,
0xd8, 0x35, 0x2d, 0x6b, 0x1b, 0xb6, 0x5d, 0x00, 0xc7, 0xf3, 0x74, 0x58, 0x0e, 0x27, 0x3d, 0x25,
0xdf, 0x62, 0xd3, 0x9b, 0xf6, 0x15, 0xbe, 0x78, 0x31, 0xff, 0x67, 0x5c, 0xc0, 0xd4, 0xb7, 0xf2,
0xbb, 0x2d, 0x2f, 0x05, 0x92, 0xa5, 0xca, 0x8c, 0xdf, 0x6d, 0xa7, 0x0e, 0x95, 0xf7, 0x11, 0x1c,
0x27, 0xc9, 0x9e, 0x9b, 0xa4, 0x9a, 0x3c, 0xed, 0xfe, 0xe1, 0x27, 0x00, 0x00, 0xff, 0xff, 0x2c,
0xe3, 0x08, 0x37, 0x89, 0x01, 0x00, 0x00,
}


@@ -1,592 +0,0 @@
package cert
import (
"crypto/rand"
"fmt"
"io"
"net"
"testing"
"time"
"github.com/golang/protobuf/proto"
"github.com/stretchr/testify/assert"
"golang.org/x/crypto/curve25519"
"golang.org/x/crypto/ed25519"
)
func TestMarshalingNebulaCertificate(t *testing.T) {
before := time.Now().Add(time.Second * -60).Round(time.Second)
after := time.Now().Add(time.Second * 60).Round(time.Second)
pubKey := []byte("1234567890abcedfghij1234567890ab")
nc := NebulaCertificate{
Details: NebulaCertificateDetails{
Name: "testing",
Ips: []*net.IPNet{
{IP: net.ParseIP("10.1.1.1"), Mask: net.IPMask(net.ParseIP("255.255.255.0"))},
{IP: net.ParseIP("10.1.1.2"), Mask: net.IPMask(net.ParseIP("255.255.0.0"))},
{IP: net.ParseIP("10.1.1.3"), Mask: net.IPMask(net.ParseIP("255.0.255.0"))},
},
Subnets: []*net.IPNet{
{IP: net.ParseIP("9.1.1.1"), Mask: net.IPMask(net.ParseIP("255.0.255.0"))},
{IP: net.ParseIP("9.1.1.2"), Mask: net.IPMask(net.ParseIP("255.255.255.0"))},
{IP: net.ParseIP("9.1.1.3"), Mask: net.IPMask(net.ParseIP("255.255.0.0"))},
},
Groups: []string{"test-group1", "test-group2", "test-group3"},
NotBefore: before,
NotAfter: after,
PublicKey: pubKey,
IsCA: false,
Issuer: "1234567890abcedfghij1234567890ab",
},
Signature: []byte("1234567890abcedfghij1234567890ab"),
}
b, err := nc.Marshal()
assert.Nil(t, err)
//t.Log("Cert size:", len(b))
nc2, err := UnmarshalNebulaCertificate(b)
assert.Nil(t, err)
assert.Equal(t, nc.Signature, nc2.Signature)
assert.Equal(t, nc.Details.Name, nc2.Details.Name)
assert.Equal(t, nc.Details.NotBefore, nc2.Details.NotBefore)
assert.Equal(t, nc.Details.NotAfter, nc2.Details.NotAfter)
assert.Equal(t, nc.Details.PublicKey, nc2.Details.PublicKey)
assert.Equal(t, nc.Details.IsCA, nc2.Details.IsCA)
// IP byte arrays can be 4 or 16 in length so we have to go this route
assert.Equal(t, len(nc.Details.Ips), len(nc2.Details.Ips))
for i, wIp := range nc.Details.Ips {
assert.Equal(t, wIp.String(), nc2.Details.Ips[i].String())
}
assert.Equal(t, len(nc.Details.Subnets), len(nc2.Details.Subnets))
for i, wIp := range nc.Details.Subnets {
assert.Equal(t, wIp.String(), nc2.Details.Subnets[i].String())
}
assert.EqualValues(t, nc.Details.Groups, nc2.Details.Groups)
}
func TestNebulaCertificate_Sign(t *testing.T) {
before := time.Now().Add(time.Second * -60).Round(time.Second)
after := time.Now().Add(time.Second * 60).Round(time.Second)
pubKey := []byte("1234567890abcedfghij1234567890ab")
nc := NebulaCertificate{
Details: NebulaCertificateDetails{
Name: "testing",
Ips: []*net.IPNet{
{IP: net.ParseIP("10.1.1.1"), Mask: net.IPMask(net.ParseIP("255.255.255.0"))},
{IP: net.ParseIP("10.1.1.2"), Mask: net.IPMask(net.ParseIP("255.255.0.0"))},
{IP: net.ParseIP("10.1.1.3"), Mask: net.IPMask(net.ParseIP("255.0.255.0"))},
},
Subnets: []*net.IPNet{
{IP: net.ParseIP("9.1.1.1"), Mask: net.IPMask(net.ParseIP("255.0.255.0"))},
{IP: net.ParseIP("9.1.1.2"), Mask: net.IPMask(net.ParseIP("255.255.255.0"))},
{IP: net.ParseIP("9.1.1.3"), Mask: net.IPMask(net.ParseIP("255.255.0.0"))},
},
Groups: []string{"test-group1", "test-group2", "test-group3"},
NotBefore: before,
NotAfter: after,
PublicKey: pubKey,
IsCA: false,
Issuer: "1234567890abcedfghij1234567890ab",
},
}
pub, priv, err := ed25519.GenerateKey(rand.Reader)
assert.Nil(t, err)
assert.False(t, nc.CheckSignature(pub))
assert.Nil(t, nc.Sign(priv))
assert.True(t, nc.CheckSignature(pub))
_, err = nc.Marshal()
assert.Nil(t, err)
//t.Log("Cert size:", len(b))
}
func TestNebulaCertificate_Expired(t *testing.T) {
nc := NebulaCertificate{
Details: NebulaCertificateDetails{
NotBefore: time.Now().Add(time.Second * -60).Round(time.Second),
NotAfter: time.Now().Add(time.Second * 60).Round(time.Second),
},
}
assert.True(t, nc.Expired(time.Now().Add(time.Hour)))
assert.True(t, nc.Expired(time.Now().Add(-time.Hour)))
assert.False(t, nc.Expired(time.Now()))
}
func TestNebulaCertificate_MarshalJSON(t *testing.T) {
time.Local = time.UTC
pubKey := []byte("1234567890abcedfghij1234567890ab")
nc := NebulaCertificate{
Details: NebulaCertificateDetails{
Name: "testing",
Ips: []*net.IPNet{
{IP: net.ParseIP("10.1.1.1"), Mask: net.IPMask(net.ParseIP("255.255.255.0"))},
{IP: net.ParseIP("10.1.1.2"), Mask: net.IPMask(net.ParseIP("255.255.0.0"))},
{IP: net.ParseIP("10.1.1.3"), Mask: net.IPMask(net.ParseIP("255.0.255.0"))},
},
Subnets: []*net.IPNet{
{IP: net.ParseIP("9.1.1.1"), Mask: net.IPMask(net.ParseIP("255.0.255.0"))},
{IP: net.ParseIP("9.1.1.2"), Mask: net.IPMask(net.ParseIP("255.255.255.0"))},
{IP: net.ParseIP("9.1.1.3"), Mask: net.IPMask(net.ParseIP("255.255.0.0"))},
},
Groups: []string{"test-group1", "test-group2", "test-group3"},
NotBefore: time.Date(1, 0, 0, 1, 0, 0, 0, time.UTC),
NotAfter: time.Date(1, 0, 0, 2, 0, 0, 0, time.UTC),
PublicKey: pubKey,
IsCA: false,
Issuer: "1234567890abcedfghij1234567890ab",
},
Signature: []byte("1234567890abcedfghij1234567890ab"),
}
b, err := nc.MarshalJSON()
assert.Nil(t, err)
assert.Equal(
t,
"{\"details\":{\"groups\":[\"test-group1\",\"test-group2\",\"test-group3\"],\"ips\":[\"10.1.1.1/24\",\"10.1.1.2/16\",\"10.1.1.3/ff00ff00\"],\"isCa\":false,\"issuer\":\"1234567890abcedfghij1234567890ab\",\"name\":\"testing\",\"notAfter\":\"0000-11-30T02:00:00Z\",\"notBefore\":\"0000-11-30T01:00:00Z\",\"publicKey\":\"313233343536373839306162636564666768696a313233343536373839306162\",\"subnets\":[\"9.1.1.1/ff00ff00\",\"9.1.1.2/24\",\"9.1.1.3/16\"]},\"fingerprint\":\"26cb1c30ad7872c804c166b5150fa372f437aa3856b04edb4334b4470ec728e4\",\"signature\":\"313233343536373839306162636564666768696a313233343536373839306162\"}",
string(b),
)
}
func TestNebulaCertificate_Verify(t *testing.T) {
ca, _, caKey, err := newTestCaCert(time.Now(), time.Now().Add(10*time.Minute), []*net.IPNet{}, []*net.IPNet{}, []string{})
assert.Nil(t, err)
c, _, _, err := newTestCert(ca, caKey, time.Now(), time.Now().Add(5*time.Minute), []*net.IPNet{}, []*net.IPNet{}, []string{})
assert.Nil(t, err)
h, err := ca.Sha256Sum()
assert.Nil(t, err)
caPool := NewCAPool()
caPool.CAs[h] = ca
f, err := c.Sha256Sum()
assert.Nil(t, err)
caPool.BlacklistFingerprint(f)
v, err := c.Verify(time.Now(), caPool)
assert.False(t, v)
assert.EqualError(t, err, "certificate has been blacklisted")
caPool.ResetCertBlacklist()
v, err = c.Verify(time.Now(), caPool)
assert.True(t, v)
assert.Nil(t, err)
v, err = c.Verify(time.Now().Add(time.Hour*1000), caPool)
assert.False(t, v)
assert.EqualError(t, err, "root certificate is expired")
c, _, _, err = newTestCert(ca, caKey, time.Time{}, time.Time{}, []*net.IPNet{}, []*net.IPNet{}, []string{})
assert.Nil(t, err)
v, err = c.Verify(time.Now().Add(time.Minute*6), caPool)
assert.False(t, v)
assert.EqualError(t, err, "certificate is expired")
// Test group assertion
ca, _, caKey, err = newTestCaCert(time.Now(), time.Now().Add(10*time.Minute), []*net.IPNet{}, []*net.IPNet{}, []string{"test1", "test2"})
assert.Nil(t, err)
caPem, err := ca.MarshalToPEM()
assert.Nil(t, err)
caPool = NewCAPool()
caPool.AddCACertificate(caPem)
c, _, _, err = newTestCert(ca, caKey, time.Now(), time.Now().Add(5*time.Minute), []*net.IPNet{}, []*net.IPNet{}, []string{"test1", "bad"})
assert.Nil(t, err)
v, err = c.Verify(time.Now(), caPool)
assert.False(t, v)
assert.EqualError(t, err, "certificate contained a group not present on the signing ca: bad")
c, _, _, err = newTestCert(ca, caKey, time.Now(), time.Now().Add(5*time.Minute), []*net.IPNet{}, []*net.IPNet{}, []string{"test1"})
assert.Nil(t, err)
v, err = c.Verify(time.Now(), caPool)
assert.True(t, v)
assert.Nil(t, err)
}
func TestNebulaCertificate_Verify_IPs(t *testing.T) {
_, caIp1, _ := net.ParseCIDR("10.0.0.0/16")
_, caIp2, _ := net.ParseCIDR("192.168.0.0/24")
ca, _, caKey, err := newTestCaCert(time.Now(), time.Now().Add(10*time.Minute), []*net.IPNet{caIp1, caIp2}, []*net.IPNet{}, []string{"test"})
assert.Nil(t, err)
caPem, err := ca.MarshalToPEM()
assert.Nil(t, err)
caPool := NewCAPool()
caPool.AddCACertificate(caPem)
// ip is outside the network
cIp1 := &net.IPNet{IP: net.ParseIP("10.1.0.0"), Mask: []byte{255, 255, 255, 0}}
cIp2 := &net.IPNet{IP: net.ParseIP("192.168.0.1"), Mask: []byte{255, 255, 0, 0}}
c, _, _, err := newTestCert(ca, caKey, time.Now(), time.Now().Add(5*time.Minute), []*net.IPNet{cIp1, cIp2}, []*net.IPNet{}, []string{"test"})
assert.Nil(t, err)
v, err := c.Verify(time.Now(), caPool)
assert.False(t, v)
assert.EqualError(t, err, "certificate contained an ip assignment outside the limitations of the signing ca: 10.1.0.0/24")
// ip is outside the network reversed order of above
cIp1 = &net.IPNet{IP: net.ParseIP("192.168.0.1"), Mask: []byte{255, 255, 255, 0}}
cIp2 = &net.IPNet{IP: net.ParseIP("10.1.0.0"), Mask: []byte{255, 255, 255, 0}}
c, _, _, err = newTestCert(ca, caKey, time.Now(), time.Now().Add(5*time.Minute), []*net.IPNet{cIp1, cIp2}, []*net.IPNet{}, []string{"test"})
assert.Nil(t, err)
v, err = c.Verify(time.Now(), caPool)
assert.False(t, v)
assert.EqualError(t, err, "certificate contained an ip assignment outside the limitations of the signing ca: 10.1.0.0/24")
// ip is within the network but mask is outside
cIp1 = &net.IPNet{IP: net.ParseIP("10.0.1.0"), Mask: []byte{255, 254, 0, 0}}
cIp2 = &net.IPNet{IP: net.ParseIP("192.168.0.1"), Mask: []byte{255, 255, 255, 0}}
c, _, _, err = newTestCert(ca, caKey, time.Now(), time.Now().Add(5*time.Minute), []*net.IPNet{cIp1, cIp2}, []*net.IPNet{}, []string{"test"})
assert.Nil(t, err)
v, err = c.Verify(time.Now(), caPool)
assert.False(t, v)
assert.EqualError(t, err, "certificate contained an ip assignment outside the limitations of the signing ca: 10.0.1.0/15")
// ip is within the network but mask is outside reversed order of above
cIp1 = &net.IPNet{IP: net.ParseIP("192.168.0.1"), Mask: []byte{255, 255, 255, 0}}
cIp2 = &net.IPNet{IP: net.ParseIP("10.0.1.0"), Mask: []byte{255, 254, 0, 0}}
c, _, _, err = newTestCert(ca, caKey, time.Now(), time.Now().Add(5*time.Minute), []*net.IPNet{cIp1, cIp2}, []*net.IPNet{}, []string{"test"})
assert.Nil(t, err)
v, err = c.Verify(time.Now(), caPool)
assert.False(t, v)
assert.EqualError(t, err, "certificate contained an ip assignment outside the limitations of the signing ca: 10.0.1.0/15")
// ip and mask are within the network
cIp1 = &net.IPNet{IP: net.ParseIP("10.0.1.0"), Mask: []byte{255, 255, 0, 0}}
cIp2 = &net.IPNet{IP: net.ParseIP("192.168.0.1"), Mask: []byte{255, 255, 255, 128}}
c, _, _, err = newTestCert(ca, caKey, time.Now(), time.Now().Add(5*time.Minute), []*net.IPNet{cIp1, cIp2}, []*net.IPNet{}, []string{"test"})
assert.Nil(t, err)
v, err = c.Verify(time.Now(), caPool)
assert.True(t, v)
assert.Nil(t, err)
// Exact matches
c, _, _, err = newTestCert(ca, caKey, time.Now(), time.Now().Add(5*time.Minute), []*net.IPNet{caIp1, caIp2}, []*net.IPNet{}, []string{"test"})
assert.Nil(t, err)
v, err = c.Verify(time.Now(), caPool)
assert.True(t, v)
assert.Nil(t, err)
// Exact matches reversed
c, _, _, err = newTestCert(ca, caKey, time.Now(), time.Now().Add(5*time.Minute), []*net.IPNet{caIp2, caIp1}, []*net.IPNet{}, []string{"test"})
assert.Nil(t, err)
v, err = c.Verify(time.Now(), caPool)
assert.True(t, v)
assert.Nil(t, err)
// Exact matches reversed with just 1
c, _, _, err = newTestCert(ca, caKey, time.Now(), time.Now().Add(5*time.Minute), []*net.IPNet{caIp1}, []*net.IPNet{}, []string{"test"})
assert.Nil(t, err)
v, err = c.Verify(time.Now(), caPool)
assert.True(t, v)
assert.Nil(t, err)
}
func TestNebulaCertificate_Verify_Subnets(t *testing.T) {
_, caIp1, _ := net.ParseCIDR("10.0.0.0/16")
_, caIp2, _ := net.ParseCIDR("192.168.0.0/24")
ca, _, caKey, err := newTestCaCert(time.Now(), time.Now().Add(10*time.Minute), []*net.IPNet{}, []*net.IPNet{caIp1, caIp2}, []string{"test"})
assert.Nil(t, err)
caPem, err := ca.MarshalToPEM()
assert.Nil(t, err)
caPool := NewCAPool()
caPool.AddCACertificate(caPem)
// ip is outside the network
cIp1 := &net.IPNet{IP: net.ParseIP("10.1.0.0"), Mask: []byte{255, 255, 255, 0}}
cIp2 := &net.IPNet{IP: net.ParseIP("192.168.0.1"), Mask: []byte{255, 255, 0, 0}}
c, _, _, err := newTestCert(ca, caKey, time.Now(), time.Now().Add(5*time.Minute), []*net.IPNet{}, []*net.IPNet{cIp1, cIp2}, []string{"test"})
assert.Nil(t, err)
v, err := c.Verify(time.Now(), caPool)
assert.False(t, v)
assert.EqualError(t, err, "certificate contained a subnet assignment outside the limitations of the signing ca: 10.1.0.0/24")
// ip is outside the network reversed order of above
cIp1 = &net.IPNet{IP: net.ParseIP("192.168.0.1"), Mask: []byte{255, 255, 255, 0}}
cIp2 = &net.IPNet{IP: net.ParseIP("10.1.0.0"), Mask: []byte{255, 255, 255, 0}}
c, _, _, err = newTestCert(ca, caKey, time.Now(), time.Now().Add(5*time.Minute), []*net.IPNet{}, []*net.IPNet{cIp1, cIp2}, []string{"test"})
assert.Nil(t, err)
v, err = c.Verify(time.Now(), caPool)
assert.False(t, v)
assert.EqualError(t, err, "certificate contained a subnet assignment outside the limitations of the signing ca: 10.1.0.0/24")
// ip is within the network but mask is outside
cIp1 = &net.IPNet{IP: net.ParseIP("10.0.1.0"), Mask: []byte{255, 254, 0, 0}}
cIp2 = &net.IPNet{IP: net.ParseIP("192.168.0.1"), Mask: []byte{255, 255, 255, 0}}
c, _, _, err = newTestCert(ca, caKey, time.Now(), time.Now().Add(5*time.Minute), []*net.IPNet{}, []*net.IPNet{cIp1, cIp2}, []string{"test"})
assert.Nil(t, err)
v, err = c.Verify(time.Now(), caPool)
assert.False(t, v)
assert.EqualError(t, err, "certificate contained a subnet assignment outside the limitations of the signing ca: 10.0.1.0/15")
// ip is within the network but mask is outside reversed order of above
cIp1 = &net.IPNet{IP: net.ParseIP("192.168.0.1"), Mask: []byte{255, 255, 255, 0}}
cIp2 = &net.IPNet{IP: net.ParseIP("10.0.1.0"), Mask: []byte{255, 254, 0, 0}}
c, _, _, err = newTestCert(ca, caKey, time.Now(), time.Now().Add(5*time.Minute), []*net.IPNet{}, []*net.IPNet{cIp1, cIp2}, []string{"test"})
assert.Nil(t, err)
v, err = c.Verify(time.Now(), caPool)
assert.False(t, v)
assert.EqualError(t, err, "certificate contained a subnet assignment outside the limitations of the signing ca: 10.0.1.0/15")
// ip and mask are within the network
cIp1 = &net.IPNet{IP: net.ParseIP("10.0.1.0"), Mask: []byte{255, 255, 0, 0}}
cIp2 = &net.IPNet{IP: net.ParseIP("192.168.0.1"), Mask: []byte{255, 255, 255, 128}}
c, _, _, err = newTestCert(ca, caKey, time.Now(), time.Now().Add(5*time.Minute), []*net.IPNet{}, []*net.IPNet{cIp1, cIp2}, []string{"test"})
assert.Nil(t, err)
v, err = c.Verify(time.Now(), caPool)
assert.True(t, v)
assert.Nil(t, err)
// Exact matches
c, _, _, err = newTestCert(ca, caKey, time.Now(), time.Now().Add(5*time.Minute), []*net.IPNet{}, []*net.IPNet{caIp1, caIp2}, []string{"test"})
assert.Nil(t, err)
v, err = c.Verify(time.Now(), caPool)
assert.True(t, v)
assert.Nil(t, err)
// Exact matches reversed
c, _, _, err = newTestCert(ca, caKey, time.Now(), time.Now().Add(5*time.Minute), []*net.IPNet{}, []*net.IPNet{caIp2, caIp1}, []string{"test"})
assert.Nil(t, err)
v, err = c.Verify(time.Now(), caPool)
assert.True(t, v)
assert.Nil(t, err)
// Exact matches reversed with just 1
c, _, _, err = newTestCert(ca, caKey, time.Now(), time.Now().Add(5*time.Minute), []*net.IPNet{}, []*net.IPNet{caIp1}, []string{"test"})
assert.Nil(t, err)
v, err = c.Verify(time.Now(), caPool)
assert.True(t, v)
assert.Nil(t, err)
}
func TestNebulaVerifyPrivateKey(t *testing.T) {
ca, _, caKey, err := newTestCaCert(time.Time{}, time.Time{}, []*net.IPNet{}, []*net.IPNet{}, []string{})
assert.Nil(t, err)
c, _, priv, err := newTestCert(ca, caKey, time.Time{}, time.Time{}, []*net.IPNet{}, []*net.IPNet{}, []string{})
err = c.VerifyPrivateKey(priv)
assert.Nil(t, err)
_, priv2 := x25519Keypair()
err = c.VerifyPrivateKey(priv2)
assert.NotNil(t, err)
}
func TestNewCAPoolFromBytes(t *testing.T) {
noNewLines := `
# Current provisional, Remove once everything moves over to the real root.
-----BEGIN NEBULA CERTIFICATE-----
CkAKDm5lYnVsYSByb290IGNhKJfap9AFMJfg1+YGOiCUQGByMuNRhIlQBOyzXWbL
vcKBwDhov900phEfJ5DN3kABEkDCq5R8qBiu8sl54yVfgRcQXEDt3cHr8UTSLszv
bzBEr00kERQxxTzTsH8cpYEgRoipvmExvg8WP8NdAJEYJosB
-----END NEBULA CERTIFICATE-----
# root-ca01
-----BEGIN NEBULA CERTIFICATE-----
CkMKEW5lYnVsYSByb290IGNhIDAxKJL2u9EFMJL86+cGOiDPXMH4oU6HZTk/CqTG
BVG+oJpAoqokUBbI4U0N8CSfpUABEkB/Pm5A2xyH/nc8mg/wvGUWG3pZ7nHzaDMf
8/phAUt+FLzqTECzQKisYswKvE3pl9mbEYKbOdIHrxdIp95mo4sF
-----END NEBULA CERTIFICATE-----
`
withNewLines := `
# Current provisional, Remove once everything moves over to the real root.
-----BEGIN NEBULA CERTIFICATE-----
CkAKDm5lYnVsYSByb290IGNhKJfap9AFMJfg1+YGOiCUQGByMuNRhIlQBOyzXWbL
vcKBwDhov900phEfJ5DN3kABEkDCq5R8qBiu8sl54yVfgRcQXEDt3cHr8UTSLszv
bzBEr00kERQxxTzTsH8cpYEgRoipvmExvg8WP8NdAJEYJosB
-----END NEBULA CERTIFICATE-----
# root-ca01
-----BEGIN NEBULA CERTIFICATE-----
CkMKEW5lYnVsYSByb290IGNhIDAxKJL2u9EFMJL86+cGOiDPXMH4oU6HZTk/CqTG
BVG+oJpAoqokUBbI4U0N8CSfpUABEkB/Pm5A2xyH/nc8mg/wvGUWG3pZ7nHzaDMf
8/phAUt+FLzqTECzQKisYswKvE3pl9mbEYKbOdIHrxdIp95mo4sF
-----END NEBULA CERTIFICATE-----
`
rootCA := NebulaCertificate{
Details: NebulaCertificateDetails{
Name: "nebula root ca",
},
}
rootCA01 := NebulaCertificate{
Details: NebulaCertificateDetails{
Name: "nebula root ca 01",
},
}
p, err := NewCAPoolFromBytes([]byte(noNewLines))
assert.Nil(t, err)
assert.Equal(t, p.CAs[string("c9bfaf7ce8e84b2eeda2e27b469f4b9617bde192efd214b68891ecda6ed49522")].Details.Name, rootCA.Details.Name)
assert.Equal(t, p.CAs[string("5c9c3f23e7ee7fe97637cbd3a0a5b854154d1d9aaaf7b566a51f4a88f76b64cd")].Details.Name, rootCA01.Details.Name)
pp, err := NewCAPoolFromBytes([]byte(withNewLines))
assert.Nil(t, err)
assert.Equal(t, pp.CAs[string("c9bfaf7ce8e84b2eeda2e27b469f4b9617bde192efd214b68891ecda6ed49522")].Details.Name, rootCA.Details.Name)
assert.Equal(t, pp.CAs[string("5c9c3f23e7ee7fe97637cbd3a0a5b854154d1d9aaaf7b566a51f4a88f76b64cd")].Details.Name, rootCA01.Details.Name)
}
// Ensure that upgrading the protobuf library does not change how certificates
// are marshalled, since this would break signature verification
func TestMarshalingNebulaCertificateConsistency(t *testing.T) {
before := time.Date(2009, time.November, 10, 23, 0, 0, 0, time.UTC)
after := time.Date(2017, time.January, 18, 28, 40, 0, 0, time.UTC)
pubKey := []byte("1234567890abcedfghij1234567890ab")
nc := NebulaCertificate{
Details: NebulaCertificateDetails{
Name: "testing",
Ips: []*net.IPNet{
{IP: net.ParseIP("10.1.1.1"), Mask: net.IPMask(net.ParseIP("255.255.255.0"))},
{IP: net.ParseIP("10.1.1.2"), Mask: net.IPMask(net.ParseIP("255.255.0.0"))},
{IP: net.ParseIP("10.1.1.3"), Mask: net.IPMask(net.ParseIP("255.0.255.0"))},
},
Subnets: []*net.IPNet{
{IP: net.ParseIP("9.1.1.1"), Mask: net.IPMask(net.ParseIP("255.0.255.0"))},
{IP: net.ParseIP("9.1.1.2"), Mask: net.IPMask(net.ParseIP("255.255.255.0"))},
{IP: net.ParseIP("9.1.1.3"), Mask: net.IPMask(net.ParseIP("255.255.0.0"))},
},
Groups: []string{"test-group1", "test-group2", "test-group3"},
NotBefore: before,
NotAfter: after,
PublicKey: pubKey,
IsCA: false,
Issuer: "1234567890abcedfghij1234567890ab",
},
Signature: []byte("1234567890abcedfghij1234567890ab"),
}
b, err := nc.Marshal()
assert.Nil(t, err)
//t.Log("Cert size:", len(b))
assert.Equal(t, "0aa2010a0774657374696e67121b8182845080feffff0f828284508080fcff0f8382845080fe83f80f1a1b8182844880fe83f80f8282844880feffff0f838284488080fcff0f220b746573742d67726f757031220b746573742d67726f757032220b746573742d67726f75703328f0e0e7d70430a08681c4053a20313233343536373839306162636564666768696a3132333435363738393061624a081234567890abcedf1220313233343536373839306162636564666768696a313233343536373839306162", fmt.Sprintf("%x", b))
b, err = proto.Marshal(nc.getRawDetails())
assert.Nil(t, err)
//t.Log("Raw cert size:", len(b))
assert.Equal(t, "0a0774657374696e67121b8182845080feffff0f828284508080fcff0f8382845080fe83f80f1a1b8182844880fe83f80f8282844880feffff0f838284488080fcff0f220b746573742d67726f757031220b746573742d67726f757032220b746573742d67726f75703328f0e0e7d70430a08681c4053a20313233343536373839306162636564666768696a3132333435363738393061624a081234567890abcedf", fmt.Sprintf("%x", b))
}
func newTestCaCert(before, after time.Time, ips, subnets []*net.IPNet, groups []string) (*NebulaCertificate, []byte, []byte, error) {
pub, priv, err := ed25519.GenerateKey(rand.Reader)
if before.IsZero() {
before = time.Now().Add(time.Second * -60).Round(time.Second)
}
if after.IsZero() {
after = time.Now().Add(time.Second * 60).Round(time.Second)
}
nc := &NebulaCertificate{
Details: NebulaCertificateDetails{
Name: "test ca",
NotBefore: before,
NotAfter: after,
PublicKey: pub,
IsCA: true,
},
}
if len(ips) > 0 {
nc.Details.Ips = ips
}
if len(subnets) > 0 {
nc.Details.Subnets = subnets
}
if len(groups) > 0 {
nc.Details.Groups = groups
}
err = nc.Sign(priv)
if err != nil {
return nil, nil, nil, err
}
return nc, pub, priv, nil
}
func newTestCert(ca *NebulaCertificate, key []byte, before, after time.Time, ips, subnets []*net.IPNet, groups []string) (*NebulaCertificate, []byte, []byte, error) {
issuer, err := ca.Sha256Sum()
if err != nil {
return nil, nil, nil, err
}
if before.IsZero() {
before = time.Now().Add(time.Second * -60).Round(time.Second)
}
if after.IsZero() {
after = time.Now().Add(time.Second * 60).Round(time.Second)
}
if len(groups) == 0 {
groups = []string{"test-group1", "test-group2", "test-group3"}
}
if len(ips) == 0 {
ips = []*net.IPNet{
{IP: net.ParseIP("10.1.1.1"), Mask: net.IPMask(net.ParseIP("255.255.255.0"))},
{IP: net.ParseIP("10.1.1.2"), Mask: net.IPMask(net.ParseIP("255.255.0.0"))},
{IP: net.ParseIP("10.1.1.3"), Mask: net.IPMask(net.ParseIP("255.0.255.0"))},
}
}
if len(subnets) == 0 {
subnets = []*net.IPNet{
{IP: net.ParseIP("9.1.1.1"), Mask: net.IPMask(net.ParseIP("255.0.255.0"))},
{IP: net.ParseIP("9.1.1.2"), Mask: net.IPMask(net.ParseIP("255.255.255.0"))},
{IP: net.ParseIP("9.1.1.3"), Mask: net.IPMask(net.ParseIP("255.255.0.0"))},
}
}
pub, rawPriv := x25519Keypair()
nc := &NebulaCertificate{
Details: NebulaCertificateDetails{
Name: "testing",
Ips: ips,
Subnets: subnets,
Groups: groups,
NotBefore: before,
NotAfter: after,
PublicKey: pub,
IsCA: false,
Issuer: issuer,
},
}
err = nc.Sign(key)
if err != nil {
return nil, nil, nil, err
}
return nc, pub, rawPriv, nil
}
func x25519Keypair() ([]byte, []byte) {
var pubkey, privkey [32]byte
if _, err := io.ReadFull(rand.Reader, privkey[:]); err != nil {
panic(err)
}
curve25519.ScalarBaseMult(&pubkey, &privkey)
return pubkey[:], privkey[:]
}

489
cert/cert_v1.go Normal file

@@ -0,0 +1,489 @@
package cert
import (
"bytes"
"crypto/ecdh"
"crypto/ecdsa"
"crypto/ed25519"
"crypto/elliptic"
"crypto/sha256"
"encoding/binary"
"encoding/hex"
"encoding/json"
"encoding/pem"
"fmt"
"net"
"net/netip"
"time"
"golang.org/x/crypto/curve25519"
"google.golang.org/protobuf/proto"
)
const publicKeyLen = 32
type certificateV1 struct {
details detailsV1
signature []byte
}
type detailsV1 struct {
name string
networks []netip.Prefix
unsafeNetworks []netip.Prefix
groups []string
notBefore time.Time
notAfter time.Time
publicKey []byte
isCA bool
issuer string
curve Curve
}
type m = map[string]any
func (c *certificateV1) Version() Version {
return Version1
}
func (c *certificateV1) Curve() Curve {
return c.details.curve
}
func (c *certificateV1) Groups() []string {
return c.details.groups
}
func (c *certificateV1) IsCA() bool {
return c.details.isCA
}
func (c *certificateV1) Issuer() string {
return c.details.issuer
}
func (c *certificateV1) Name() string {
return c.details.name
}
func (c *certificateV1) Networks() []netip.Prefix {
return c.details.networks
}
func (c *certificateV1) NotAfter() time.Time {
return c.details.notAfter
}
func (c *certificateV1) NotBefore() time.Time {
return c.details.notBefore
}
func (c *certificateV1) PublicKey() []byte {
return c.details.publicKey
}
func (c *certificateV1) Signature() []byte {
return c.signature
}
func (c *certificateV1) UnsafeNetworks() []netip.Prefix {
return c.details.unsafeNetworks
}
func (c *certificateV1) Fingerprint() (string, error) {
b, err := c.Marshal()
if err != nil {
return "", err
}
sum := sha256.Sum256(b)
return hex.EncodeToString(sum[:]), nil
}
func (c *certificateV1) CheckSignature(key []byte) bool {
b, err := proto.Marshal(c.getRawDetails())
if err != nil {
return false
}
switch c.details.curve {
case Curve_CURVE25519:
return ed25519.Verify(key, b, c.signature)
case Curve_P256:
x, y := elliptic.Unmarshal(elliptic.P256(), key)
pubKey := &ecdsa.PublicKey{Curve: elliptic.P256(), X: x, Y: y}
hashed := sha256.Sum256(b)
return ecdsa.VerifyASN1(pubKey, hashed[:], c.signature)
default:
return false
}
}
func (c *certificateV1) Expired(t time.Time) bool {
return c.details.notBefore.After(t) || c.details.notAfter.Before(t)
}
func (c *certificateV1) VerifyPrivateKey(curve Curve, key []byte) error {
if curve != c.details.curve {
return fmt.Errorf("curve in cert and private key supplied don't match")
}
if c.details.isCA {
switch curve {
case Curve_CURVE25519:
// the call to PublicKey below will panic with a slice-bounds-out-of-range error otherwise
if len(key) != ed25519.PrivateKeySize {
return fmt.Errorf("key was not 64 bytes, is invalid ed25519 private key")
}
if !ed25519.PublicKey(c.details.publicKey).Equal(ed25519.PrivateKey(key).Public()) {
return fmt.Errorf("public key in cert and private key supplied don't match")
}
case Curve_P256:
privkey, err := ecdh.P256().NewPrivateKey(key)
if err != nil {
return fmt.Errorf("cannot parse private key as P256: %w", err)
}
pub := privkey.PublicKey().Bytes()
if !bytes.Equal(pub, c.details.publicKey) {
return fmt.Errorf("public key in cert and private key supplied don't match")
}
default:
return fmt.Errorf("invalid curve: %s", curve)
}
return nil
}
var pub []byte
switch curve {
case Curve_CURVE25519:
var err error
pub, err = curve25519.X25519(key, curve25519.Basepoint)
if err != nil {
return err
}
case Curve_P256:
privkey, err := ecdh.P256().NewPrivateKey(key)
if err != nil {
return err
}
pub = privkey.PublicKey().Bytes()
default:
return fmt.Errorf("invalid curve: %s", curve)
}
if !bytes.Equal(pub, c.details.publicKey) {
return fmt.Errorf("public key in cert and private key supplied don't match")
}
return nil
}
// getRawDetails marshals the raw details into protobuf ready struct
func (c *certificateV1) getRawDetails() *RawNebulaCertificateDetails {
rd := &RawNebulaCertificateDetails{
Name: c.details.name,
Groups: c.details.groups,
NotBefore: c.details.notBefore.Unix(),
NotAfter: c.details.notAfter.Unix(),
PublicKey: make([]byte, len(c.details.publicKey)),
IsCA: c.details.isCA,
Curve: c.details.curve,
}
for _, ipNet := range c.details.networks {
mask := net.CIDRMask(ipNet.Bits(), ipNet.Addr().BitLen())
rd.Ips = append(rd.Ips, addr2int(ipNet.Addr()), ip2int(mask))
}
for _, ipNet := range c.details.unsafeNetworks {
mask := net.CIDRMask(ipNet.Bits(), ipNet.Addr().BitLen())
rd.Subnets = append(rd.Subnets, addr2int(ipNet.Addr()), ip2int(mask))
}
copy(rd.PublicKey, c.details.publicKey[:])
// I know, this is terrible
rd.Issuer, _ = hex.DecodeString(c.details.issuer)
return rd
}
func (c *certificateV1) String() string {
b, err := json.MarshalIndent(c.marshalJSON(), "", "\t")
if err != nil {
return fmt.Sprintf("<error marshalling certificate: %v>", err)
}
return string(b)
}
func (c *certificateV1) MarshalForHandshakes() ([]byte, error) {
pubKey := c.details.publicKey
c.details.publicKey = nil
rawCertNoKey, err := c.Marshal()
if err != nil {
return nil, err
}
c.details.publicKey = pubKey
return rawCertNoKey, nil
}
func (c *certificateV1) Marshal() ([]byte, error) {
rc := RawNebulaCertificate{
Details: c.getRawDetails(),
Signature: c.signature,
}
return proto.Marshal(&rc)
}
func (c *certificateV1) MarshalPEM() ([]byte, error) {
b, err := c.Marshal()
if err != nil {
return nil, err
}
return pem.EncodeToMemory(&pem.Block{Type: CertificateBanner, Bytes: b}), nil
}
func (c *certificateV1) MarshalJSON() ([]byte, error) {
return json.Marshal(c.marshalJSON())
}
func (c *certificateV1) marshalJSON() m {
fp, _ := c.Fingerprint()
return m{
"version": Version1,
"details": m{
"name": c.details.name,
"networks": c.details.networks,
"unsafeNetworks": c.details.unsafeNetworks,
"groups": c.details.groups,
"notBefore": c.details.notBefore,
"notAfter": c.details.notAfter,
"publicKey": fmt.Sprintf("%x", c.details.publicKey),
"isCa": c.details.isCA,
"issuer": c.details.issuer,
"curve": c.details.curve.String(),
},
"fingerprint": fp,
"signature": fmt.Sprintf("%x", c.Signature()),
}
}
func (c *certificateV1) Copy() Certificate {
nc := &certificateV1{
details: detailsV1{
name: c.details.name,
notBefore: c.details.notBefore,
notAfter: c.details.notAfter,
publicKey: make([]byte, len(c.details.publicKey)),
isCA: c.details.isCA,
issuer: c.details.issuer,
curve: c.details.curve,
},
signature: make([]byte, len(c.signature)),
}
if c.details.groups != nil {
nc.details.groups = make([]string, len(c.details.groups))
copy(nc.details.groups, c.details.groups)
}
if c.details.networks != nil {
nc.details.networks = make([]netip.Prefix, len(c.details.networks))
copy(nc.details.networks, c.details.networks)
}
if c.details.unsafeNetworks != nil {
nc.details.unsafeNetworks = make([]netip.Prefix, len(c.details.unsafeNetworks))
copy(nc.details.unsafeNetworks, c.details.unsafeNetworks)
}
copy(nc.signature, c.signature)
copy(nc.details.publicKey, c.details.publicKey)
return nc
}
func (c *certificateV1) fromTBSCertificate(t *TBSCertificate) error {
c.details = detailsV1{
name: t.Name,
networks: t.Networks,
unsafeNetworks: t.UnsafeNetworks,
groups: t.Groups,
notBefore: t.NotBefore,
notAfter: t.NotAfter,
publicKey: t.PublicKey,
isCA: t.IsCA,
curve: t.Curve,
issuer: t.issuer,
}
return c.validate()
}
func (c *certificateV1) validate() error {
// Empty names are allowed
if len(c.details.publicKey) == 0 {
return ErrInvalidPublicKey
}
// Original v1 rules allowed multiple networks to be present but ignored all but the first one.
// Continue to allow this behavior
if !c.details.isCA && len(c.details.networks) == 0 {
return NewErrInvalidCertificateProperties("non-CA certificates must contain exactly one network")
}
for _, network := range c.details.networks {
if !network.IsValid() || !network.Addr().IsValid() {
return NewErrInvalidCertificateProperties("invalid network: %s", network)
}
if network.Addr().Is6() {
return NewErrInvalidCertificateProperties("certificate may not contain IPv6 networks: %v", network)
}
if network.Addr().IsUnspecified() {
return NewErrInvalidCertificateProperties("non-CA certificates must not use the zero address as a network: %s", network)
}
if network.Addr().Zone() != "" {
return NewErrInvalidCertificateProperties("networks may not contain zones: %s", network)
}
}
for _, network := range c.details.unsafeNetworks {
if !network.IsValid() || !network.Addr().IsValid() {
return NewErrInvalidCertificateProperties("invalid unsafe network: %s", network)
}
if network.Addr().Is6() {
return NewErrInvalidCertificateProperties("certificate may not contain IPv6 unsafe networks: %v", network)
}
if network.Addr().Zone() != "" {
return NewErrInvalidCertificateProperties("unsafe networks may not contain zones: %s", network)
}
}
// v1 doesn't bother with sort order or uniqueness of networks or unsafe networks.
// We can't modify the unmarshalled data because verification requires re-marshalling and a re-ordered
// unsafe networks would result in a different signature.
return nil
}
func (c *certificateV1) marshalForSigning() ([]byte, error) {
b, err := proto.Marshal(c.getRawDetails())
if err != nil {
return nil, err
}
return b, nil
}
func (c *certificateV1) setSignature(b []byte) error {
if len(b) == 0 {
return ErrEmptySignature
}
c.signature = b
return nil
}
// unmarshalCertificateV1 will unmarshal a protobuf byte representation of a nebula cert
// if the publicKey is provided here then it is not required to be present in `b`
func unmarshalCertificateV1(b []byte, publicKey []byte) (*certificateV1, error) {
if len(b) == 0 {
return nil, fmt.Errorf("nil byte array")
}
var rc RawNebulaCertificate
err := proto.Unmarshal(b, &rc)
if err != nil {
return nil, err
}
if rc.Details == nil {
return nil, fmt.Errorf("encoded Details was nil")
}
if len(rc.Details.Ips)%2 != 0 {
return nil, fmt.Errorf("encoded IPs should be in pairs, an odd number was found")
}
if len(rc.Details.Subnets)%2 != 0 {
return nil, fmt.Errorf("encoded Subnets should be in pairs, an odd number was found")
}
nc := certificateV1{
details: detailsV1{
name: rc.Details.Name,
groups: make([]string, len(rc.Details.Groups)),
networks: make([]netip.Prefix, len(rc.Details.Ips)/2),
unsafeNetworks: make([]netip.Prefix, len(rc.Details.Subnets)/2),
notBefore: time.Unix(rc.Details.NotBefore, 0),
notAfter: time.Unix(rc.Details.NotAfter, 0),
publicKey: make([]byte, len(rc.Details.PublicKey)),
isCA: rc.Details.IsCA,
curve: rc.Details.Curve,
},
signature: make([]byte, len(rc.Signature)),
}
copy(nc.signature, rc.Signature)
copy(nc.details.groups, rc.Details.Groups)
nc.details.issuer = hex.EncodeToString(rc.Details.Issuer)
if len(publicKey) > 0 {
nc.details.publicKey = publicKey
}
copy(nc.details.publicKey, rc.Details.PublicKey)
var ip netip.Addr
for i, rawIp := range rc.Details.Ips {
if i%2 == 0 {
ip = int2addr(rawIp)
} else {
ones, _ := net.IPMask(int2ip(rawIp)).Size()
nc.details.networks[i/2] = netip.PrefixFrom(ip, ones)
}
}
for i, rawIp := range rc.Details.Subnets {
if i%2 == 0 {
ip = int2addr(rawIp)
} else {
ones, _ := net.IPMask(int2ip(rawIp)).Size()
nc.details.unsafeNetworks[i/2] = netip.PrefixFrom(ip, ones)
}
}
err = nc.validate()
if err != nil {
return nil, err
}
return &nc, nil
}
func ip2int(ip []byte) uint32 {
if len(ip) == 16 {
return binary.BigEndian.Uint32(ip[12:16])
}
return binary.BigEndian.Uint32(ip)
}
func int2ip(nn uint32) net.IP {
ip := make(net.IP, net.IPv4len)
binary.BigEndian.PutUint32(ip, nn)
return ip
}
func addr2int(addr netip.Addr) uint32 {
b := addr.Unmap().As4()
return binary.BigEndian.Uint32(b[:])
}
func int2addr(nn uint32) netip.Addr {
ip := [4]byte{}
binary.BigEndian.PutUint32(ip[:], nn)
return netip.AddrFrom4(ip).Unmap()
}
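And the reverse direction, mirroring the unmarshal loops earlier in this file: a hypothetical helper (not in this diff) that turns one stored [ip, mask] pair back into a netip.Prefix.
func pairToPrefix(rawIP, rawMask uint32) netip.Prefix {
// e.g. 0x0a010100, 0xffffff00 decodes back to 10.1.1.0/24
ones, _ := net.IPMask(int2ip(rawMask)).Size()
return netip.PrefixFrom(int2addr(rawIP), ones)
}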

620
cert/cert_v1.pb.go Normal file

@@ -0,0 +1,620 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.34.2
// protoc v3.21.5
// source: cert_v1.proto
package cert
import (
protoreflect "google.golang.org/protobuf/reflect/protoreflect"
protoimpl "google.golang.org/protobuf/runtime/protoimpl"
reflect "reflect"
sync "sync"
)
const (
// Verify that this generated code is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
// Verify that runtime/protoimpl is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
)
type Curve int32
const (
Curve_CURVE25519 Curve = 0
Curve_P256 Curve = 1
)
// Enum value maps for Curve.
var (
Curve_name = map[int32]string{
0: "CURVE25519",
1: "P256",
}
Curve_value = map[string]int32{
"CURVE25519": 0,
"P256": 1,
}
)
func (x Curve) Enum() *Curve {
p := new(Curve)
*p = x
return p
}
func (x Curve) String() string {
return protoimpl.X.EnumStringOf(x.Descriptor(), protoreflect.EnumNumber(x))
}
func (Curve) Descriptor() protoreflect.EnumDescriptor {
return file_cert_v1_proto_enumTypes[0].Descriptor()
}
func (Curve) Type() protoreflect.EnumType {
return &file_cert_v1_proto_enumTypes[0]
}
func (x Curve) Number() protoreflect.EnumNumber {
return protoreflect.EnumNumber(x)
}
// Deprecated: Use Curve.Descriptor instead.
func (Curve) EnumDescriptor() ([]byte, []int) {
return file_cert_v1_proto_rawDescGZIP(), []int{0}
}
type RawNebulaCertificate struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Details *RawNebulaCertificateDetails `protobuf:"bytes,1,opt,name=Details,proto3" json:"Details,omitempty"`
Signature []byte `protobuf:"bytes,2,opt,name=Signature,proto3" json:"Signature,omitempty"`
}
func (x *RawNebulaCertificate) Reset() {
*x = RawNebulaCertificate{}
if protoimpl.UnsafeEnabled {
mi := &file_cert_v1_proto_msgTypes[0]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *RawNebulaCertificate) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*RawNebulaCertificate) ProtoMessage() {}
func (x *RawNebulaCertificate) ProtoReflect() protoreflect.Message {
mi := &file_cert_v1_proto_msgTypes[0]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use RawNebulaCertificate.ProtoReflect.Descriptor instead.
func (*RawNebulaCertificate) Descriptor() ([]byte, []int) {
return file_cert_v1_proto_rawDescGZIP(), []int{0}
}
func (x *RawNebulaCertificate) GetDetails() *RawNebulaCertificateDetails {
if x != nil {
return x.Details
}
return nil
}
func (x *RawNebulaCertificate) GetSignature() []byte {
if x != nil {
return x.Signature
}
return nil
}
type RawNebulaCertificateDetails struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Name string `protobuf:"bytes,1,opt,name=Name,proto3" json:"Name,omitempty"`
// Ips and Subnets are in big endian 32 bit pairs, 1st the ip, 2nd the mask
Ips []uint32 `protobuf:"varint,2,rep,packed,name=Ips,proto3" json:"Ips,omitempty"`
Subnets []uint32 `protobuf:"varint,3,rep,packed,name=Subnets,proto3" json:"Subnets,omitempty"`
Groups []string `protobuf:"bytes,4,rep,name=Groups,proto3" json:"Groups,omitempty"`
NotBefore int64 `protobuf:"varint,5,opt,name=NotBefore,proto3" json:"NotBefore,omitempty"`
NotAfter int64 `protobuf:"varint,6,opt,name=NotAfter,proto3" json:"NotAfter,omitempty"`
PublicKey []byte `protobuf:"bytes,7,opt,name=PublicKey,proto3" json:"PublicKey,omitempty"`
IsCA bool `protobuf:"varint,8,opt,name=IsCA,proto3" json:"IsCA,omitempty"`
// sha-256 of the issuer certificate, if this field is blank the cert is self-signed
Issuer []byte `protobuf:"bytes,9,opt,name=Issuer,proto3" json:"Issuer,omitempty"`
Curve Curve `protobuf:"varint,100,opt,name=curve,proto3,enum=cert.Curve" json:"curve,omitempty"`
}
func (x *RawNebulaCertificateDetails) Reset() {
*x = RawNebulaCertificateDetails{}
if protoimpl.UnsafeEnabled {
mi := &file_cert_v1_proto_msgTypes[1]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *RawNebulaCertificateDetails) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*RawNebulaCertificateDetails) ProtoMessage() {}
func (x *RawNebulaCertificateDetails) ProtoReflect() protoreflect.Message {
mi := &file_cert_v1_proto_msgTypes[1]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use RawNebulaCertificateDetails.ProtoReflect.Descriptor instead.
func (*RawNebulaCertificateDetails) Descriptor() ([]byte, []int) {
return file_cert_v1_proto_rawDescGZIP(), []int{1}
}
func (x *RawNebulaCertificateDetails) GetName() string {
if x != nil {
return x.Name
}
return ""
}
func (x *RawNebulaCertificateDetails) GetIps() []uint32 {
if x != nil {
return x.Ips
}
return nil
}
func (x *RawNebulaCertificateDetails) GetSubnets() []uint32 {
if x != nil {
return x.Subnets
}
return nil
}
func (x *RawNebulaCertificateDetails) GetGroups() []string {
if x != nil {
return x.Groups
}
return nil
}
func (x *RawNebulaCertificateDetails) GetNotBefore() int64 {
if x != nil {
return x.NotBefore
}
return 0
}
func (x *RawNebulaCertificateDetails) GetNotAfter() int64 {
if x != nil {
return x.NotAfter
}
return 0
}
func (x *RawNebulaCertificateDetails) GetPublicKey() []byte {
if x != nil {
return x.PublicKey
}
return nil
}
func (x *RawNebulaCertificateDetails) GetIsCA() bool {
if x != nil {
return x.IsCA
}
return false
}
func (x *RawNebulaCertificateDetails) GetIssuer() []byte {
if x != nil {
return x.Issuer
}
return nil
}
func (x *RawNebulaCertificateDetails) GetCurve() Curve {
if x != nil {
return x.Curve
}
return Curve_CURVE25519
}
type RawNebulaEncryptedData struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
EncryptionMetadata *RawNebulaEncryptionMetadata `protobuf:"bytes,1,opt,name=EncryptionMetadata,proto3" json:"EncryptionMetadata,omitempty"`
Ciphertext []byte `protobuf:"bytes,2,opt,name=Ciphertext,proto3" json:"Ciphertext,omitempty"`
}
func (x *RawNebulaEncryptedData) Reset() {
*x = RawNebulaEncryptedData{}
if protoimpl.UnsafeEnabled {
mi := &file_cert_v1_proto_msgTypes[2]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *RawNebulaEncryptedData) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*RawNebulaEncryptedData) ProtoMessage() {}
func (x *RawNebulaEncryptedData) ProtoReflect() protoreflect.Message {
mi := &file_cert_v1_proto_msgTypes[2]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use RawNebulaEncryptedData.ProtoReflect.Descriptor instead.
func (*RawNebulaEncryptedData) Descriptor() ([]byte, []int) {
return file_cert_v1_proto_rawDescGZIP(), []int{2}
}
func (x *RawNebulaEncryptedData) GetEncryptionMetadata() *RawNebulaEncryptionMetadata {
if x != nil {
return x.EncryptionMetadata
}
return nil
}
func (x *RawNebulaEncryptedData) GetCiphertext() []byte {
if x != nil {
return x.Ciphertext
}
return nil
}
type RawNebulaEncryptionMetadata struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
EncryptionAlgorithm string `protobuf:"bytes,1,opt,name=EncryptionAlgorithm,proto3" json:"EncryptionAlgorithm,omitempty"`
Argon2Parameters *RawNebulaArgon2Parameters `protobuf:"bytes,2,opt,name=Argon2Parameters,proto3" json:"Argon2Parameters,omitempty"`
}
func (x *RawNebulaEncryptionMetadata) Reset() {
*x = RawNebulaEncryptionMetadata{}
if protoimpl.UnsafeEnabled {
mi := &file_cert_v1_proto_msgTypes[3]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *RawNebulaEncryptionMetadata) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*RawNebulaEncryptionMetadata) ProtoMessage() {}
func (x *RawNebulaEncryptionMetadata) ProtoReflect() protoreflect.Message {
mi := &file_cert_v1_proto_msgTypes[3]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use RawNebulaEncryptionMetadata.ProtoReflect.Descriptor instead.
func (*RawNebulaEncryptionMetadata) Descriptor() ([]byte, []int) {
return file_cert_v1_proto_rawDescGZIP(), []int{3}
}
func (x *RawNebulaEncryptionMetadata) GetEncryptionAlgorithm() string {
if x != nil {
return x.EncryptionAlgorithm
}
return ""
}
func (x *RawNebulaEncryptionMetadata) GetArgon2Parameters() *RawNebulaArgon2Parameters {
if x != nil {
return x.Argon2Parameters
}
return nil
}
type RawNebulaArgon2Parameters struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Version int32 `protobuf:"varint,1,opt,name=version,proto3" json:"version,omitempty"` // rune in Go
Memory uint32 `protobuf:"varint,2,opt,name=memory,proto3" json:"memory,omitempty"`
Parallelism uint32 `protobuf:"varint,4,opt,name=parallelism,proto3" json:"parallelism,omitempty"` // uint8 in Go
Iterations uint32 `protobuf:"varint,3,opt,name=iterations,proto3" json:"iterations,omitempty"`
Salt []byte `protobuf:"bytes,5,opt,name=salt,proto3" json:"salt,omitempty"`
}
func (x *RawNebulaArgon2Parameters) Reset() {
*x = RawNebulaArgon2Parameters{}
if protoimpl.UnsafeEnabled {
mi := &file_cert_v1_proto_msgTypes[4]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *RawNebulaArgon2Parameters) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*RawNebulaArgon2Parameters) ProtoMessage() {}
func (x *RawNebulaArgon2Parameters) ProtoReflect() protoreflect.Message {
mi := &file_cert_v1_proto_msgTypes[4]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use RawNebulaArgon2Parameters.ProtoReflect.Descriptor instead.
func (*RawNebulaArgon2Parameters) Descriptor() ([]byte, []int) {
return file_cert_v1_proto_rawDescGZIP(), []int{4}
}
func (x *RawNebulaArgon2Parameters) GetVersion() int32 {
if x != nil {
return x.Version
}
return 0
}
func (x *RawNebulaArgon2Parameters) GetMemory() uint32 {
if x != nil {
return x.Memory
}
return 0
}
func (x *RawNebulaArgon2Parameters) GetParallelism() uint32 {
if x != nil {
return x.Parallelism
}
return 0
}
func (x *RawNebulaArgon2Parameters) GetIterations() uint32 {
if x != nil {
return x.Iterations
}
return 0
}
func (x *RawNebulaArgon2Parameters) GetSalt() []byte {
if x != nil {
return x.Salt
}
return nil
}
var File_cert_v1_proto protoreflect.FileDescriptor
var file_cert_v1_proto_rawDesc = []byte{
0x0a, 0x0d, 0x63, 0x65, 0x72, 0x74, 0x5f, 0x76, 0x31, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12,
0x04, 0x63, 0x65, 0x72, 0x74, 0x22, 0x71, 0x0a, 0x14, 0x52, 0x61, 0x77, 0x4e, 0x65, 0x62, 0x75,
0x6c, 0x61, 0x43, 0x65, 0x72, 0x74, 0x69, 0x66, 0x69, 0x63, 0x61, 0x74, 0x65, 0x12, 0x3b, 0x0a,
0x07, 0x44, 0x65, 0x74, 0x61, 0x69, 0x6c, 0x73, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x21,
0x2e, 0x63, 0x65, 0x72, 0x74, 0x2e, 0x52, 0x61, 0x77, 0x4e, 0x65, 0x62, 0x75, 0x6c, 0x61, 0x43,
0x65, 0x72, 0x74, 0x69, 0x66, 0x69, 0x63, 0x61, 0x74, 0x65, 0x44, 0x65, 0x74, 0x61, 0x69, 0x6c,
0x73, 0x52, 0x07, 0x44, 0x65, 0x74, 0x61, 0x69, 0x6c, 0x73, 0x12, 0x1c, 0x0a, 0x09, 0x53, 0x69,
0x67, 0x6e, 0x61, 0x74, 0x75, 0x72, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x09, 0x53,
0x69, 0x67, 0x6e, 0x61, 0x74, 0x75, 0x72, 0x65, 0x22, 0x9c, 0x02, 0x0a, 0x1b, 0x52, 0x61, 0x77,
0x4e, 0x65, 0x62, 0x75, 0x6c, 0x61, 0x43, 0x65, 0x72, 0x74, 0x69, 0x66, 0x69, 0x63, 0x61, 0x74,
0x65, 0x44, 0x65, 0x74, 0x61, 0x69, 0x6c, 0x73, 0x12, 0x12, 0x0a, 0x04, 0x4e, 0x61, 0x6d, 0x65,
0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x4e, 0x61, 0x6d, 0x65, 0x12, 0x10, 0x0a, 0x03,
0x49, 0x70, 0x73, 0x18, 0x02, 0x20, 0x03, 0x28, 0x0d, 0x52, 0x03, 0x49, 0x70, 0x73, 0x12, 0x18,
0x0a, 0x07, 0x53, 0x75, 0x62, 0x6e, 0x65, 0x74, 0x73, 0x18, 0x03, 0x20, 0x03, 0x28, 0x0d, 0x52,
0x07, 0x53, 0x75, 0x62, 0x6e, 0x65, 0x74, 0x73, 0x12, 0x16, 0x0a, 0x06, 0x47, 0x72, 0x6f, 0x75,
0x70, 0x73, 0x18, 0x04, 0x20, 0x03, 0x28, 0x09, 0x52, 0x06, 0x47, 0x72, 0x6f, 0x75, 0x70, 0x73,
0x12, 0x1c, 0x0a, 0x09, 0x4e, 0x6f, 0x74, 0x42, 0x65, 0x66, 0x6f, 0x72, 0x65, 0x18, 0x05, 0x20,
0x01, 0x28, 0x03, 0x52, 0x09, 0x4e, 0x6f, 0x74, 0x42, 0x65, 0x66, 0x6f, 0x72, 0x65, 0x12, 0x1a,
0x0a, 0x08, 0x4e, 0x6f, 0x74, 0x41, 0x66, 0x74, 0x65, 0x72, 0x18, 0x06, 0x20, 0x01, 0x28, 0x03,
0x52, 0x08, 0x4e, 0x6f, 0x74, 0x41, 0x66, 0x74, 0x65, 0x72, 0x12, 0x1c, 0x0a, 0x09, 0x50, 0x75,
0x62, 0x6c, 0x69, 0x63, 0x4b, 0x65, 0x79, 0x18, 0x07, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x09, 0x50,
0x75, 0x62, 0x6c, 0x69, 0x63, 0x4b, 0x65, 0x79, 0x12, 0x12, 0x0a, 0x04, 0x49, 0x73, 0x43, 0x41,
0x18, 0x08, 0x20, 0x01, 0x28, 0x08, 0x52, 0x04, 0x49, 0x73, 0x43, 0x41, 0x12, 0x16, 0x0a, 0x06,
0x49, 0x73, 0x73, 0x75, 0x65, 0x72, 0x18, 0x09, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x06, 0x49, 0x73,
0x73, 0x75, 0x65, 0x72, 0x12, 0x21, 0x0a, 0x05, 0x63, 0x75, 0x72, 0x76, 0x65, 0x18, 0x64, 0x20,
0x01, 0x28, 0x0e, 0x32, 0x0b, 0x2e, 0x63, 0x65, 0x72, 0x74, 0x2e, 0x43, 0x75, 0x72, 0x76, 0x65,
0x52, 0x05, 0x63, 0x75, 0x72, 0x76, 0x65, 0x22, 0x8b, 0x01, 0x0a, 0x16, 0x52, 0x61, 0x77, 0x4e,
0x65, 0x62, 0x75, 0x6c, 0x61, 0x45, 0x6e, 0x63, 0x72, 0x79, 0x70, 0x74, 0x65, 0x64, 0x44, 0x61,
0x74, 0x61, 0x12, 0x51, 0x0a, 0x12, 0x45, 0x6e, 0x63, 0x72, 0x79, 0x70, 0x74, 0x69, 0x6f, 0x6e,
0x4d, 0x65, 0x74, 0x61, 0x64, 0x61, 0x74, 0x61, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x21,
0x2e, 0x63, 0x65, 0x72, 0x74, 0x2e, 0x52, 0x61, 0x77, 0x4e, 0x65, 0x62, 0x75, 0x6c, 0x61, 0x45,
0x6e, 0x63, 0x72, 0x79, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x4d, 0x65, 0x74, 0x61, 0x64, 0x61, 0x74,
0x61, 0x52, 0x12, 0x45, 0x6e, 0x63, 0x72, 0x79, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x4d, 0x65, 0x74,
0x61, 0x64, 0x61, 0x74, 0x61, 0x12, 0x1e, 0x0a, 0x0a, 0x43, 0x69, 0x70, 0x68, 0x65, 0x72, 0x74,
0x65, 0x78, 0x74, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x0a, 0x43, 0x69, 0x70, 0x68, 0x65,
0x72, 0x74, 0x65, 0x78, 0x74, 0x22, 0x9c, 0x01, 0x0a, 0x1b, 0x52, 0x61, 0x77, 0x4e, 0x65, 0x62,
0x75, 0x6c, 0x61, 0x45, 0x6e, 0x63, 0x72, 0x79, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x4d, 0x65, 0x74,
0x61, 0x64, 0x61, 0x74, 0x61, 0x12, 0x30, 0x0a, 0x13, 0x45, 0x6e, 0x63, 0x72, 0x79, 0x70, 0x74,
0x69, 0x6f, 0x6e, 0x41, 0x6c, 0x67, 0x6f, 0x72, 0x69, 0x74, 0x68, 0x6d, 0x18, 0x01, 0x20, 0x01,
0x28, 0x09, 0x52, 0x13, 0x45, 0x6e, 0x63, 0x72, 0x79, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x41, 0x6c,
0x67, 0x6f, 0x72, 0x69, 0x74, 0x68, 0x6d, 0x12, 0x4b, 0x0a, 0x10, 0x41, 0x72, 0x67, 0x6f, 0x6e,
0x32, 0x50, 0x61, 0x72, 0x61, 0x6d, 0x65, 0x74, 0x65, 0x72, 0x73, 0x18, 0x02, 0x20, 0x01, 0x28,
0x0b, 0x32, 0x1f, 0x2e, 0x63, 0x65, 0x72, 0x74, 0x2e, 0x52, 0x61, 0x77, 0x4e, 0x65, 0x62, 0x75,
0x6c, 0x61, 0x41, 0x72, 0x67, 0x6f, 0x6e, 0x32, 0x50, 0x61, 0x72, 0x61, 0x6d, 0x65, 0x74, 0x65,
0x72, 0x73, 0x52, 0x10, 0x41, 0x72, 0x67, 0x6f, 0x6e, 0x32, 0x50, 0x61, 0x72, 0x61, 0x6d, 0x65,
0x74, 0x65, 0x72, 0x73, 0x22, 0xa3, 0x01, 0x0a, 0x19, 0x52, 0x61, 0x77, 0x4e, 0x65, 0x62, 0x75,
0x6c, 0x61, 0x41, 0x72, 0x67, 0x6f, 0x6e, 0x32, 0x50, 0x61, 0x72, 0x61, 0x6d, 0x65, 0x74, 0x65,
0x72, 0x73, 0x12, 0x18, 0x0a, 0x07, 0x76, 0x65, 0x72, 0x73, 0x69, 0x6f, 0x6e, 0x18, 0x01, 0x20,
0x01, 0x28, 0x05, 0x52, 0x07, 0x76, 0x65, 0x72, 0x73, 0x69, 0x6f, 0x6e, 0x12, 0x16, 0x0a, 0x06,
0x6d, 0x65, 0x6d, 0x6f, 0x72, 0x79, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0d, 0x52, 0x06, 0x6d, 0x65,
0x6d, 0x6f, 0x72, 0x79, 0x12, 0x20, 0x0a, 0x0b, 0x70, 0x61, 0x72, 0x61, 0x6c, 0x6c, 0x65, 0x6c,
0x69, 0x73, 0x6d, 0x18, 0x04, 0x20, 0x01, 0x28, 0x0d, 0x52, 0x0b, 0x70, 0x61, 0x72, 0x61, 0x6c,
0x6c, 0x65, 0x6c, 0x69, 0x73, 0x6d, 0x12, 0x1e, 0x0a, 0x0a, 0x69, 0x74, 0x65, 0x72, 0x61, 0x74,
0x69, 0x6f, 0x6e, 0x73, 0x18, 0x03, 0x20, 0x01, 0x28, 0x0d, 0x52, 0x0a, 0x69, 0x74, 0x65, 0x72,
0x61, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x12, 0x12, 0x0a, 0x04, 0x73, 0x61, 0x6c, 0x74, 0x18, 0x05,
0x20, 0x01, 0x28, 0x0c, 0x52, 0x04, 0x73, 0x61, 0x6c, 0x74, 0x2a, 0x21, 0x0a, 0x05, 0x43, 0x75,
0x72, 0x76, 0x65, 0x12, 0x0e, 0x0a, 0x0a, 0x43, 0x55, 0x52, 0x56, 0x45, 0x32, 0x35, 0x35, 0x31,
0x39, 0x10, 0x00, 0x12, 0x08, 0x0a, 0x04, 0x50, 0x32, 0x35, 0x36, 0x10, 0x01, 0x42, 0x20, 0x5a,
0x1e, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x73, 0x6c, 0x61, 0x63,
0x6b, 0x68, 0x71, 0x2f, 0x6e, 0x65, 0x62, 0x75, 0x6c, 0x61, 0x2f, 0x63, 0x65, 0x72, 0x74, 0x62,
0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
}
var (
file_cert_v1_proto_rawDescOnce sync.Once
file_cert_v1_proto_rawDescData = file_cert_v1_proto_rawDesc
)
func file_cert_v1_proto_rawDescGZIP() []byte {
file_cert_v1_proto_rawDescOnce.Do(func() {
file_cert_v1_proto_rawDescData = protoimpl.X.CompressGZIP(file_cert_v1_proto_rawDescData)
})
return file_cert_v1_proto_rawDescData
}
var file_cert_v1_proto_enumTypes = make([]protoimpl.EnumInfo, 1)
var file_cert_v1_proto_msgTypes = make([]protoimpl.MessageInfo, 5)
var file_cert_v1_proto_goTypes = []any{
(Curve)(0), // 0: cert.Curve
(*RawNebulaCertificate)(nil), // 1: cert.RawNebulaCertificate
(*RawNebulaCertificateDetails)(nil), // 2: cert.RawNebulaCertificateDetails
(*RawNebulaEncryptedData)(nil), // 3: cert.RawNebulaEncryptedData
(*RawNebulaEncryptionMetadata)(nil), // 4: cert.RawNebulaEncryptionMetadata
(*RawNebulaArgon2Parameters)(nil), // 5: cert.RawNebulaArgon2Parameters
}
var file_cert_v1_proto_depIdxs = []int32{
2, // 0: cert.RawNebulaCertificate.Details:type_name -> cert.RawNebulaCertificateDetails
0, // 1: cert.RawNebulaCertificateDetails.curve:type_name -> cert.Curve
4, // 2: cert.RawNebulaEncryptedData.EncryptionMetadata:type_name -> cert.RawNebulaEncryptionMetadata
5, // 3: cert.RawNebulaEncryptionMetadata.Argon2Parameters:type_name -> cert.RawNebulaArgon2Parameters
4, // [4:4] is the sub-list for method output_type
4, // [4:4] is the sub-list for method input_type
4, // [4:4] is the sub-list for extension type_name
4, // [4:4] is the sub-list for extension extendee
0, // [0:4] is the sub-list for field type_name
}
func init() { file_cert_v1_proto_init() }
func file_cert_v1_proto_init() {
if File_cert_v1_proto != nil {
return
}
if !protoimpl.UnsafeEnabled {
file_cert_v1_proto_msgTypes[0].Exporter = func(v any, i int) any {
switch v := v.(*RawNebulaCertificate); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_cert_v1_proto_msgTypes[1].Exporter = func(v any, i int) any {
switch v := v.(*RawNebulaCertificateDetails); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_cert_v1_proto_msgTypes[2].Exporter = func(v any, i int) any {
switch v := v.(*RawNebulaEncryptedData); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_cert_v1_proto_msgTypes[3].Exporter = func(v any, i int) any {
switch v := v.(*RawNebulaEncryptionMetadata); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_cert_v1_proto_msgTypes[4].Exporter = func(v any, i int) any {
switch v := v.(*RawNebulaArgon2Parameters); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
}
type x struct{}
out := protoimpl.TypeBuilder{
File: protoimpl.DescBuilder{
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: file_cert_v1_proto_rawDesc,
NumEnums: 1,
NumMessages: 5,
NumExtensions: 0,
NumServices: 0,
},
GoTypes: file_cert_v1_proto_goTypes,
DependencyIndexes: file_cert_v1_proto_depIdxs,
EnumInfos: file_cert_v1_proto_enumTypes,
MessageInfos: file_cert_v1_proto_msgTypes,
}.Build()
File_cert_v1_proto = out.File
file_cert_v1_proto_rawDesc = nil
file_cert_v1_proto_goTypes = nil
file_cert_v1_proto_depIdxs = nil
}


@@ -1,8 +1,15 @@
syntax = "proto3";
package cert;
option go_package = "github.com/slackhq/nebula/cert";
//import "google/protobuf/timestamp.proto";
enum Curve {
CURVE25519 = 0;
P256 = 1;
}
message RawNebulaCertificate {
RawNebulaCertificateDetails Details = 1;
bytes Signature = 2;
@@ -24,4 +31,24 @@ message RawNebulaCertificateDetails {
// sha-256 of the issuer certificate, if this field is blank the cert is self-signed
bytes Issuer = 9;
}
Curve curve = 100;
}
message RawNebulaEncryptedData {
RawNebulaEncryptionMetadata EncryptionMetadata = 1;
bytes Ciphertext = 2;
}
message RawNebulaEncryptionMetadata {
string EncryptionAlgorithm = 1;
RawNebulaArgon2Parameters Argon2Parameters = 2;
}
message RawNebulaArgon2Parameters {
int32 version = 1; // rune in Go
uint32 memory = 2;
uint32 parallelism = 4; // uint8 in Go
uint32 iterations = 3;
bytes salt = 5;
}
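These messages map one-to-one onto the generated Go types in cert.pb.go above. A minimal sketch (illustrative only, assuming it lives in package cert with proto being google.golang.org/protobuf/proto) of how an encrypted-key blob could be assembled, mirroring what EncryptAndMarshalSigningPrivateKey in cert/crypto.go further down actually does:

// Sketch only: the parameter values are placeholders, not recommendations.
func marshalEncryptedKeySketch(salt, ciphertext []byte) ([]byte, error) {
	return proto.Marshal(&RawNebulaEncryptedData{
		EncryptionMetadata: &RawNebulaEncryptionMetadata{
			EncryptionAlgorithm: "AES-256-GCM",
			Argon2Parameters: &RawNebulaArgon2Parameters{
				Version:     19, // argon2.Version, the "rune in Go" noted above
				Memory:      64 * 1024,
				Parallelism: 4, // carried as uint32 on the wire, uint8 in Go
				Iterations:  3,
				Salt:        salt,
			},
		},
		Ciphertext: ciphertext,
	})
}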

218
cert/cert_v1_test.go Normal file

@@ -0,0 +1,218 @@
package cert
import (
"fmt"
"net/netip"
"testing"
"time"
"github.com/slackhq/nebula/test"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"google.golang.org/protobuf/proto"
)
func TestCertificateV1_Marshal(t *testing.T) {
before := time.Now().Add(time.Second * -60).Round(time.Second)
after := time.Now().Add(time.Second * 60).Round(time.Second)
pubKey := []byte("1234567890abcedfghij1234567890ab")
nc := certificateV1{
details: detailsV1{
name: "testing",
networks: []netip.Prefix{
mustParsePrefixUnmapped("10.1.1.1/24"),
mustParsePrefixUnmapped("10.1.1.2/16"),
},
unsafeNetworks: []netip.Prefix{
mustParsePrefixUnmapped("9.1.1.2/24"),
mustParsePrefixUnmapped("9.1.1.3/16"),
},
groups: []string{"test-group1", "test-group2", "test-group3"},
notBefore: before,
notAfter: after,
publicKey: pubKey,
isCA: false,
issuer: "1234567890abcedfghij1234567890ab",
},
signature: []byte("1234567890abcedfghij1234567890ab"),
}
b, err := nc.Marshal()
require.NoError(t, err)
//t.Log("Cert size:", len(b))
nc2, err := unmarshalCertificateV1(b, nil)
require.NoError(t, err)
assert.Equal(t, Version1, nc.Version())
assert.Equal(t, Curve_CURVE25519, nc.Curve())
assert.Equal(t, nc.Signature(), nc2.Signature())
assert.Equal(t, nc.Name(), nc2.Name())
assert.Equal(t, nc.NotBefore(), nc2.NotBefore())
assert.Equal(t, nc.NotAfter(), nc2.NotAfter())
assert.Equal(t, nc.PublicKey(), nc2.PublicKey())
assert.Equal(t, nc.IsCA(), nc2.IsCA())
assert.Equal(t, nc.Networks(), nc2.Networks())
assert.Equal(t, nc.UnsafeNetworks(), nc2.UnsafeNetworks())
assert.Equal(t, nc.Groups(), nc2.Groups())
}
func TestCertificateV1_Expired(t *testing.T) {
nc := certificateV1{
details: detailsV1{
notBefore: time.Now().Add(time.Second * -60).Round(time.Second),
notAfter: time.Now().Add(time.Second * 60).Round(time.Second),
},
}
assert.True(t, nc.Expired(time.Now().Add(time.Hour)))
assert.True(t, nc.Expired(time.Now().Add(-time.Hour)))
assert.False(t, nc.Expired(time.Now()))
}
func TestCertificateV1_MarshalJSON(t *testing.T) {
time.Local = time.UTC
pubKey := []byte("1234567890abcedfghij1234567890ab")
nc := certificateV1{
details: detailsV1{
name: "testing",
networks: []netip.Prefix{
mustParsePrefixUnmapped("10.1.1.1/24"),
mustParsePrefixUnmapped("10.1.1.2/16"),
},
unsafeNetworks: []netip.Prefix{
mustParsePrefixUnmapped("9.1.1.2/24"),
mustParsePrefixUnmapped("9.1.1.3/16"),
},
groups: []string{"test-group1", "test-group2", "test-group3"},
notBefore: time.Date(1, 0, 0, 1, 0, 0, 0, time.UTC),
notAfter: time.Date(1, 0, 0, 2, 0, 0, 0, time.UTC),
publicKey: pubKey,
isCA: false,
issuer: "1234567890abcedfghij1234567890ab",
},
signature: []byte("1234567890abcedfghij1234567890ab"),
}
b, err := nc.MarshalJSON()
require.NoError(t, err)
assert.JSONEq(
t,
"{\"details\":{\"curve\":\"CURVE25519\",\"groups\":[\"test-group1\",\"test-group2\",\"test-group3\"],\"isCa\":false,\"issuer\":\"1234567890abcedfghij1234567890ab\",\"name\":\"testing\",\"networks\":[\"10.1.1.1/24\",\"10.1.1.2/16\"],\"notAfter\":\"0000-11-30T02:00:00Z\",\"notBefore\":\"0000-11-30T01:00:00Z\",\"publicKey\":\"313233343536373839306162636564666768696a313233343536373839306162\",\"unsafeNetworks\":[\"9.1.1.2/24\",\"9.1.1.3/16\"]},\"fingerprint\":\"3944c53d4267a229295b56cb2d27d459164c010ac97d655063ba421e0670f4ba\",\"signature\":\"313233343536373839306162636564666768696a313233343536373839306162\",\"version\":1}",
string(b),
)
}
func TestCertificateV1_VerifyPrivateKey(t *testing.T) {
ca, _, caKey, _ := NewTestCaCert(Version1, Curve_CURVE25519, time.Time{}, time.Time{}, nil, nil, nil)
err := ca.VerifyPrivateKey(Curve_CURVE25519, caKey)
require.NoError(t, err)
_, _, caKey2, _ := NewTestCaCert(Version1, Curve_CURVE25519, time.Time{}, time.Time{}, nil, nil, nil)
require.NoError(t, err)
err = ca.VerifyPrivateKey(Curve_CURVE25519, caKey2)
require.Error(t, err)
c, _, priv, _ := NewTestCert(Version1, Curve_CURVE25519, ca, caKey, "test", time.Time{}, time.Time{}, nil, nil, nil)
rawPriv, b, curve, err := UnmarshalPrivateKeyFromPEM(priv)
require.NoError(t, err)
assert.Empty(t, b)
assert.Equal(t, Curve_CURVE25519, curve)
err = c.VerifyPrivateKey(Curve_CURVE25519, rawPriv)
require.NoError(t, err)
_, priv2 := X25519Keypair()
err = c.VerifyPrivateKey(Curve_CURVE25519, priv2)
require.Error(t, err)
}
func TestCertificateV1_VerifyPrivateKeyP256(t *testing.T) {
ca, _, caKey, _ := NewTestCaCert(Version1, Curve_P256, time.Time{}, time.Time{}, nil, nil, nil)
err := ca.VerifyPrivateKey(Curve_P256, caKey)
require.NoError(t, err)
_, _, caKey2, _ := NewTestCaCert(Version1, Curve_P256, time.Time{}, time.Time{}, nil, nil, nil)
require.NoError(t, err)
err = ca.VerifyPrivateKey(Curve_P256, caKey2)
require.Error(t, err)
c, _, priv, _ := NewTestCert(Version1, Curve_P256, ca, caKey, "test", time.Time{}, time.Time{}, nil, nil, nil)
rawPriv, b, curve, err := UnmarshalPrivateKeyFromPEM(priv)
require.NoError(t, err)
assert.Empty(t, b)
assert.Equal(t, Curve_P256, curve)
err = c.VerifyPrivateKey(Curve_P256, rawPriv)
require.NoError(t, err)
_, priv2 := P256Keypair()
err = c.VerifyPrivateKey(Curve_P256, priv2)
require.Error(t, err)
}
// Ensure that upgrading the protobuf library does not change how certificates
// are marshalled, since this would break signature verification
func TestMarshalingCertificateV1Consistency(t *testing.T) {
before := time.Date(1970, time.January, 1, 1, 1, 1, 1, time.UTC)
after := time.Date(9999, time.January, 1, 1, 1, 1, 1, time.UTC)
pubKey := []byte("1234567890abcedfghij1234567890ab")
nc := certificateV1{
details: detailsV1{
name: "testing",
networks: []netip.Prefix{
mustParsePrefixUnmapped("10.1.1.2/16"),
mustParsePrefixUnmapped("10.1.1.1/24"),
},
unsafeNetworks: []netip.Prefix{
mustParsePrefixUnmapped("9.1.1.3/16"),
mustParsePrefixUnmapped("9.1.1.2/24"),
},
groups: []string{"test-group1", "test-group2", "test-group3"},
notBefore: before,
notAfter: after,
publicKey: pubKey,
isCA: false,
issuer: "1234567890abcedfghij1234567890ab",
},
signature: []byte("1234567890abcedfghij1234567890ab"),
}
b, err := nc.Marshal()
require.NoError(t, err)
assert.Equal(t, "0a8e010a0774657374696e671212828284508080fcff0f8182845080feffff0f1a12838284488080fcff0f8282844880feffff0f220b746573742d67726f757031220b746573742d67726f757032220b746573742d67726f75703328cd1c30cdb8ccf0af073a20313233343536373839306162636564666768696a3132333435363738393061624a081234567890abcedf1220313233343536373839306162636564666768696a313233343536373839306162", fmt.Sprintf("%x", b))
b, err = proto.Marshal(nc.getRawDetails())
require.NoError(t, err)
assert.Equal(t, "0a0774657374696e671212828284508080fcff0f8182845080feffff0f1a12838284488080fcff0f8282844880feffff0f220b746573742d67726f757031220b746573742d67726f757032220b746573742d67726f75703328cd1c30cdb8ccf0af073a20313233343536373839306162636564666768696a3132333435363738393061624a081234567890abcedf", fmt.Sprintf("%x", b))
}
func TestCertificateV1_Copy(t *testing.T) {
ca, _, caKey, _ := NewTestCaCert(Version1, Curve_CURVE25519, time.Now(), time.Now().Add(10*time.Minute), nil, nil, nil)
c, _, _, _ := NewTestCert(Version1, Curve_CURVE25519, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), nil, nil, nil)
cc := c.Copy()
test.AssertDeepCopyEqual(t, c, cc)
}
func TestUnmarshalCertificateV1(t *testing.T) {
// Test that we don't panic with an invalid certificate (#332)
data := []byte("\x98\x00\x00")
_, err := unmarshalCertificateV1(data, nil)
require.EqualError(t, err, "encoded Details was nil")
}
func appendByteSlices(b ...[]byte) []byte {
retSlice := []byte{}
for _, v := range b {
retSlice = append(retSlice, v...)
}
return retSlice
}
func mustParsePrefixUnmapped(s string) netip.Prefix {
prefix := netip.MustParsePrefix(s)
return netip.PrefixFrom(prefix.Addr().Unmap(), prefix.Bits())
}

37
cert/cert_v2.asn1 Normal file

@@ -0,0 +1,37 @@
Nebula DEFINITIONS AUTOMATIC TAGS ::= BEGIN
Name ::= UTF8String (SIZE (1..253))
Time ::= INTEGER (0..18446744073709551615) -- Seconds since unix epoch, uint64 maximum
Network ::= OCTET STRING (SIZE (5,17)) -- IP addresses are 4 or 16 bytes + 1 byte for the prefix length
Curve ::= ENUMERATED {
curve25519 (0),
p256 (1)
}
-- The maximum size of a certificate must not exceed 65536 bytes
Certificate ::= SEQUENCE {
details OCTET STRING,
curve Curve DEFAULT curve25519,
publicKey OCTET STRING,
-- signature(details + curve + publicKey) using the appropriate method for curve
signature OCTET STRING
}
Details ::= SEQUENCE {
name Name,
-- At least 1 ipv4 or ipv6 address must be present if isCA is false
networks SEQUENCE OF Network OPTIONAL,
unsafeNetworks SEQUENCE OF Network OPTIONAL,
groups SEQUENCE OF Name OPTIONAL,
isCA BOOLEAN DEFAULT false,
notBefore Time,
notAfter Time,
-- issuer is only required if isCA is false, if isCA is true then it must not be present
issuer OCTET STRING OPTIONAL,
...
-- New fields can be added below here
}
END
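The Network type above is what the Go implementation below produces through netip.Prefix.MarshalBinary: the 4 or 16 address bytes followed by a single prefix-length byte. A small sketch (fmt and net/netip assumed):

func networkEncodingSketch() {
	p := netip.MustParsePrefix("10.1.1.2/16")
	b, _ := p.MarshalBinary()
	fmt.Printf("%x\n", b) // 0a01010210: four address bytes plus prefix length 0x10 (16)
}

Wrapped as an OCTET STRING this becomes 04 05 0a 01 01 02 10, the same byte run that appears in the expected hex of TestCertificateV2_marshalForSigningStability further down.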

730
cert/cert_v2.go Normal file

@@ -0,0 +1,730 @@
package cert
import (
"bytes"
"crypto/ecdh"
"crypto/ecdsa"
"crypto/ed25519"
"crypto/elliptic"
"crypto/sha256"
"encoding/hex"
"encoding/json"
"encoding/pem"
"fmt"
"net/netip"
"slices"
"time"
"golang.org/x/crypto/cryptobyte"
"golang.org/x/crypto/cryptobyte/asn1"
"golang.org/x/crypto/curve25519"
)
const (
classConstructed = 0x20
classContextSpecific = 0x80
TagCertDetails = 0 | classConstructed | classContextSpecific
TagCertCurve = 1 | classContextSpecific
TagCertPublicKey = 2 | classContextSpecific
TagCertSignature = 3 | classContextSpecific
TagDetailsName = 0 | classContextSpecific
TagDetailsNetworks = 1 | classConstructed | classContextSpecific
TagDetailsUnsafeNetworks = 2 | classConstructed | classContextSpecific
TagDetailsGroups = 3 | classConstructed | classContextSpecific
TagDetailsIsCA = 4 | classContextSpecific
TagDetailsNotBefore = 5 | classContextSpecific
TagDetailsNotAfter = 6 | classContextSpecific
TagDetailsIssuer = 7 | classContextSpecific
)
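// For reference, the tag bytes these constants evaluate to are:
// TagCertDetails=0xA0, TagCertCurve=0x81, TagCertPublicKey=0x82, TagCertSignature=0x83,
// TagDetailsName=0x80, TagDetailsNetworks=0xA1, TagDetailsUnsafeNetworks=0xA2,
// TagDetailsGroups=0xA3, TagDetailsIsCA=0x84, TagDetailsNotBefore=0x85,
// TagDetailsNotAfter=0x86, TagDetailsIssuer=0x87. These are the a0/80/a1/... bytes
// visible in the expected hex of TestCertificateV2_marshalForSigningStability below.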
const (
// MaxCertificateSize is the maximum length a valid certificate can be
MaxCertificateSize = 65536
// MaxNameLength is limited to a maximum realistic DNS domain name to help facilitate DNS systems
MaxNameLength = 253
// MaxNetworkLength is the maximum length a network value can be.
// 16 bytes for an ipv6 address + 1 byte for the prefix length
MaxNetworkLength = 17
)
type certificateV2 struct {
details detailsV2
// RawDetails contains the entire asn.1 DER encoded Details struct
// This is to benefit forwards compatibility in signature checking.
// signature(RawDetails + Curve + PublicKey) == Signature
rawDetails []byte
curve Curve
publicKey []byte
signature []byte
}
type detailsV2 struct {
name string
networks []netip.Prefix // MUST BE SORTED
unsafeNetworks []netip.Prefix // MUST BE SORTED
groups []string
isCA bool
notBefore time.Time
notAfter time.Time
issuer string
}
func (c *certificateV2) Version() Version {
return Version2
}
func (c *certificateV2) Curve() Curve {
return c.curve
}
func (c *certificateV2) Groups() []string {
return c.details.groups
}
func (c *certificateV2) IsCA() bool {
return c.details.isCA
}
func (c *certificateV2) Issuer() string {
return c.details.issuer
}
func (c *certificateV2) Name() string {
return c.details.name
}
func (c *certificateV2) Networks() []netip.Prefix {
return c.details.networks
}
func (c *certificateV2) NotAfter() time.Time {
return c.details.notAfter
}
func (c *certificateV2) NotBefore() time.Time {
return c.details.notBefore
}
func (c *certificateV2) PublicKey() []byte {
return c.publicKey
}
func (c *certificateV2) Signature() []byte {
return c.signature
}
func (c *certificateV2) UnsafeNetworks() []netip.Prefix {
return c.details.unsafeNetworks
}
func (c *certificateV2) Fingerprint() (string, error) {
if len(c.rawDetails) == 0 {
return "", ErrMissingDetails
}
b := make([]byte, len(c.rawDetails)+1+len(c.publicKey)+len(c.signature))
copy(b, c.rawDetails)
b[len(c.rawDetails)] = byte(c.curve)
copy(b[len(c.rawDetails)+1:], c.publicKey)
copy(b[len(c.rawDetails)+1+len(c.publicKey):], c.signature)
sum := sha256.Sum256(b)
return hex.EncodeToString(sum[:]), nil
}
func (c *certificateV2) CheckSignature(key []byte) bool {
if len(c.rawDetails) == 0 {
return false
}
b := make([]byte, len(c.rawDetails)+1+len(c.publicKey))
copy(b, c.rawDetails)
b[len(c.rawDetails)] = byte(c.curve)
copy(b[len(c.rawDetails)+1:], c.publicKey)
switch c.curve {
case Curve_CURVE25519:
return ed25519.Verify(key, b, c.signature)
case Curve_P256:
x, y := elliptic.Unmarshal(elliptic.P256(), key)
pubKey := &ecdsa.PublicKey{Curve: elliptic.P256(), X: x, Y: y}
hashed := sha256.Sum256(b)
return ecdsa.VerifyASN1(pubKey, hashed[:], c.signature)
default:
return false
}
}
func (c *certificateV2) Expired(t time.Time) bool {
return c.details.notBefore.After(t) || c.details.notAfter.Before(t)
}
func (c *certificateV2) VerifyPrivateKey(curve Curve, key []byte) error {
if curve != c.curve {
return ErrPublicPrivateCurveMismatch
}
if c.details.isCA {
switch curve {
case Curve_CURVE25519:
// the call to PublicKey below will panic slice bounds out of range otherwise
if len(key) != ed25519.PrivateKeySize {
return ErrInvalidPrivateKey
}
if !ed25519.PublicKey(c.publicKey).Equal(ed25519.PrivateKey(key).Public()) {
return ErrPublicPrivateKeyMismatch
}
case Curve_P256:
privkey, err := ecdh.P256().NewPrivateKey(key)
if err != nil {
return ErrInvalidPrivateKey
}
pub := privkey.PublicKey().Bytes()
if !bytes.Equal(pub, c.publicKey) {
return ErrPublicPrivateKeyMismatch
}
default:
return fmt.Errorf("invalid curve: %s", curve)
}
return nil
}
var pub []byte
switch curve {
case Curve_CURVE25519:
var err error
pub, err = curve25519.X25519(key, curve25519.Basepoint)
if err != nil {
return ErrInvalidPrivateKey
}
case Curve_P256:
privkey, err := ecdh.P256().NewPrivateKey(key)
if err != nil {
return ErrInvalidPrivateKey
}
pub = privkey.PublicKey().Bytes()
default:
return fmt.Errorf("invalid curve: %s", curve)
}
if !bytes.Equal(pub, c.publicKey) {
return ErrPublicPrivateKeyMismatch
}
return nil
}
func (c *certificateV2) String() string {
mb, err := c.marshalJSON()
if err != nil {
return fmt.Sprintf("<error marshalling certificate: %v>", err)
}
b, err := json.MarshalIndent(mb, "", "\t")
if err != nil {
return fmt.Sprintf("<error marshalling certificate: %v>", err)
}
return string(b)
}
func (c *certificateV2) MarshalForHandshakes() ([]byte, error) {
if c.rawDetails == nil {
return nil, ErrEmptyRawDetails
}
var b cryptobyte.Builder
// Outermost certificate
b.AddASN1(asn1.SEQUENCE, func(b *cryptobyte.Builder) {
// Add the cert details which is already marshalled
b.AddBytes(c.rawDetails)
// Skipping the curve and public key since those come across in a different part of the handshake
// Add the signature
b.AddASN1(TagCertSignature, func(b *cryptobyte.Builder) {
b.AddBytes(c.signature)
})
})
return b.Bytes()
}
func (c *certificateV2) Marshal() ([]byte, error) {
if c.rawDetails == nil {
return nil, ErrEmptyRawDetails
}
var b cryptobyte.Builder
// Outermost certificate
b.AddASN1(asn1.SEQUENCE, func(b *cryptobyte.Builder) {
// Add the cert details which is already marshalled
b.AddBytes(c.rawDetails)
// Add the curve only if its not the default value
if c.curve != Curve_CURVE25519 {
b.AddASN1(TagCertCurve, func(b *cryptobyte.Builder) {
b.AddBytes([]byte{byte(c.curve)})
})
}
// Add the public key if it is not empty
if c.publicKey != nil {
b.AddASN1(TagCertPublicKey, func(b *cryptobyte.Builder) {
b.AddBytes(c.publicKey)
})
}
// Add the signature
b.AddASN1(TagCertSignature, func(b *cryptobyte.Builder) {
b.AddBytes(c.signature)
})
})
return b.Bytes()
}
func (c *certificateV2) MarshalPEM() ([]byte, error) {
b, err := c.Marshal()
if err != nil {
return nil, err
}
return pem.EncodeToMemory(&pem.Block{Type: CertificateV2Banner, Bytes: b}), nil
}
func (c *certificateV2) MarshalJSON() ([]byte, error) {
b, err := c.marshalJSON()
if err != nil {
return nil, err
}
return json.Marshal(b)
}
func (c *certificateV2) marshalJSON() (m, error) {
fp, err := c.Fingerprint()
if err != nil {
return nil, err
}
return m{
"details": m{
"name": c.details.name,
"networks": c.details.networks,
"unsafeNetworks": c.details.unsafeNetworks,
"groups": c.details.groups,
"notBefore": c.details.notBefore,
"notAfter": c.details.notAfter,
"isCa": c.details.isCA,
"issuer": c.details.issuer,
},
"version": Version2,
"publicKey": fmt.Sprintf("%x", c.publicKey),
"curve": c.curve.String(),
"fingerprint": fp,
"signature": fmt.Sprintf("%x", c.Signature()),
}, nil
}
func (c *certificateV2) Copy() Certificate {
nc := &certificateV2{
details: detailsV2{
name: c.details.name,
notBefore: c.details.notBefore,
notAfter: c.details.notAfter,
isCA: c.details.isCA,
issuer: c.details.issuer,
},
curve: c.curve,
publicKey: make([]byte, len(c.publicKey)),
signature: make([]byte, len(c.signature)),
rawDetails: make([]byte, len(c.rawDetails)),
}
if c.details.groups != nil {
nc.details.groups = make([]string, len(c.details.groups))
copy(nc.details.groups, c.details.groups)
}
if c.details.networks != nil {
nc.details.networks = make([]netip.Prefix, len(c.details.networks))
copy(nc.details.networks, c.details.networks)
}
if c.details.unsafeNetworks != nil {
nc.details.unsafeNetworks = make([]netip.Prefix, len(c.details.unsafeNetworks))
copy(nc.details.unsafeNetworks, c.details.unsafeNetworks)
}
copy(nc.rawDetails, c.rawDetails)
copy(nc.signature, c.signature)
copy(nc.publicKey, c.publicKey)
return nc
}
func (c *certificateV2) fromTBSCertificate(t *TBSCertificate) error {
c.details = detailsV2{
name: t.Name,
networks: t.Networks,
unsafeNetworks: t.UnsafeNetworks,
groups: t.Groups,
isCA: t.IsCA,
notBefore: t.NotBefore,
notAfter: t.NotAfter,
issuer: t.issuer,
}
c.curve = t.Curve
c.publicKey = t.PublicKey
return c.validate()
}
func (c *certificateV2) validate() error {
// Empty names are allowed
if len(c.publicKey) == 0 {
return ErrInvalidPublicKey
}
if !c.details.isCA && len(c.details.networks) == 0 {
return NewErrInvalidCertificateProperties("non-CA certificate must contain at least 1 network")
}
hasV4Networks := false
hasV6Networks := false
for _, network := range c.details.networks {
if !network.IsValid() || !network.Addr().IsValid() {
return NewErrInvalidCertificateProperties("invalid network: %s", network)
}
if network.Addr().IsUnspecified() {
return NewErrInvalidCertificateProperties("non-CA certificates must not use the zero address as a network: %s", network)
}
if network.Addr().Zone() != "" {
return NewErrInvalidCertificateProperties("networks may not contain zones: %s", network)
}
if network.Addr().Is4In6() {
return NewErrInvalidCertificateProperties("4in6 networks are not allowed: %s", network)
}
hasV4Networks = hasV4Networks || network.Addr().Is4()
hasV6Networks = hasV6Networks || network.Addr().Is6()
}
slices.SortFunc(c.details.networks, comparePrefix)
err := findDuplicatePrefix(c.details.networks)
if err != nil {
return err
}
for _, network := range c.details.unsafeNetworks {
if !network.IsValid() || !network.Addr().IsValid() {
return NewErrInvalidCertificateProperties("invalid unsafe network: %s", network)
}
if network.Addr().Zone() != "" {
return NewErrInvalidCertificateProperties("unsafe networks may not contain zones: %s", network)
}
if !c.details.isCA {
if network.Addr().Is6() {
if !hasV6Networks {
return NewErrInvalidCertificateProperties("IPv6 unsafe networks require an IPv6 address assignment: %s", network)
}
} else if network.Addr().Is4() {
if !hasV4Networks {
return NewErrInvalidCertificateProperties("IPv4 unsafe networks require an IPv4 address assignment: %s", network)
}
}
}
}
slices.SortFunc(c.details.unsafeNetworks, comparePrefix)
err = findDuplicatePrefix(c.details.unsafeNetworks)
if err != nil {
return err
}
return nil
}
func (c *certificateV2) marshalForSigning() ([]byte, error) {
d, err := c.details.Marshal()
if err != nil {
return nil, fmt.Errorf("marshalling certificate details failed: %w", err)
}
c.rawDetails = d
b := make([]byte, len(c.rawDetails)+1+len(c.publicKey))
copy(b, c.rawDetails)
b[len(c.rawDetails)] = byte(c.curve)
copy(b[len(c.rawDetails)+1:], c.publicKey)
return b, nil
}
func (c *certificateV2) setSignature(b []byte) error {
if len(b) == 0 {
return ErrEmptySignature
}
c.signature = b
return nil
}
func (d *detailsV2) Marshal() ([]byte, error) {
var b cryptobyte.Builder
var err error
// Details are a structure
b.AddASN1(TagCertDetails, func(b *cryptobyte.Builder) {
// Add the name
b.AddASN1(TagDetailsName, func(b *cryptobyte.Builder) {
b.AddBytes([]byte(d.name))
})
// Add the networks if any exist
if len(d.networks) > 0 {
b.AddASN1(TagDetailsNetworks, func(b *cryptobyte.Builder) {
for _, n := range d.networks {
sb, innerErr := n.MarshalBinary()
if innerErr != nil {
// MarshalBinary never returns an error
err = fmt.Errorf("unable to marshal network: %w", innerErr)
return
}
b.AddASN1OctetString(sb)
}
})
}
// Add the unsafe networks if any exist
if len(d.unsafeNetworks) > 0 {
b.AddASN1(TagDetailsUnsafeNetworks, func(b *cryptobyte.Builder) {
for _, n := range d.unsafeNetworks {
sb, innerErr := n.MarshalBinary()
if innerErr != nil {
// MarshalBinary never returns an error
err = fmt.Errorf("unable to marshal unsafe network: %w", innerErr)
return
}
b.AddASN1OctetString(sb)
}
})
}
// Add groups if any exist
if len(d.groups) > 0 {
b.AddASN1(TagDetailsGroups, func(b *cryptobyte.Builder) {
for _, group := range d.groups {
b.AddASN1(asn1.UTF8String, func(b *cryptobyte.Builder) {
b.AddBytes([]byte(group))
})
}
})
}
// Add IsCA only if true
if d.isCA {
b.AddASN1(TagDetailsIsCA, func(b *cryptobyte.Builder) {
b.AddUint8(0xff)
})
}
// Add not before
b.AddASN1Int64WithTag(d.notBefore.Unix(), TagDetailsNotBefore)
// Add not after
b.AddASN1Int64WithTag(d.notAfter.Unix(), TagDetailsNotAfter)
// Add the issuer if present
if d.issuer != "" {
issuerBytes, innerErr := hex.DecodeString(d.issuer)
if innerErr != nil {
err = fmt.Errorf("failed to decode issuer: %w", innerErr)
return
}
b.AddASN1(TagDetailsIssuer, func(b *cryptobyte.Builder) {
b.AddBytes(issuerBytes)
})
}
})
if err != nil {
return nil, err
}
return b.Bytes()
}
func unmarshalCertificateV2(b []byte, publicKey []byte, curve Curve) (*certificateV2, error) {
l := len(b)
if l == 0 || l > MaxCertificateSize {
return nil, ErrBadFormat
}
input := cryptobyte.String(b)
// Open the envelope
if !input.ReadASN1(&input, asn1.SEQUENCE) || input.Empty() {
return nil, ErrBadFormat
}
// Grab the cert details, we need to preserve the tag and length
var rawDetails cryptobyte.String
if !input.ReadASN1Element(&rawDetails, TagCertDetails) || rawDetails.Empty() {
return nil, ErrBadFormat
}
//Maybe grab the curve
var rawCurve byte
if !readOptionalASN1Byte(&input, &rawCurve, TagCertCurve, byte(curve)) {
return nil, ErrBadFormat
}
curve = Curve(rawCurve)
// Maybe grab the public key
var rawPublicKey cryptobyte.String
if len(publicKey) > 0 {
rawPublicKey = publicKey
} else if !input.ReadOptionalASN1(&rawPublicKey, nil, TagCertPublicKey) {
return nil, ErrBadFormat
}
if len(rawPublicKey) == 0 {
return nil, ErrBadFormat
}
// Grab the signature
var rawSignature cryptobyte.String
if !input.ReadASN1(&rawSignature, TagCertSignature) || rawSignature.Empty() {
return nil, ErrBadFormat
}
// Finally unmarshal the details
details, err := unmarshalDetails(rawDetails)
if err != nil {
return nil, err
}
c := &certificateV2{
details: details,
rawDetails: rawDetails,
curve: curve,
publicKey: rawPublicKey,
signature: rawSignature,
}
err = c.validate()
if err != nil {
return nil, err
}
return c, nil
}
func unmarshalDetails(b cryptobyte.String) (detailsV2, error) {
// Open the envelope
if !b.ReadASN1(&b, TagCertDetails) || b.Empty() {
return detailsV2{}, ErrBadFormat
}
// Read the name
var name cryptobyte.String
if !b.ReadASN1(&name, TagDetailsName) || name.Empty() || len(name) > MaxNameLength {
return detailsV2{}, ErrBadFormat
}
// Read the network addresses
var subString cryptobyte.String
var found bool
if !b.ReadOptionalASN1(&subString, &found, TagDetailsNetworks) {
return detailsV2{}, ErrBadFormat
}
var networks []netip.Prefix
var val cryptobyte.String
if found {
for !subString.Empty() {
if !subString.ReadASN1(&val, asn1.OCTET_STRING) || val.Empty() || len(val) > MaxNetworkLength {
return detailsV2{}, ErrBadFormat
}
var n netip.Prefix
if err := n.UnmarshalBinary(val); err != nil {
return detailsV2{}, ErrBadFormat
}
networks = append(networks, n)
}
}
// Read out any unsafe networks
if !b.ReadOptionalASN1(&subString, &found, TagDetailsUnsafeNetworks) {
return detailsV2{}, ErrBadFormat
}
var unsafeNetworks []netip.Prefix
if found {
for !subString.Empty() {
if !subString.ReadASN1(&val, asn1.OCTET_STRING) || val.Empty() || len(val) > MaxNetworkLength {
return detailsV2{}, ErrBadFormat
}
var n netip.Prefix
if err := n.UnmarshalBinary(val); err != nil {
return detailsV2{}, ErrBadFormat
}
unsafeNetworks = append(unsafeNetworks, n)
}
}
// Read out any groups
if !b.ReadOptionalASN1(&subString, &found, TagDetailsGroups) {
return detailsV2{}, ErrBadFormat
}
var groups []string
if found {
for !subString.Empty() {
if !subString.ReadASN1(&val, asn1.UTF8String) || val.Empty() {
return detailsV2{}, ErrBadFormat
}
groups = append(groups, string(val))
}
}
// Read out IsCA
var isCa bool
if !readOptionalASN1Boolean(&b, &isCa, TagDetailsIsCA, false) {
return detailsV2{}, ErrBadFormat
}
// Read not before and not after
var notBefore int64
if !b.ReadASN1Int64WithTag(&notBefore, TagDetailsNotBefore) {
return detailsV2{}, ErrBadFormat
}
var notAfter int64
if !b.ReadASN1Int64WithTag(&notAfter, TagDetailsNotAfter) {
return detailsV2{}, ErrBadFormat
}
// Read issuer
var issuer cryptobyte.String
if !b.ReadOptionalASN1(&issuer, nil, TagDetailsIssuer) {
return detailsV2{}, ErrBadFormat
}
return detailsV2{
name: string(name),
networks: networks,
unsafeNetworks: unsafeNetworks,
groups: groups,
isCA: isCa,
notBefore: time.Unix(notBefore, 0),
notAfter: time.Unix(notAfter, 0),
issuer: hex.EncodeToString(issuer),
}, nil
}
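MarshalForHandshakes deliberately omits the curve and public key, and unmarshalCertificateV2 accepts both as arguments so they can be filled back in from whatever carried them. A hedged sketch of how the two might be paired; peerKey and peerCurve are stand-ins for values learned during the handshake, not names from this changeset:

func rehydrateHandshakeCertSketch(c *certificateV2, peerKey []byte, peerCurve Curve) (*certificateV2, error) {
	wire, err := c.MarshalForHandshakes() // rawDetails + signature only
	if err != nil {
		return nil, err
	}
	// The omitted public key and curve are supplied out of band.
	return unmarshalCertificateV2(wire, peerKey, peerCurve)
}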

267
cert/cert_v2_test.go Normal file

@@ -0,0 +1,267 @@
package cert
import (
"crypto/ed25519"
"crypto/rand"
"encoding/hex"
"net/netip"
"slices"
"testing"
"time"
"github.com/slackhq/nebula/test"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestCertificateV2_Marshal(t *testing.T) {
before := time.Now().Add(time.Second * -60).Round(time.Second)
after := time.Now().Add(time.Second * 60).Round(time.Second)
pubKey := []byte("1234567890abcedfghij1234567890ab")
nc := certificateV2{
details: detailsV2{
name: "testing",
networks: []netip.Prefix{
mustParsePrefixUnmapped("10.1.1.2/16"),
mustParsePrefixUnmapped("10.1.1.1/24"),
},
unsafeNetworks: []netip.Prefix{
mustParsePrefixUnmapped("9.1.1.3/16"),
mustParsePrefixUnmapped("9.1.1.2/24"),
},
groups: []string{"test-group1", "test-group2", "test-group3"},
notBefore: before,
notAfter: after,
isCA: false,
issuer: "1234567890abcdef1234567890abcdef",
},
signature: []byte("1234567890abcdef1234567890abcdef"),
publicKey: pubKey,
}
db, err := nc.details.Marshal()
require.NoError(t, err)
nc.rawDetails = db
b, err := nc.Marshal()
require.NoError(t, err)
//t.Log("Cert size:", len(b))
nc2, err := unmarshalCertificateV2(b, nil, Curve_CURVE25519)
require.NoError(t, err)
assert.Equal(t, Version2, nc.Version())
assert.Equal(t, Curve_CURVE25519, nc.Curve())
assert.Equal(t, nc.Signature(), nc2.Signature())
assert.Equal(t, nc.Name(), nc2.Name())
assert.Equal(t, nc.NotBefore(), nc2.NotBefore())
assert.Equal(t, nc.NotAfter(), nc2.NotAfter())
assert.Equal(t, nc.PublicKey(), nc2.PublicKey())
assert.Equal(t, nc.IsCA(), nc2.IsCA())
assert.Equal(t, nc.Issuer(), nc2.Issuer())
// unmarshalling will sort networks and unsafeNetworks, we need to do the same
// but first make sure it fails
assert.NotEqual(t, nc.Networks(), nc2.Networks())
assert.NotEqual(t, nc.UnsafeNetworks(), nc2.UnsafeNetworks())
slices.SortFunc(nc.details.networks, comparePrefix)
slices.SortFunc(nc.details.unsafeNetworks, comparePrefix)
assert.Equal(t, nc.Networks(), nc2.Networks())
assert.Equal(t, nc.UnsafeNetworks(), nc2.UnsafeNetworks())
assert.Equal(t, nc.Groups(), nc2.Groups())
}
func TestCertificateV2_Expired(t *testing.T) {
nc := certificateV2{
details: detailsV2{
notBefore: time.Now().Add(time.Second * -60).Round(time.Second),
notAfter: time.Now().Add(time.Second * 60).Round(time.Second),
},
}
assert.True(t, nc.Expired(time.Now().Add(time.Hour)))
assert.True(t, nc.Expired(time.Now().Add(-time.Hour)))
assert.False(t, nc.Expired(time.Now()))
}
func TestCertificateV2_MarshalJSON(t *testing.T) {
time.Local = time.UTC
pubKey := []byte("1234567890abcedf1234567890abcedf")
nc := certificateV2{
details: detailsV2{
name: "testing",
networks: []netip.Prefix{
mustParsePrefixUnmapped("10.1.1.1/24"),
mustParsePrefixUnmapped("10.1.1.2/16"),
},
unsafeNetworks: []netip.Prefix{
mustParsePrefixUnmapped("9.1.1.2/24"),
mustParsePrefixUnmapped("9.1.1.3/16"),
},
groups: []string{"test-group1", "test-group2", "test-group3"},
notBefore: time.Date(1, 0, 0, 1, 0, 0, 0, time.UTC),
notAfter: time.Date(1, 0, 0, 2, 0, 0, 0, time.UTC),
isCA: false,
issuer: "1234567890abcedf1234567890abcedf",
},
publicKey: pubKey,
signature: []byte("1234567890abcedf1234567890abcedf1234567890abcedf1234567890abcedf"),
}
b, err := nc.MarshalJSON()
require.ErrorIs(t, err, ErrMissingDetails)
rd, err := nc.details.Marshal()
require.NoError(t, err)
nc.rawDetails = rd
b, err = nc.MarshalJSON()
require.NoError(t, err)
assert.JSONEq(
t,
"{\"curve\":\"CURVE25519\",\"details\":{\"groups\":[\"test-group1\",\"test-group2\",\"test-group3\"],\"isCa\":false,\"issuer\":\"1234567890abcedf1234567890abcedf\",\"name\":\"testing\",\"networks\":[\"10.1.1.1/24\",\"10.1.1.2/16\"],\"notAfter\":\"0000-11-30T02:00:00Z\",\"notBefore\":\"0000-11-30T01:00:00Z\",\"unsafeNetworks\":[\"9.1.1.2/24\",\"9.1.1.3/16\"]},\"fingerprint\":\"152d9a7400c1e001cb76cffd035215ebb351f69eeb797f7f847dd086e15e56dd\",\"publicKey\":\"3132333435363738393061626365646631323334353637383930616263656466\",\"signature\":\"31323334353637383930616263656466313233343536373839306162636564663132333435363738393061626365646631323334353637383930616263656466\",\"version\":2}",
string(b),
)
}
func TestCertificateV2_VerifyPrivateKey(t *testing.T) {
ca, _, caKey, _ := NewTestCaCert(Version2, Curve_CURVE25519, time.Time{}, time.Time{}, nil, nil, nil)
err := ca.VerifyPrivateKey(Curve_CURVE25519, caKey)
require.NoError(t, err)
err = ca.VerifyPrivateKey(Curve_CURVE25519, caKey[:16])
require.ErrorIs(t, err, ErrInvalidPrivateKey)
_, caKey2, err := ed25519.GenerateKey(rand.Reader)
require.NoError(t, err)
err = ca.VerifyPrivateKey(Curve_CURVE25519, caKey2)
require.ErrorIs(t, err, ErrPublicPrivateKeyMismatch)
c, _, priv, _ := NewTestCert(Version2, Curve_CURVE25519, ca, caKey, "test", time.Time{}, time.Time{}, nil, nil, nil)
rawPriv, b, curve, err := UnmarshalPrivateKeyFromPEM(priv)
require.NoError(t, err)
assert.Empty(t, b)
assert.Equal(t, Curve_CURVE25519, curve)
err = c.VerifyPrivateKey(Curve_CURVE25519, rawPriv)
require.NoError(t, err)
_, priv2 := X25519Keypair()
err = c.VerifyPrivateKey(Curve_P256, priv2)
require.ErrorIs(t, err, ErrPublicPrivateCurveMismatch)
err = c.VerifyPrivateKey(Curve_CURVE25519, priv2)
require.ErrorIs(t, err, ErrPublicPrivateKeyMismatch)
err = c.VerifyPrivateKey(Curve_CURVE25519, priv2[:16])
require.ErrorIs(t, err, ErrInvalidPrivateKey)
ac, ok := c.(*certificateV2)
require.True(t, ok)
ac.curve = Curve(99)
err = c.VerifyPrivateKey(Curve(99), priv2)
require.EqualError(t, err, "invalid curve: 99")
ca2, _, caKey2, _ := NewTestCaCert(Version2, Curve_P256, time.Time{}, time.Time{}, nil, nil, nil)
err = ca.VerifyPrivateKey(Curve_CURVE25519, caKey)
require.NoError(t, err)
err = ca2.VerifyPrivateKey(Curve_P256, caKey2[:16])
require.ErrorIs(t, err, ErrInvalidPrivateKey)
c, _, priv, _ = NewTestCert(Version2, Curve_P256, ca2, caKey2, "test", time.Time{}, time.Time{}, nil, nil, nil)
rawPriv, b, curve, err = UnmarshalPrivateKeyFromPEM(priv)
err = c.VerifyPrivateKey(Curve_P256, priv[:16])
require.ErrorIs(t, err, ErrInvalidPrivateKey)
err = c.VerifyPrivateKey(Curve_P256, priv)
require.ErrorIs(t, err, ErrInvalidPrivateKey)
aCa, ok := ca2.(*certificateV2)
require.True(t, ok)
aCa.curve = Curve(99)
err = aCa.VerifyPrivateKey(Curve(99), priv2)
require.EqualError(t, err, "invalid curve: 99")
}
func TestCertificateV2_VerifyPrivateKeyP256(t *testing.T) {
ca, _, caKey, _ := NewTestCaCert(Version2, Curve_P256, time.Time{}, time.Time{}, nil, nil, nil)
err := ca.VerifyPrivateKey(Curve_P256, caKey)
require.NoError(t, err)
_, _, caKey2, _ := NewTestCaCert(Version2, Curve_P256, time.Time{}, time.Time{}, nil, nil, nil)
require.NoError(t, err)
err = ca.VerifyPrivateKey(Curve_P256, caKey2)
require.Error(t, err)
c, _, priv, _ := NewTestCert(Version2, Curve_P256, ca, caKey, "test", time.Time{}, time.Time{}, nil, nil, nil)
rawPriv, b, curve, err := UnmarshalPrivateKeyFromPEM(priv)
require.NoError(t, err)
assert.Empty(t, b)
assert.Equal(t, Curve_P256, curve)
err = c.VerifyPrivateKey(Curve_P256, rawPriv)
require.NoError(t, err)
_, priv2 := P256Keypair()
err = c.VerifyPrivateKey(Curve_P256, priv2)
require.Error(t, err)
}
func TestCertificateV2_Copy(t *testing.T) {
ca, _, caKey, _ := NewTestCaCert(Version2, Curve_CURVE25519, time.Now(), time.Now().Add(10*time.Minute), nil, nil, nil)
c, _, _, _ := NewTestCert(Version2, Curve_CURVE25519, ca, caKey, "test", time.Now(), time.Now().Add(5*time.Minute), nil, nil, nil)
cc := c.Copy()
test.AssertDeepCopyEqual(t, c, cc)
}
func TestUnmarshalCertificateV2(t *testing.T) {
data := []byte("\x98\x00\x00")
_, err := unmarshalCertificateV2(data, nil, Curve_CURVE25519)
require.EqualError(t, err, "bad wire format")
}
func TestCertificateV2_marshalForSigningStability(t *testing.T) {
before := time.Date(1996, time.May, 5, 0, 0, 0, 0, time.UTC)
after := before.Add(time.Second * 60).Round(time.Second)
pubKey := []byte("1234567890abcedfghij1234567890ab")
nc := certificateV2{
details: detailsV2{
name: "testing",
networks: []netip.Prefix{
mustParsePrefixUnmapped("10.1.1.2/16"),
mustParsePrefixUnmapped("10.1.1.1/24"),
},
unsafeNetworks: []netip.Prefix{
mustParsePrefixUnmapped("9.1.1.3/16"),
mustParsePrefixUnmapped("9.1.1.2/24"),
},
groups: []string{"test-group1", "test-group2", "test-group3"},
notBefore: before,
notAfter: after,
isCA: false,
issuer: "1234567890abcdef1234567890abcdef",
},
signature: []byte("1234567890abcdef1234567890abcdef"),
publicKey: pubKey,
}
const expectedRawDetailsStr = "a070800774657374696e67a10e04050a0101021004050a01010118a20e0405090101031004050901010218a3270c0b746573742d67726f7570310c0b746573742d67726f7570320c0b746573742d67726f7570338504318bef808604318befbc87101234567890abcdef1234567890abcdef"
expectedRawDetails, err := hex.DecodeString(expectedRawDetailsStr)
require.NoError(t, err)
db, err := nc.details.Marshal()
require.NoError(t, err)
assert.Equal(t, expectedRawDetails, db)
expectedForSigning, err := hex.DecodeString(expectedRawDetailsStr + "00313233343536373839306162636564666768696a313233343536373839306162")
b, err := nc.marshalForSigning()
require.NoError(t, err)
assert.Equal(t, expectedForSigning, b)
}

300
cert/crypto.go Normal file

@@ -0,0 +1,300 @@
package cert
import (
"crypto/aes"
"crypto/cipher"
"crypto/ed25519"
"crypto/rand"
"encoding/pem"
"fmt"
"io"
"math"
"golang.org/x/crypto/argon2"
"google.golang.org/protobuf/proto"
)
type NebulaEncryptedData struct {
EncryptionMetadata NebulaEncryptionMetadata
Ciphertext []byte
}
type NebulaEncryptionMetadata struct {
EncryptionAlgorithm string
Argon2Parameters Argon2Parameters
}
// Argon2Parameters KDF factors
type Argon2Parameters struct {
version rune
Memory uint32 // KiB
Parallelism uint8
Iterations uint32
salt []byte
}
// NewArgon2Parameters Returns a new Argon2Parameters object with current version set
func NewArgon2Parameters(memory uint32, parallelism uint8, iterations uint32) *Argon2Parameters {
return &Argon2Parameters{
version: argon2.Version,
Memory: memory, // KiB
Parallelism: parallelism,
Iterations: iterations,
}
}
// Encrypts data using AES-256-GCM and the Argon2id key derivation function
func aes256Encrypt(passphrase []byte, kdfParams *Argon2Parameters, data []byte) ([]byte, error) {
key, err := aes256DeriveKey(passphrase, kdfParams)
if err != nil {
return nil, err
}
// this should never happen, but since this dictates how our calls into the
// aes package behave and could be catastrophic, let's sanity check this
if len(key) != 32 {
return nil, fmt.Errorf("invalid AES-256 key length (%d) - cowardly refusing to encrypt", len(key))
}
block, err := aes.NewCipher(key)
if err != nil {
return nil, err
}
gcm, err := cipher.NewGCM(block)
if err != nil {
return nil, err
}
nonce := make([]byte, gcm.NonceSize())
if _, err := io.ReadFull(rand.Reader, nonce); err != nil {
return nil, err
}
ciphertext := gcm.Seal(nil, nonce, data, nil)
blob := joinNonceCiphertext(nonce, ciphertext)
return blob, nil
}
// Decrypts data using AES-256-GCM and the Argon2id key derivation function
// Expects the data to include an Argon2id parameter string before the encrypted data
func aes256Decrypt(passphrase []byte, kdfParams *Argon2Parameters, data []byte) ([]byte, error) {
key, err := aes256DeriveKey(passphrase, kdfParams)
if err != nil {
return nil, err
}
block, err := aes.NewCipher(key)
if err != nil {
return nil, err
}
gcm, err := cipher.NewGCM(block)
if err != nil {
return nil, err
}
nonce, ciphertext, err := splitNonceCiphertext(data, gcm.NonceSize())
if err != nil {
return nil, err
}
plaintext, err := gcm.Open(nil, nonce, ciphertext, nil)
if err != nil {
return nil, fmt.Errorf("invalid passphrase or corrupt private key")
}
return plaintext, nil
}
func aes256DeriveKey(passphrase []byte, params *Argon2Parameters) ([]byte, error) {
if params.salt == nil {
params.salt = make([]byte, 32)
if _, err := rand.Read(params.salt); err != nil {
return nil, err
}
}
// keySize of 32 bytes will result in AES-256 encryption
key, err := deriveKey(passphrase, 32, params)
if err != nil {
return nil, err
}
return key, nil
}
// Derives a key from a passphrase using Argon2id
func deriveKey(passphrase []byte, keySize uint32, params *Argon2Parameters) ([]byte, error) {
if params.version != argon2.Version {
return nil, fmt.Errorf("incompatible Argon2 version: %d", params.version)
}
if params.salt == nil {
return nil, fmt.Errorf("salt must be set in argon2Parameters")
} else if len(params.salt) < 16 {
return nil, fmt.Errorf("salt must be at least 128 bits")
}
key := argon2.IDKey(passphrase, params.salt, params.Iterations, params.Memory, params.Parallelism, keySize)
return key, nil
}
// Prepends nonce to ciphertext
func joinNonceCiphertext(nonce []byte, ciphertext []byte) []byte {
return append(nonce, ciphertext...)
}
// Splits nonce from ciphertext
func splitNonceCiphertext(blob []byte, nonceSize int) ([]byte, []byte, error) {
if len(blob) <= nonceSize {
return nil, nil, fmt.Errorf("invalid ciphertext blob - blob shorter than nonce length")
}
return blob[:nonceSize], blob[nonceSize:], nil
}
// EncryptAndMarshalSigningPrivateKey is a simple helper to encrypt and PEM encode a private key
func EncryptAndMarshalSigningPrivateKey(curve Curve, b []byte, passphrase []byte, kdfParams *Argon2Parameters) ([]byte, error) {
ciphertext, err := aes256Encrypt(passphrase, kdfParams, b)
if err != nil {
return nil, err
}
b, err = proto.Marshal(&RawNebulaEncryptedData{
EncryptionMetadata: &RawNebulaEncryptionMetadata{
EncryptionAlgorithm: "AES-256-GCM",
Argon2Parameters: &RawNebulaArgon2Parameters{
Version: kdfParams.version,
Memory: kdfParams.Memory,
Parallelism: uint32(kdfParams.Parallelism),
Iterations: kdfParams.Iterations,
Salt: kdfParams.salt,
},
},
Ciphertext: ciphertext,
})
if err != nil {
return nil, err
}
switch curve {
case Curve_CURVE25519:
return pem.EncodeToMemory(&pem.Block{Type: EncryptedEd25519PrivateKeyBanner, Bytes: b}), nil
case Curve_P256:
return pem.EncodeToMemory(&pem.Block{Type: EncryptedECDSAP256PrivateKeyBanner, Bytes: b}), nil
default:
return nil, fmt.Errorf("invalid curve: %v", curve)
}
}
// UnmarshalNebulaEncryptedData will unmarshal a protobuf byte representation of a nebula cert into its
// protobuf-generated struct.
func UnmarshalNebulaEncryptedData(b []byte) (*NebulaEncryptedData, error) {
if len(b) == 0 {
return nil, fmt.Errorf("nil byte array")
}
var rned RawNebulaEncryptedData
err := proto.Unmarshal(b, &rned)
if err != nil {
return nil, err
}
if rned.EncryptionMetadata == nil {
return nil, fmt.Errorf("encoded EncryptionMetadata was nil")
}
if rned.EncryptionMetadata.Argon2Parameters == nil {
return nil, fmt.Errorf("encoded Argon2Parameters was nil")
}
params, err := unmarshalArgon2Parameters(rned.EncryptionMetadata.Argon2Parameters)
if err != nil {
return nil, err
}
ned := NebulaEncryptedData{
EncryptionMetadata: NebulaEncryptionMetadata{
EncryptionAlgorithm: rned.EncryptionMetadata.EncryptionAlgorithm,
Argon2Parameters: *params,
},
Ciphertext: rned.Ciphertext,
}
return &ned, nil
}
func unmarshalArgon2Parameters(params *RawNebulaArgon2Parameters) (*Argon2Parameters, error) {
if params.Version < math.MinInt32 || params.Version > math.MaxInt32 {
return nil, fmt.Errorf("Argon2Parameters Version must be at least %d and no more than %d", math.MinInt32, math.MaxInt32)
}
if params.Memory <= 0 || params.Memory > math.MaxUint32 {
return nil, fmt.Errorf("Argon2Parameters Memory must be be greater than 0 and no more than %d KiB", uint32(math.MaxUint32))
}
if params.Parallelism <= 0 || params.Parallelism > math.MaxUint8 {
return nil, fmt.Errorf("Argon2Parameters Parallelism must be be greater than 0 and no more than %d", math.MaxUint8)
}
if params.Iterations <= 0 || params.Iterations > math.MaxUint32 {
return nil, fmt.Errorf("-argon-iterations must be be greater than 0 and no more than %d", uint32(math.MaxUint32))
}
return &Argon2Parameters{
version: params.Version,
Memory: params.Memory,
Parallelism: uint8(params.Parallelism),
Iterations: params.Iterations,
salt: params.Salt,
}, nil
}
// DecryptAndUnmarshalSigningPrivateKey will try to pem decode and decrypt an Ed25519/ECDSA private key with
// the given passphrase, returning any other bytes b or an error on failure
func DecryptAndUnmarshalSigningPrivateKey(passphrase, b []byte) (Curve, []byte, []byte, error) {
var curve Curve
k, r := pem.Decode(b)
if k == nil {
return curve, nil, r, fmt.Errorf("input did not contain a valid PEM encoded block")
}
switch k.Type {
case EncryptedEd25519PrivateKeyBanner:
curve = Curve_CURVE25519
case EncryptedECDSAP256PrivateKeyBanner:
curve = Curve_P256
default:
return curve, nil, r, fmt.Errorf("bytes did not contain a proper nebula encrypted Ed25519/ECDSA private key banner")
}
ned, err := UnmarshalNebulaEncryptedData(k.Bytes)
if err != nil {
return curve, nil, r, err
}
var bytes []byte
switch ned.EncryptionMetadata.EncryptionAlgorithm {
case "AES-256-GCM":
bytes, err = aes256Decrypt(passphrase, &ned.EncryptionMetadata.Argon2Parameters, ned.Ciphertext)
if err != nil {
return curve, nil, r, err
}
default:
return curve, nil, r, fmt.Errorf("unsupported encryption algorithm: %s", ned.EncryptionMetadata.EncryptionAlgorithm)
}
switch curve {
case Curve_CURVE25519:
if len(bytes) != ed25519.PrivateKeySize {
return curve, nil, r, fmt.Errorf("key was not %d bytes, is invalid ed25519 private key", ed25519.PrivateKeySize)
}
case Curve_P256:
if len(bytes) != 32 {
return curve, nil, r, fmt.Errorf("key was not 32 bytes, is invalid ECDSA P256 private key")
}
}
return curve, bytes, r, nil
}
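Putting the layers together: the PEM block holds a RawNebulaEncryptedData protobuf, whose Ciphertext is the 12-byte GCM nonce followed by the AES-256-GCM output, and the AES key is derived with argon2.IDKey from the stored parameters. A sketch of peeling those layers apart; keyPEM is a placeholder for the output of EncryptAndMarshalSigningPrivateKey, and pem/proto/fmt are the packages already imported in this file:

func inspectEncryptedKeySketch(keyPEM []byte) error {
	block, _ := pem.Decode(keyPEM) // e.g. "NEBULA ED25519 ENCRYPTED PRIVATE KEY"
	if block == nil {
		return fmt.Errorf("no PEM block found")
	}
	var raw RawNebulaEncryptedData
	if err := proto.Unmarshal(block.Bytes, &raw); err != nil {
		return err
	}
	if len(raw.Ciphertext) <= 12 {
		return fmt.Errorf("ciphertext too short")
	}
	nonce := raw.Ciphertext[:12]  // cipher.NewGCM uses a 12-byte nonce
	sealed := raw.Ciphertext[12:] // AES-256-GCM ciphertext with the auth tag appended
	_, _ = nonce, sealed
	// Key derivation: argon2.IDKey(passphrase, Salt, Iterations, Memory, Parallelism, 32)
	return nil
}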

113
cert/crypto_test.go Normal file

@@ -0,0 +1,113 @@
package cert
import (
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"golang.org/x/crypto/argon2"
)
func TestNewArgon2Parameters(t *testing.T) {
p := NewArgon2Parameters(64*1024, 4, 3)
assert.Equal(t, &Argon2Parameters{
version: argon2.Version,
Memory: 64 * 1024,
Parallelism: 4,
Iterations: 3,
}, p)
p = NewArgon2Parameters(2*1024*1024, 2, 1)
assert.Equal(t, &Argon2Parameters{
version: argon2.Version,
Memory: 2 * 1024 * 1024,
Parallelism: 2,
Iterations: 1,
}, p)
}
func TestDecryptAndUnmarshalSigningPrivateKey(t *testing.T) {
passphrase := []byte("DO NOT USE")
privKey := []byte(`# A good key
-----BEGIN NEBULA ED25519 ENCRYPTED PRIVATE KEY-----
CjsKC0FFUy0yNTYtR0NNEiwIExCAgAQYAyAEKiCPoDfGQiosxNPTbPn5EsMlc2MI
c0Bt4oz6gTrFQhX3aBJcimhHKeAuhyTGvllD0Z19fe+DFPcLH3h5VrdjVfIAajg0
KrbV3n9UHif/Au5skWmquNJzoW1E4MTdRbvpti6o+WdQ49DxjBFhx0YH8LBqrbPU
0BGkUHmIO7daP24=
-----END NEBULA ED25519 ENCRYPTED PRIVATE KEY-----
`)
shortKey := []byte(`# A key which, once decrypted, is too short
-----BEGIN NEBULA ED25519 ENCRYPTED PRIVATE KEY-----
CjsKC0FFUy0yNTYtR0NNEiwIExCAgAQYAyAEKiAVJwdfl3r+eqi/vF6S7OMdpjfo
hAzmTCRnr58Su4AqmBJbCv3zleYCEKYJP6UI3S8ekLMGISsgO4hm5leukCCyqT0Z
cQ76yrberpzkJKoPLGisX8f+xdy4aXSZl7oEYWQte1+vqbtl/eY9PGZhxUQdcyq7
hqzIyrRqfUgVuA==
-----END NEBULA ED25519 ENCRYPTED PRIVATE KEY-----
`)
invalidBanner := []byte(`# Invalid banner (not encrypted)
-----BEGIN NEBULA ED25519 PRIVATE KEY-----
bWRp2CTVFhW9HD/qCd28ltDgK3w8VXSeaEYczDWos8sMUBqDb9jP3+NYwcS4lURG
XgLvodMXZJuaFPssp+WwtA==
-----END NEBULA ED25519 PRIVATE KEY-----
`)
invalidPem := []byte(`# Not a valid PEM format
-BEGIN NEBULA ED25519 ENCRYPTED PRIVATE KEY-----
CjwKC0FFUy0yNTYtR0NNEi0IExCAgIABGAEgBCognnjujd67Vsv99p22wfAjQaDT
oCMW1mdjkU3gACKNW4MSXOWR9Sts4C81yk1RUku2gvGKs3TB9LYoklLsIizSYOLl
+Vs//O1T0I1Xbml2XBAROsb/VSoDln/6LMqR4B6fn6B3GOsLBBqRI8daDl9lRMPB
qrlJ69wer3ZUHFXA
-END NEBULA ED25519 ENCRYPTED PRIVATE KEY-----
`)
keyBundle := appendByteSlices(privKey, shortKey, invalidBanner, invalidPem)
// Success test case
curve, k, rest, err := DecryptAndUnmarshalSigningPrivateKey(passphrase, keyBundle)
require.NoError(t, err)
assert.Equal(t, Curve_CURVE25519, curve)
assert.Len(t, k, 64)
assert.Equal(t, rest, appendByteSlices(shortKey, invalidBanner, invalidPem))
// Fail due to short key
curve, k, rest, err = DecryptAndUnmarshalSigningPrivateKey(passphrase, rest)
require.EqualError(t, err, "key was not 64 bytes, is invalid ed25519 private key")
assert.Nil(t, k)
assert.Equal(t, rest, appendByteSlices(invalidBanner, invalidPem))
// Fail due to invalid banner
curve, k, rest, err = DecryptAndUnmarshalSigningPrivateKey(passphrase, rest)
require.EqualError(t, err, "bytes did not contain a proper nebula encrypted Ed25519/ECDSA private key banner")
assert.Nil(t, k)
assert.Equal(t, rest, invalidPem)
// Fail due to invalid PEM format, because
// it's missing the requisite pre-encapsulation boundary.
curve, k, rest, err = DecryptAndUnmarshalSigningPrivateKey(passphrase, rest)
require.EqualError(t, err, "input did not contain a valid PEM encoded block")
assert.Nil(t, k)
assert.Equal(t, rest, invalidPem)
// Fail due to invalid passphrase
curve, k, rest, err = DecryptAndUnmarshalSigningPrivateKey([]byte("invalid passphrase"), privKey)
require.EqualError(t, err, "invalid passphrase or corrupt private key")
assert.Nil(t, k)
assert.Equal(t, []byte{}, rest)
}
func TestEncryptAndMarshalSigningPrivateKey(t *testing.T) {
// Having proved that decryption works correctly above, we can test the
// encryption function produces a value which can be decrypted
passphrase := []byte("passphrase")
bytes := []byte("AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA")
kdfParams := NewArgon2Parameters(64*1024, 4, 3)
key, err := EncryptAndMarshalSigningPrivateKey(Curve_CURVE25519, bytes, passphrase, kdfParams)
require.NoError(t, err)
// Verify the "key" can be decrypted successfully
curve, k, rest, err := DecryptAndUnmarshalSigningPrivateKey(passphrase, key)
assert.Len(t, k, 64)
assert.Equal(t, Curve_CURVE25519, curve)
assert.Equal(t, []byte{}, rest)
require.NoError(t, err)
// EncryptAndMarshalSigningPrivateKey does not create any errors itself
}

49
cert/errors.go Normal file

@@ -0,0 +1,49 @@
package cert
import (
"errors"
"fmt"
)
var (
ErrBadFormat = errors.New("bad wire format")
ErrRootExpired = errors.New("root certificate is expired")
ErrExpired = errors.New("certificate is expired")
ErrNotCA = errors.New("certificate is not a CA")
ErrNotSelfSigned = errors.New("certificate is not self-signed")
ErrBlockListed = errors.New("certificate is in the block list")
ErrFingerprintMismatch = errors.New("certificate fingerprint did not match")
ErrSignatureMismatch = errors.New("certificate signature did not match")
ErrInvalidPublicKey = errors.New("invalid public key")
ErrInvalidPrivateKey = errors.New("invalid private key")
ErrPublicPrivateCurveMismatch = errors.New("public key does not match private key curve")
ErrPublicPrivateKeyMismatch = errors.New("public key and private key are not a pair")
ErrPrivateKeyEncrypted = errors.New("private key must be decrypted")
ErrCaNotFound = errors.New("could not find ca for the certificate")
ErrInvalidPEMBlock = errors.New("input did not contain a valid PEM encoded block")
ErrInvalidPEMCertificateBanner = errors.New("bytes did not contain a proper certificate banner")
ErrInvalidPEMX25519PublicKeyBanner = errors.New("bytes did not contain a proper X25519 public key banner")
ErrInvalidPEMX25519PrivateKeyBanner = errors.New("bytes did not contain a proper X25519 private key banner")
ErrInvalidPEMEd25519PublicKeyBanner = errors.New("bytes did not contain a proper Ed25519 public key banner")
ErrInvalidPEMEd25519PrivateKeyBanner = errors.New("bytes did not contain a proper Ed25519 private key banner")
ErrNoPeerStaticKey = errors.New("no peer static key was present")
ErrNoPayload = errors.New("provided payload was empty")
ErrMissingDetails = errors.New("certificate did not contain details")
ErrEmptySignature = errors.New("empty signature")
ErrEmptyRawDetails = errors.New("empty rawDetails not allowed")
)
type ErrInvalidCertificateProperties struct {
str string
}
func NewErrInvalidCertificateProperties(format string, a ...any) error {
return &ErrInvalidCertificateProperties{fmt.Sprintf(format, a...)}
}
func (e *ErrInvalidCertificateProperties) Error() string {
return e.str
}
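// Editor's sketch (not part of this file): how a caller might tell the sentinel
// errors above apart from the dynamic ErrInvalidCertificateProperties type.
// describeCertError is hypothetical; only the error values come from this package.
func describeCertError(err error) string {
	var propErr *ErrInvalidCertificateProperties
	switch {
	case err == nil:
		return "ok"
	case errors.Is(err, ErrExpired), errors.Is(err, ErrRootExpired):
		return "expired"
	case errors.As(err, &propErr):
		// Property errors carry a formatted message, e.g. "duplicate network detected: 10.0.0.0/8".
		return "invalid properties: " + propErr.Error()
	default:
		return err.Error()
	}
}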

141
cert/helper_test.go Normal file

@@ -0,0 +1,141 @@
package cert
import (
"crypto/ecdh"
"crypto/ecdsa"
"crypto/elliptic"
"crypto/rand"
"io"
"net/netip"
"time"
"golang.org/x/crypto/curve25519"
"golang.org/x/crypto/ed25519"
)
// NewTestCaCert will create a new ca certificate
func NewTestCaCert(version Version, curve Curve, before, after time.Time, networks, unsafeNetworks []netip.Prefix, groups []string) (Certificate, []byte, []byte, []byte) {
var err error
var pub, priv []byte
switch curve {
case Curve_CURVE25519:
pub, priv, err = ed25519.GenerateKey(rand.Reader)
case Curve_P256:
privk, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
if err != nil {
panic(err)
}
pub = elliptic.Marshal(elliptic.P256(), privk.PublicKey.X, privk.PublicKey.Y)
priv = privk.D.FillBytes(make([]byte, 32))
default:
// No default case, so the underlying lib can return an error for unknown curves
}
if before.IsZero() {
before = time.Now().Add(time.Second * -60).Round(time.Second)
}
if after.IsZero() {
after = time.Now().Add(time.Second * 60).Round(time.Second)
}
t := &TBSCertificate{
Curve: curve,
Version: version,
Name: "test ca",
NotBefore: time.Unix(before.Unix(), 0),
NotAfter: time.Unix(after.Unix(), 0),
PublicKey: pub,
Networks: networks,
UnsafeNetworks: unsafeNetworks,
Groups: groups,
IsCA: true,
}
c, err := t.Sign(nil, curve, priv)
if err != nil {
panic(err)
}
pem, err := c.MarshalPEM()
if err != nil {
panic(err)
}
return c, pub, priv, pem
}
// NewTestCert will generate a signed certificate with the provided details.
// Expiry times are defaulted if you do not pass them in
func NewTestCert(v Version, curve Curve, ca Certificate, key []byte, name string, before, after time.Time, networks, unsafeNetworks []netip.Prefix, groups []string) (Certificate, []byte, []byte, []byte) {
if before.IsZero() {
before = time.Now().Add(time.Second * -60).Round(time.Second)
}
if after.IsZero() {
after = time.Now().Add(time.Second * 60).Round(time.Second)
}
if len(networks) == 0 {
networks = []netip.Prefix{netip.MustParsePrefix("10.0.0.123/8")}
}
var pub, priv []byte
switch curve {
case Curve_CURVE25519:
pub, priv = X25519Keypair()
case Curve_P256:
pub, priv = P256Keypair()
default:
panic("unknown curve")
}
nc := &TBSCertificate{
Version: v,
Curve: curve,
Name: name,
Networks: networks,
UnsafeNetworks: unsafeNetworks,
Groups: groups,
NotBefore: time.Unix(before.Unix(), 0),
NotAfter: time.Unix(after.Unix(), 0),
PublicKey: pub,
IsCA: false,
}
c, err := nc.Sign(ca, ca.Curve(), key)
if err != nil {
panic(err)
}
pem, err := c.MarshalPEM()
if err != nil {
panic(err)
}
return c, pub, MarshalPrivateKeyToPEM(curve, priv), pem
}
func X25519Keypair() ([]byte, []byte) {
privkey := make([]byte, 32)
if _, err := io.ReadFull(rand.Reader, privkey); err != nil {
panic(err)
}
pubkey, err := curve25519.X25519(privkey, curve25519.Basepoint)
if err != nil {
panic(err)
}
return pubkey, privkey
}
func P256Keypair() ([]byte, []byte) {
privkey, err := ecdh.P256().GenerateKey(rand.Reader)
if err != nil {
panic(err)
}
pubkey := privkey.PublicKey()
return pubkey.Bytes(), privkey.Bytes()
}
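// Editor's sketch: how the helpers above are typically combined in tests, creating
// a CA and then a leaf certificate signed by it. newTestChain is hypothetical; zero
// time values select the default short validity window handled above.
func newTestChain() (Certificate, Certificate) {
	ca, _, caKey, _ := NewTestCaCert(Version2, Curve_CURVE25519, time.Time{}, time.Time{}, nil, nil, nil)
	leaf, _, _, _ := NewTestCert(Version2, Curve_CURVE25519, ca, caKey, "host1", time.Time{}, time.Time{}, nil, nil, nil)
	return ca, leaf
}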

161
cert/pem.go Normal file

@@ -0,0 +1,161 @@
package cert
import (
"encoding/pem"
"fmt"
"golang.org/x/crypto/ed25519"
)
const (
CertificateBanner = "NEBULA CERTIFICATE"
CertificateV2Banner = "NEBULA CERTIFICATE V2"
X25519PrivateKeyBanner = "NEBULA X25519 PRIVATE KEY"
X25519PublicKeyBanner = "NEBULA X25519 PUBLIC KEY"
EncryptedEd25519PrivateKeyBanner = "NEBULA ED25519 ENCRYPTED PRIVATE KEY"
Ed25519PrivateKeyBanner = "NEBULA ED25519 PRIVATE KEY"
Ed25519PublicKeyBanner = "NEBULA ED25519 PUBLIC KEY"
P256PrivateKeyBanner = "NEBULA P256 PRIVATE KEY"
P256PublicKeyBanner = "NEBULA P256 PUBLIC KEY"
EncryptedECDSAP256PrivateKeyBanner = "NEBULA ECDSA P256 ENCRYPTED PRIVATE KEY"
ECDSAP256PrivateKeyBanner = "NEBULA ECDSA P256 PRIVATE KEY"
)
// UnmarshalCertificateFromPEM will try to unmarshal the first PEM block in a byte array, returning any non-consumed
// data or an error on failure
func UnmarshalCertificateFromPEM(b []byte) (Certificate, []byte, error) {
p, r := pem.Decode(b)
if p == nil {
return nil, r, ErrInvalidPEMBlock
}
var c Certificate
var err error
switch p.Type {
// Implementations must validate the resulting certificate contains valid information
case CertificateBanner:
c, err = unmarshalCertificateV1(p.Bytes, nil)
case CertificateV2Banner:
c, err = unmarshalCertificateV2(p.Bytes, nil, Curve_CURVE25519)
default:
return nil, r, ErrInvalidPEMCertificateBanner
}
if err != nil {
return nil, r, err
}
return c, r, nil
}
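// Editor's sketch: because the unconsumed remainder is returned, a bundle of
// certificates can be drained in a loop. parseBundle is hypothetical and assumes
// the input contains only well-formed certificate blocks (trailing junk would
// surface as ErrInvalidPEMBlock).
func parseBundle(b []byte) ([]Certificate, error) {
	var certs []Certificate
	for len(b) > 0 {
		c, rest, err := UnmarshalCertificateFromPEM(b)
		if err != nil {
			return nil, err
		}
		certs = append(certs, c)
		b = rest
	}
	return certs, nil
}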
func MarshalPublicKeyToPEM(curve Curve, b []byte) []byte {
switch curve {
case Curve_CURVE25519:
return pem.EncodeToMemory(&pem.Block{Type: X25519PublicKeyBanner, Bytes: b})
case Curve_P256:
return pem.EncodeToMemory(&pem.Block{Type: P256PublicKeyBanner, Bytes: b})
default:
return nil
}
}
func UnmarshalPublicKeyFromPEM(b []byte) ([]byte, []byte, Curve, error) {
k, r := pem.Decode(b)
if k == nil {
return nil, r, 0, fmt.Errorf("input did not contain a valid PEM encoded block")
}
var expectedLen int
var curve Curve
switch k.Type {
case X25519PublicKeyBanner, Ed25519PublicKeyBanner:
expectedLen = 32
curve = Curve_CURVE25519
case P256PublicKeyBanner:
// Uncompressed
expectedLen = 65
curve = Curve_P256
default:
return nil, r, 0, fmt.Errorf("bytes did not contain a proper public key banner")
}
if len(k.Bytes) != expectedLen {
return nil, r, 0, fmt.Errorf("key was not %d bytes, is invalid %s public key", expectedLen, curve)
}
return k.Bytes, r, curve, nil
}
func MarshalPrivateKeyToPEM(curve Curve, b []byte) []byte {
switch curve {
case Curve_CURVE25519:
return pem.EncodeToMemory(&pem.Block{Type: X25519PrivateKeyBanner, Bytes: b})
case Curve_P256:
return pem.EncodeToMemory(&pem.Block{Type: P256PrivateKeyBanner, Bytes: b})
default:
return nil
}
}
func MarshalSigningPrivateKeyToPEM(curve Curve, b []byte) []byte {
switch curve {
case Curve_CURVE25519:
return pem.EncodeToMemory(&pem.Block{Type: Ed25519PrivateKeyBanner, Bytes: b})
case Curve_P256:
return pem.EncodeToMemory(&pem.Block{Type: ECDSAP256PrivateKeyBanner, Bytes: b})
default:
return nil
}
}
// UnmarshalPrivateKeyFromPEM will try to unmarshal the first PEM block in a byte array, returning any
// non-consumed data or an error on failure
func UnmarshalPrivateKeyFromPEM(b []byte) ([]byte, []byte, Curve, error) {
k, r := pem.Decode(b)
if k == nil {
return nil, r, 0, fmt.Errorf("input did not contain a valid PEM encoded block")
}
var expectedLen int
var curve Curve
switch k.Type {
case X25519PrivateKeyBanner:
expectedLen = 32
curve = Curve_CURVE25519
case P256PrivateKeyBanner:
expectedLen = 32
curve = Curve_P256
default:
return nil, r, 0, fmt.Errorf("bytes did not contain a proper private key banner")
}
if len(k.Bytes) != expectedLen {
return nil, r, 0, fmt.Errorf("key was not %d bytes, is invalid %s private key", expectedLen, curve)
}
return k.Bytes, r, curve, nil
}
func UnmarshalSigningPrivateKeyFromPEM(b []byte) ([]byte, []byte, Curve, error) {
k, r := pem.Decode(b)
if k == nil {
return nil, r, 0, fmt.Errorf("input did not contain a valid PEM encoded block")
}
var curve Curve
switch k.Type {
case EncryptedEd25519PrivateKeyBanner:
return nil, nil, Curve_CURVE25519, ErrPrivateKeyEncrypted
case EncryptedECDSAP256PrivateKeyBanner:
return nil, nil, Curve_P256, ErrPrivateKeyEncrypted
case Ed25519PrivateKeyBanner:
curve = Curve_CURVE25519
if len(k.Bytes) != ed25519.PrivateKeySize {
return nil, r, 0, fmt.Errorf("key was not %d bytes, is invalid Ed25519 private key", ed25519.PrivateKeySize)
}
case ECDSAP256PrivateKeyBanner:
curve = Curve_P256
if len(k.Bytes) != 32 {
return nil, r, 0, fmt.Errorf("key was not 32 bytes, is invalid ECDSA P256 private key")
}
default:
return nil, r, 0, fmt.Errorf("bytes did not contain a proper Ed25519/ECDSA private key banner")
}
return k.Bytes, r, curve, nil
}
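// Editor's sketch: loading a signing key while handling the encrypted-key banners
// above. loadSigningKey and promptPassphrase are hypothetical; DecryptAndUnmarshalSigningPrivateKey
// is the function exercised in the crypto tests earlier in this diff, and the
// standard errors package is assumed to be imported.
func loadSigningKey(b []byte, promptPassphrase func() ([]byte, error)) ([]byte, Curve, error) {
	key, _, curve, err := UnmarshalSigningPrivateKeyFromPEM(b)
	if err == nil {
		return key, curve, nil
	}
	if !errors.Is(err, ErrPrivateKeyEncrypted) {
		return nil, curve, err
	}
	passphrase, err := promptPassphrase()
	if err != nil {
		return nil, curve, err
	}
	curve, key, _, err = DecryptAndUnmarshalSigningPrivateKey(passphrase, b)
	return key, curve, err
}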

293
cert/pem_test.go Normal file

@@ -0,0 +1,293 @@
package cert
import (
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestUnmarshalCertificateFromPEM(t *testing.T) {
goodCert := []byte(`
# A good cert
-----BEGIN NEBULA CERTIFICATE-----
CkAKDm5lYnVsYSByb290IGNhKJfap9AFMJfg1+YGOiCUQGByMuNRhIlQBOyzXWbL
vcKBwDhov900phEfJ5DN3kABEkDCq5R8qBiu8sl54yVfgRcQXEDt3cHr8UTSLszv
bzBEr00kERQxxTzTsH8cpYEgRoipvmExvg8WP8NdAJEYJosB
-----END NEBULA CERTIFICATE-----
`)
badBanner := []byte(`# A bad banner
-----BEGIN NOT A NEBULA CERTIFICATE-----
CkAKDm5lYnVsYSByb290IGNhKJfap9AFMJfg1+YGOiCUQGByMuNRhIlQBOyzXWbL
vcKBwDhov900phEfJ5DN3kABEkDCq5R8qBiu8sl54yVfgRcQXEDt3cHr8UTSLszv
bzBEr00kERQxxTzTsH8cpYEgRoipvmExvg8WP8NdAJEYJosB
-----END NOT A NEBULA CERTIFICATE-----
`)
invalidPem := []byte(`# Not a valid PEM format
-BEGIN NEBULA CERTIFICATE-----
CkAKDm5lYnVsYSByb290IGNhKJfap9AFMJfg1+YGOiCUQGByMuNRhIlQBOyzXWbL
vcKBwDhov900phEfJ5DN3kABEkDCq5R8qBiu8sl54yVfgRcQXEDt3cHr8UTSLszv
bzBEr00kERQxxTzTsH8cpYEgRoipvmExvg8WP8NdAJEYJosB
-END NEBULA CERTIFICATE----`)
certBundle := appendByteSlices(goodCert, badBanner, invalidPem)
// Success test case
cert, rest, err := UnmarshalCertificateFromPEM(certBundle)
assert.NotNil(t, cert)
assert.Equal(t, rest, append(badBanner, invalidPem...))
require.NoError(t, err)
// Fail due to invalid banner.
cert, rest, err = UnmarshalCertificateFromPEM(rest)
assert.Nil(t, cert)
assert.Equal(t, rest, invalidPem)
require.EqualError(t, err, "bytes did not contain a proper certificate banner")
// Fail due to invalid PEM format, because
// it's missing the requisite pre-encapsulation boundary.
cert, rest, err = UnmarshalCertificateFromPEM(rest)
assert.Nil(t, cert)
assert.Equal(t, rest, invalidPem)
require.EqualError(t, err, "input did not contain a valid PEM encoded block")
}
func TestUnmarshalSigningPrivateKeyFromPEM(t *testing.T) {
privKey := []byte(`# A good key
-----BEGIN NEBULA ED25519 PRIVATE KEY-----
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==
-----END NEBULA ED25519 PRIVATE KEY-----
`)
privP256Key := []byte(`# A good key
-----BEGIN NEBULA ECDSA P256 PRIVATE KEY-----
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
-----END NEBULA ECDSA P256 PRIVATE KEY-----
`)
shortKey := []byte(`# A short key
-----BEGIN NEBULA ED25519 PRIVATE KEY-----
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
-----END NEBULA ED25519 PRIVATE KEY-----
`)
invalidBanner := []byte(`# Invalid banner
-----BEGIN NOT A NEBULA PRIVATE KEY-----
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==
-----END NOT A NEBULA PRIVATE KEY-----
`)
invalidPem := []byte(`# Not a valid PEM format
-BEGIN NEBULA ED25519 PRIVATE KEY-----
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==
-END NEBULA ED25519 PRIVATE KEY-----`)
keyBundle := appendByteSlices(privKey, privP256Key, shortKey, invalidBanner, invalidPem)
// Success test case
k, rest, curve, err := UnmarshalSigningPrivateKeyFromPEM(keyBundle)
assert.Len(t, k, 64)
assert.Equal(t, rest, appendByteSlices(privP256Key, shortKey, invalidBanner, invalidPem))
assert.Equal(t, Curve_CURVE25519, curve)
require.NoError(t, err)
// Success test case
k, rest, curve, err = UnmarshalSigningPrivateKeyFromPEM(rest)
assert.Len(t, k, 32)
assert.Equal(t, rest, appendByteSlices(shortKey, invalidBanner, invalidPem))
assert.Equal(t, Curve_P256, curve)
require.NoError(t, err)
// Fail due to short key
k, rest, curve, err = UnmarshalSigningPrivateKeyFromPEM(rest)
assert.Nil(t, k)
assert.Equal(t, rest, appendByteSlices(invalidBanner, invalidPem))
require.EqualError(t, err, "key was not 64 bytes, is invalid Ed25519 private key")
// Fail due to invalid banner
k, rest, curve, err = UnmarshalSigningPrivateKeyFromPEM(rest)
assert.Nil(t, k)
assert.Equal(t, rest, invalidPem)
require.EqualError(t, err, "bytes did not contain a proper Ed25519/ECDSA private key banner")
// Fail due to invalid PEM format, because
// it's missing the requisite pre-encapsulation boundary.
k, rest, curve, err = UnmarshalSigningPrivateKeyFromPEM(rest)
assert.Nil(t, k)
assert.Equal(t, rest, invalidPem)
require.EqualError(t, err, "input did not contain a valid PEM encoded block")
}
func TestUnmarshalPrivateKeyFromPEM(t *testing.T) {
privKey := []byte(`# A good key
-----BEGIN NEBULA X25519 PRIVATE KEY-----
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
-----END NEBULA X25519 PRIVATE KEY-----
`)
privP256Key := []byte(`# A good key
-----BEGIN NEBULA P256 PRIVATE KEY-----
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
-----END NEBULA P256 PRIVATE KEY-----
`)
shortKey := []byte(`# A short key
-----BEGIN NEBULA X25519 PRIVATE KEY-----
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==
-----END NEBULA X25519 PRIVATE KEY-----
`)
invalidBanner := []byte(`# Invalid banner
-----BEGIN NOT A NEBULA PRIVATE KEY-----
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
-----END NOT A NEBULA PRIVATE KEY-----
`)
invalidPem := []byte(`# Not a valid PEM format
-BEGIN NEBULA X25519 PRIVATE KEY-----
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
-END NEBULA X25519 PRIVATE KEY-----`)
keyBundle := appendByteSlices(privKey, privP256Key, shortKey, invalidBanner, invalidPem)
// Success test case
k, rest, curve, err := UnmarshalPrivateKeyFromPEM(keyBundle)
assert.Len(t, k, 32)
assert.Equal(t, rest, appendByteSlices(privP256Key, shortKey, invalidBanner, invalidPem))
assert.Equal(t, Curve_CURVE25519, curve)
require.NoError(t, err)
// Success test case
k, rest, curve, err = UnmarshalPrivateKeyFromPEM(rest)
assert.Len(t, k, 32)
assert.Equal(t, rest, appendByteSlices(shortKey, invalidBanner, invalidPem))
assert.Equal(t, Curve_P256, curve)
require.NoError(t, err)
// Fail due to short key
k, rest, curve, err = UnmarshalPrivateKeyFromPEM(rest)
assert.Nil(t, k)
assert.Equal(t, rest, appendByteSlices(invalidBanner, invalidPem))
require.EqualError(t, err, "key was not 32 bytes, is invalid CURVE25519 private key")
// Fail due to invalid banner
k, rest, curve, err = UnmarshalPrivateKeyFromPEM(rest)
assert.Nil(t, k)
assert.Equal(t, rest, invalidPem)
require.EqualError(t, err, "bytes did not contain a proper private key banner")
// Fail due to invalid PEM format, because
// it's missing the requisite pre-encapsulation boundary.
k, rest, curve, err = UnmarshalPrivateKeyFromPEM(rest)
assert.Nil(t, k)
assert.Equal(t, rest, invalidPem)
require.EqualError(t, err, "input did not contain a valid PEM encoded block")
}
func TestUnmarshalPublicKeyFromPEM(t *testing.T) {
pubKey := []byte(`# A good key
-----BEGIN NEBULA ED25519 PUBLIC KEY-----
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
-----END NEBULA ED25519 PUBLIC KEY-----
`)
shortKey := []byte(`# A short key
-----BEGIN NEBULA ED25519 PUBLIC KEY-----
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==
-----END NEBULA ED25519 PUBLIC KEY-----
`)
invalidBanner := []byte(`# Invalid banner
-----BEGIN NOT A NEBULA PUBLIC KEY-----
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
-----END NOT A NEBULA PUBLIC KEY-----
`)
invalidPem := []byte(`# Not a valid PEM format
-BEGIN NEBULA ED25519 PUBLIC KEY-----
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
-END NEBULA ED25519 PUBLIC KEY-----`)
keyBundle := appendByteSlices(pubKey, shortKey, invalidBanner, invalidPem)
// Success test case
k, rest, curve, err := UnmarshalPublicKeyFromPEM(keyBundle)
assert.Len(t, k, 32)
assert.Equal(t, Curve_CURVE25519, curve)
require.NoError(t, err)
assert.Equal(t, rest, appendByteSlices(shortKey, invalidBanner, invalidPem))
// Fail due to short key
k, rest, curve, err = UnmarshalPublicKeyFromPEM(rest)
assert.Nil(t, k)
assert.Equal(t, Curve_CURVE25519, curve)
assert.Equal(t, rest, appendByteSlices(invalidBanner, invalidPem))
require.EqualError(t, err, "key was not 32 bytes, is invalid CURVE25519 public key")
// Fail due to invalid banner
k, rest, curve, err = UnmarshalPublicKeyFromPEM(rest)
assert.Nil(t, k)
assert.Equal(t, Curve_CURVE25519, curve)
require.EqualError(t, err, "bytes did not contain a proper public key banner")
assert.Equal(t, rest, invalidPem)
// Fail due to invalid PEM format, because
// it's missing the requisite pre-encapsulation boundary.
k, rest, curve, err = UnmarshalPublicKeyFromPEM(rest)
assert.Nil(t, k)
assert.Equal(t, Curve_CURVE25519, curve)
assert.Equal(t, rest, invalidPem)
require.EqualError(t, err, "input did not contain a valid PEM encoded block")
}
func TestUnmarshalX25519PublicKey(t *testing.T) {
pubKey := []byte(`# A good key
-----BEGIN NEBULA X25519 PUBLIC KEY-----
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
-----END NEBULA X25519 PUBLIC KEY-----
`)
pubP256Key := []byte(`# A good key
-----BEGIN NEBULA P256 PUBLIC KEY-----
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAA=
-----END NEBULA P256 PUBLIC KEY-----
`)
shortKey := []byte(`# A short key
-----BEGIN NEBULA X25519 PUBLIC KEY-----
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==
-----END NEBULA X25519 PUBLIC KEY-----
`)
invalidBanner := []byte(`# Invalid banner
-----BEGIN NOT A NEBULA PUBLIC KEY-----
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
-----END NOT A NEBULA PUBLIC KEY-----
`)
invalidPem := []byte(`# Not a valid PEM format
-BEGIN NEBULA X25519 PUBLIC KEY-----
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
-END NEBULA X25519 PUBLIC KEY-----`)
keyBundle := appendByteSlices(pubKey, pubP256Key, shortKey, invalidBanner, invalidPem)
// Success test case
k, rest, curve, err := UnmarshalPublicKeyFromPEM(keyBundle)
assert.Len(t, k, 32)
require.NoError(t, err)
assert.Equal(t, rest, appendByteSlices(pubP256Key, shortKey, invalidBanner, invalidPem))
assert.Equal(t, Curve_CURVE25519, curve)
// Success test case
k, rest, curve, err = UnmarshalPublicKeyFromPEM(rest)
assert.Len(t, k, 65)
require.NoError(t, err)
assert.Equal(t, rest, appendByteSlices(shortKey, invalidBanner, invalidPem))
assert.Equal(t, Curve_P256, curve)
// Fail due to short key
k, rest, curve, err = UnmarshalPublicKeyFromPEM(rest)
assert.Nil(t, k)
assert.Equal(t, rest, appendByteSlices(invalidBanner, invalidPem))
require.EqualError(t, err, "key was not 32 bytes, is invalid CURVE25519 public key")
// Fail due to invalid banner
k, rest, curve, err = UnmarshalPublicKeyFromPEM(rest)
assert.Nil(t, k)
require.EqualError(t, err, "bytes did not contain a proper public key banner")
assert.Equal(t, rest, invalidPem)
// Fail due to invalid PEM format, because
// it's missing the requisite pre-encapsulation boundary.
k, rest, curve, err = UnmarshalPublicKeyFromPEM(rest)
assert.Nil(t, k)
assert.Equal(t, rest, invalidPem)
require.EqualError(t, err, "input did not contain a valid PEM encoded block")
}

167
cert/sign.go Normal file

@@ -0,0 +1,167 @@
package cert
import (
"crypto/ecdsa"
"crypto/ed25519"
"crypto/elliptic"
"crypto/rand"
"crypto/sha256"
"fmt"
"math/big"
"net/netip"
"time"
)
// TBSCertificate represents a certificate intended to be signed.
// It is invalid to use this structure as a Certificate.
type TBSCertificate struct {
Version Version
Name string
Networks []netip.Prefix
UnsafeNetworks []netip.Prefix
Groups []string
IsCA bool
NotBefore time.Time
NotAfter time.Time
PublicKey []byte
Curve Curve
issuer string
}
type beingSignedCertificate interface {
// fromTBSCertificate copies the values from the TBSCertificate to this version's internal representation
// Implementations must validate the resulting certificate contains valid information
fromTBSCertificate(*TBSCertificate) error
// marshalForSigning returns the bytes that should be signed
marshalForSigning() ([]byte, error)
// setSignature sets the signature for the certificate that has just been signed. The signature must not be blank.
setSignature([]byte) error
}
type SignerLambda func(certBytes []byte) ([]byte, error)
// Sign will create a sealed certificate using details provided by the TBSCertificate as long as those
// details do not violate constraints of the signing certificate.
// If the TBSCertificate is a CA then signer must be nil.
func (t *TBSCertificate) Sign(signer Certificate, curve Curve, key []byte) (Certificate, error) {
switch t.Curve {
case Curve_CURVE25519:
pk := ed25519.PrivateKey(key)
sp := func(certBytes []byte) ([]byte, error) {
sig := ed25519.Sign(pk, certBytes)
return sig, nil
}
return t.SignWith(signer, curve, sp)
case Curve_P256:
pk := &ecdsa.PrivateKey{
PublicKey: ecdsa.PublicKey{
Curve: elliptic.P256(),
},
// ref: https://github.com/golang/go/blob/go1.19/src/crypto/x509/sec1.go#L95
D: new(big.Int).SetBytes(key),
}
// ref: https://github.com/golang/go/blob/go1.19/src/crypto/x509/sec1.go#L119
pk.X, pk.Y = pk.Curve.ScalarBaseMult(key)
sp := func(certBytes []byte) ([]byte, error) {
// We need to hash first for ECDSA
// - https://pkg.go.dev/crypto/ecdsa#SignASN1
hashed := sha256.Sum256(certBytes)
return ecdsa.SignASN1(rand.Reader, pk, hashed[:])
}
return t.SignWith(signer, curve, sp)
default:
return nil, fmt.Errorf("invalid curve: %s", t.Curve)
}
}
// SignWith does the same thing as Sign, but uses the function in `sp` to calculate the signature.
// You should only use SignWith if you do not have direct access to your private key.
func (t *TBSCertificate) SignWith(signer Certificate, curve Curve, sp SignerLambda) (Certificate, error) {
if curve != t.Curve {
return nil, fmt.Errorf("curve in cert and private key supplied don't match")
}
if signer != nil {
if t.IsCA {
return nil, fmt.Errorf("can not sign a CA certificate with another")
}
err := checkCAConstraints(signer, t.NotBefore, t.NotAfter, t.Groups, t.Networks, t.UnsafeNetworks)
if err != nil {
return nil, err
}
issuer, err := signer.Fingerprint()
if err != nil {
return nil, fmt.Errorf("error computing issuer: %v", err)
}
t.issuer = issuer
} else {
if !t.IsCA {
return nil, fmt.Errorf("self signed certificates must have IsCA set to true")
}
}
var c beingSignedCertificate
switch t.Version {
case Version1:
c = &certificateV1{}
err := c.fromTBSCertificate(t)
if err != nil {
return nil, err
}
case Version2:
c = &certificateV2{}
err := c.fromTBSCertificate(t)
if err != nil {
return nil, err
}
default:
return nil, fmt.Errorf("unknown cert version %d", t.Version)
}
certBytes, err := c.marshalForSigning()
if err != nil {
return nil, err
}
sig, err := sp(certBytes)
if err != nil {
return nil, err
}
err = c.setSignature(sig)
if err != nil {
return nil, err
}
sc, ok := c.(Certificate)
if !ok {
return nil, fmt.Errorf("invalid certificate")
}
return sc, nil
}
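// Editor's sketch of the SignWith pattern with an out-of-process signer: the
// private key never enters this process, only a callback that can sign the
// marshalled certificate bytes. signCAWithToken and remoteSign are hypothetical;
// the ca command later in this diff uses the same shape with pkclient's SignASN1.
func signCAWithToken(t *TBSCertificate, remoteSign SignerLambda) (Certificate, error) {
	if !t.IsCA {
		return nil, fmt.Errorf("expected a CA TBSCertificate")
	}
	// nil signer: the certificate is self-signed by the token-held key.
	return t.SignWith(nil, t.Curve, remoteSign)
}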
func comparePrefix(a, b netip.Prefix) int {
addr := a.Addr().Compare(b.Addr())
if addr == 0 {
return a.Bits() - b.Bits()
}
return addr
}
// findDuplicatePrefix returns an error if there is a duplicate prefix in the pre-sorted input slice sortedPrefixes
func findDuplicatePrefix(sortedPrefixes []netip.Prefix) error {
if len(sortedPrefixes) < 2 {
return nil
}
for i := 1; i < len(sortedPrefixes); i++ {
if comparePrefix(sortedPrefixes[i], sortedPrefixes[i-1]) == 0 {
return NewErrInvalidCertificateProperties("duplicate network detected: %v", sortedPrefixes[i])
}
}
return nil
}
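// Editor's sketch of how comparePrefix and findDuplicatePrefix fit together:
// sort with comparePrefix, then reject duplicates before signing. checkNetworks
// is hypothetical and assumes the standard library slices package (Go 1.21+).
func checkNetworks(networks []netip.Prefix) error {
	sorted := make([]netip.Prefix, len(networks))
	copy(sorted, networks)
	slices.SortFunc(sorted, comparePrefix)
	return findDuplicatePrefix(sorted)
}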

91
cert/sign_test.go Normal file

@@ -0,0 +1,91 @@
package cert
import (
"crypto/ecdsa"
"crypto/ed25519"
"crypto/elliptic"
"crypto/rand"
"net/netip"
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestCertificateV1_Sign(t *testing.T) {
before := time.Now().Add(time.Second * -60).Round(time.Second)
after := time.Now().Add(time.Second * 60).Round(time.Second)
pubKey := []byte("1234567890abcedfghij1234567890ab")
tbs := TBSCertificate{
Version: Version1,
Name: "testing",
Networks: []netip.Prefix{
mustParsePrefixUnmapped("10.1.1.1/24"),
mustParsePrefixUnmapped("10.1.1.2/16"),
},
UnsafeNetworks: []netip.Prefix{
mustParsePrefixUnmapped("9.1.1.2/24"),
mustParsePrefixUnmapped("9.1.1.3/24"),
},
Groups: []string{"test-group1", "test-group2", "test-group3"},
NotBefore: before,
NotAfter: after,
PublicKey: pubKey,
IsCA: false,
}
pub, priv, err := ed25519.GenerateKey(rand.Reader)
c, err := tbs.Sign(&certificateV1{details: detailsV1{notBefore: before, notAfter: after}}, Curve_CURVE25519, priv)
require.NoError(t, err)
assert.NotNil(t, c)
assert.True(t, c.CheckSignature(pub))
b, err := c.Marshal()
require.NoError(t, err)
uc, err := unmarshalCertificateV1(b, nil)
require.NoError(t, err)
assert.NotNil(t, uc)
}
func TestCertificateV1_SignP256(t *testing.T) {
before := time.Now().Add(time.Second * -60).Round(time.Second)
after := time.Now().Add(time.Second * 60).Round(time.Second)
pubKey := []byte("01234567890abcedfghij1234567890ab1234567890abcedfghij1234567890ab")
tbs := TBSCertificate{
Version: Version1,
Name: "testing",
Networks: []netip.Prefix{
mustParsePrefixUnmapped("10.1.1.1/24"),
mustParsePrefixUnmapped("10.1.1.2/16"),
},
UnsafeNetworks: []netip.Prefix{
mustParsePrefixUnmapped("9.1.1.2/24"),
mustParsePrefixUnmapped("9.1.1.3/16"),
},
Groups: []string{"test-group1", "test-group2", "test-group3"},
NotBefore: before,
NotAfter: after,
PublicKey: pubKey,
IsCA: false,
Curve: Curve_P256,
}
priv, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
require.NoError(t, err)
pub := elliptic.Marshal(elliptic.P256(), priv.PublicKey.X, priv.PublicKey.Y)
rawPriv := priv.D.FillBytes(make([]byte, 32))
c, err := tbs.Sign(&certificateV1{details: detailsV1{notBefore: before, notAfter: after}}, Curve_P256, rawPriv)
require.NoError(t, err)
assert.NotNil(t, c)
assert.True(t, c.CheckSignature(pub))
b, err := c.Marshal()
require.NoError(t, err)
uc, err := unmarshalCertificateV1(b, nil)
require.NoError(t, err)
assert.NotNil(t, uc)
}

138
cert_test/cert.go Normal file

@@ -0,0 +1,138 @@
package cert_test
import (
"crypto/ecdh"
"crypto/ecdsa"
"crypto/elliptic"
"crypto/rand"
"io"
"net/netip"
"time"
"github.com/slackhq/nebula/cert"
"golang.org/x/crypto/curve25519"
"golang.org/x/crypto/ed25519"
)
// NewTestCaCert will create a new ca certificate
func NewTestCaCert(version cert.Version, curve cert.Curve, before, after time.Time, networks, unsafeNetworks []netip.Prefix, groups []string) (cert.Certificate, []byte, []byte, []byte) {
var err error
var pub, priv []byte
switch curve {
case cert.Curve_CURVE25519:
pub, priv, err = ed25519.GenerateKey(rand.Reader)
case cert.Curve_P256:
privk, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
if err != nil {
panic(err)
}
pub = elliptic.Marshal(elliptic.P256(), privk.PublicKey.X, privk.PublicKey.Y)
priv = privk.D.FillBytes(make([]byte, 32))
default:
// No default case, so the underlying lib can return an error for unknown curves
}
if before.IsZero() {
before = time.Now().Add(time.Second * -60).Round(time.Second)
}
if after.IsZero() {
after = time.Now().Add(time.Second * 60).Round(time.Second)
}
t := &cert.TBSCertificate{
Curve: curve,
Version: version,
Name: "test ca",
NotBefore: time.Unix(before.Unix(), 0),
NotAfter: time.Unix(after.Unix(), 0),
PublicKey: pub,
Networks: networks,
UnsafeNetworks: unsafeNetworks,
Groups: groups,
IsCA: true,
}
c, err := t.Sign(nil, curve, priv)
if err != nil {
panic(err)
}
pem, err := c.MarshalPEM()
if err != nil {
panic(err)
}
return c, pub, priv, pem
}
// NewTestCert will generate a signed certificate with the provided details.
// Expiry times are defaulted if you do not pass them in
func NewTestCert(v cert.Version, curve cert.Curve, ca cert.Certificate, key []byte, name string, before, after time.Time, networks, unsafeNetworks []netip.Prefix, groups []string) (cert.Certificate, []byte, []byte, []byte) {
if before.IsZero() {
before = time.Now().Add(time.Second * -60).Round(time.Second)
}
if after.IsZero() {
after = time.Now().Add(time.Second * 60).Round(time.Second)
}
var pub, priv []byte
switch curve {
case cert.Curve_CURVE25519:
pub, priv = X25519Keypair()
case cert.Curve_P256:
pub, priv = P256Keypair()
default:
panic("unknown curve")
}
nc := &cert.TBSCertificate{
Version: v,
Curve: curve,
Name: name,
Networks: networks,
UnsafeNetworks: unsafeNetworks,
Groups: groups,
NotBefore: time.Unix(before.Unix(), 0),
NotAfter: time.Unix(after.Unix(), 0),
PublicKey: pub,
IsCA: false,
}
c, err := nc.Sign(ca, ca.Curve(), key)
if err != nil {
panic(err)
}
pem, err := c.MarshalPEM()
if err != nil {
panic(err)
}
return c, pub, cert.MarshalPrivateKeyToPEM(curve, priv), pem
}
func X25519Keypair() ([]byte, []byte) {
privkey := make([]byte, 32)
if _, err := io.ReadFull(rand.Reader, privkey); err != nil {
panic(err)
}
pubkey, err := curve25519.X25519(privkey, curve25519.Basepoint)
if err != nil {
panic(err)
}
return pubkey, privkey
}
func P256Keypair() ([]byte, []byte) {
privkey, err := ecdh.P256().GenerateKey(rand.Reader)
if err != nil {
panic(err)
}
pubkey := privkey.PublicKey()
return pubkey.Bytes(), privkey.Bytes()
}


@@ -1,170 +0,0 @@
package nebula
import (
"encoding/binary"
"fmt"
"net"
)
type CIDRNode struct {
left *CIDRNode
right *CIDRNode
parent *CIDRNode
value interface{}
}
type CIDRTree struct {
root *CIDRNode
}
const (
startbit = uint32(0x80000000)
)
func NewCIDRTree() *CIDRTree {
tree := new(CIDRTree)
tree.root = &CIDRNode{}
return tree
}
func (tree *CIDRTree) AddCIDR(cidr *net.IPNet, val interface{}) {
bit := startbit
node := tree.root
next := tree.root
ip := ip2int(cidr.IP)
mask := ip2int(cidr.Mask)
// Find our last ancestor in the tree
for bit&mask != 0 {
if ip&bit != 0 {
next = node.right
} else {
next = node.left
}
if next == nil {
break
}
bit = bit >> 1
node = next
}
// We already have this range so update the value
if next != nil {
node.value = val
return
}
// Build up the rest of the tree we don't already have
for bit&mask != 0 {
next = &CIDRNode{}
next.parent = node
if ip&bit != 0 {
node.right = next
} else {
node.left = next
}
bit >>= 1
node = next
}
// Final node marks our cidr, set the value
node.value = val
}
// Finds the first match, which may be the least specific
func (tree *CIDRTree) Contains(ip uint32) (value interface{}) {
bit := startbit
node := tree.root
for node != nil {
if node.value != nil {
return node.value
}
if ip&bit != 0 {
node = node.right
} else {
node = node.left
}
bit >>= 1
}
return value
}
// Finds the most specific match
func (tree *CIDRTree) MostSpecificContains(ip uint32) (value interface{}) {
bit := startbit
node := tree.root
for node != nil {
if node.value != nil {
value = node.value
}
if ip&bit != 0 {
node = node.right
} else {
node = node.left
}
bit >>= 1
}
return value
}
// Finds the most specific match
func (tree *CIDRTree) Match(ip uint32) (value interface{}) {
bit := startbit
node := tree.root
lastNode := node
for node != nil {
lastNode = node
if ip&bit != 0 {
node = node.right
} else {
node = node.left
}
bit >>= 1
}
if bit == 0 && lastNode != nil {
value = lastNode.value
}
return value
}
// A helper type to avoid converting to IP when logging
type IntIp uint32
func (ip IntIp) String() string {
return fmt.Sprintf("%v", int2ip(uint32(ip)))
}
func (ip IntIp) MarshalJSON() ([]byte, error) {
return []byte(fmt.Sprintf("\"%s\"", int2ip(uint32(ip)).String())), nil
}
func ip2int(ip []byte) uint32 {
if len(ip) == 16 {
return binary.BigEndian.Uint32(ip[12:16])
}
return binary.BigEndian.Uint32(ip)
}
func int2ip(nn uint32) net.IP {
ip := make(net.IP, 4)
binary.BigEndian.PutUint32(ip, nn)
return ip
}
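// Editor's note: a compact illustration of the three lookups in this (now removed)
// tree, since their comments only hint at the difference. cidrLookupExample is
// hypothetical and uses the API exactly as defined above.
func cidrLookupExample() {
	tree := NewCIDRTree()
	_, n8, _ := net.ParseCIDR("10.0.0.0/8")
	_, n16, _ := net.ParseCIDR("10.1.0.0/16")
	tree.AddCIDR(n8, "a")
	tree.AddCIDR(n16, "b")
	ip := ip2int(net.ParseIP("10.1.2.3"))
	_ = tree.Contains(ip)             // "a": first match wins, may be the least specific
	_ = tree.MostSpecificContains(ip) // "b": longest matching prefix wins
	_ = tree.Match(ip)                // nil: Match needs a node chain all the way to /32 for this address
}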


@@ -1,157 +0,0 @@
package nebula
import (
"net"
"testing"
"github.com/stretchr/testify/assert"
)
func TestCIDRTree_Contains(t *testing.T) {
tree := NewCIDRTree()
tree.AddCIDR(getCIDR("1.0.0.0/8"), "1")
tree.AddCIDR(getCIDR("2.1.0.0/16"), "2")
tree.AddCIDR(getCIDR("3.1.1.0/24"), "3")
tree.AddCIDR(getCIDR("4.1.1.0/24"), "4a")
tree.AddCIDR(getCIDR("4.1.1.1/32"), "4b")
tree.AddCIDR(getCIDR("4.1.2.1/32"), "4c")
tree.AddCIDR(getCIDR("254.0.0.0/4"), "5")
tests := []struct {
Result interface{}
IP string
}{
{"1", "1.0.0.0"},
{"1", "1.255.255.255"},
{"2", "2.1.0.0"},
{"2", "2.1.255.255"},
{"3", "3.1.1.0"},
{"3", "3.1.1.255"},
{"4a", "4.1.1.255"},
{"4a", "4.1.1.1"},
{"5", "240.0.0.0"},
{"5", "255.255.255.255"},
{nil, "239.0.0.0"},
{nil, "4.1.2.2"},
}
for _, tt := range tests {
assert.Equal(t, tt.Result, tree.Contains(ip2int(net.ParseIP(tt.IP))))
}
tree = NewCIDRTree()
tree.AddCIDR(getCIDR("1.1.1.1/0"), "cool")
assert.Equal(t, "cool", tree.Contains(ip2int(net.ParseIP("0.0.0.0"))))
assert.Equal(t, "cool", tree.Contains(ip2int(net.ParseIP("255.255.255.255"))))
}
func TestCIDRTree_MostSpecificContains(t *testing.T) {
tree := NewCIDRTree()
tree.AddCIDR(getCIDR("1.0.0.0/8"), "1")
tree.AddCIDR(getCIDR("2.1.0.0/16"), "2")
tree.AddCIDR(getCIDR("3.1.1.0/24"), "3")
tree.AddCIDR(getCIDR("4.1.1.0/24"), "4a")
tree.AddCIDR(getCIDR("4.1.1.0/30"), "4b")
tree.AddCIDR(getCIDR("4.1.1.1/32"), "4c")
tree.AddCIDR(getCIDR("254.0.0.0/4"), "5")
tests := []struct {
Result interface{}
IP string
}{
{"1", "1.0.0.0"},
{"1", "1.255.255.255"},
{"2", "2.1.0.0"},
{"2", "2.1.255.255"},
{"3", "3.1.1.0"},
{"3", "3.1.1.255"},
{"4a", "4.1.1.255"},
{"4b", "4.1.1.2"},
{"4c", "4.1.1.1"},
{"5", "240.0.0.0"},
{"5", "255.255.255.255"},
{nil, "239.0.0.0"},
{nil, "4.1.2.2"},
}
for _, tt := range tests {
assert.Equal(t, tt.Result, tree.MostSpecificContains(ip2int(net.ParseIP(tt.IP))))
}
tree = NewCIDRTree()
tree.AddCIDR(getCIDR("1.1.1.1/0"), "cool")
assert.Equal(t, "cool", tree.MostSpecificContains(ip2int(net.ParseIP("0.0.0.0"))))
assert.Equal(t, "cool", tree.MostSpecificContains(ip2int(net.ParseIP("255.255.255.255"))))
}
func TestCIDRTree_Match(t *testing.T) {
tree := NewCIDRTree()
tree.AddCIDR(getCIDR("4.1.1.0/32"), "1a")
tree.AddCIDR(getCIDR("4.1.1.1/32"), "1b")
tests := []struct {
Result interface{}
IP string
}{
{"1a", "4.1.1.0"},
{"1b", "4.1.1.1"},
}
for _, tt := range tests {
assert.Equal(t, tt.Result, tree.Match(ip2int(net.ParseIP(tt.IP))))
}
tree = NewCIDRTree()
tree.AddCIDR(getCIDR("1.1.1.1/0"), "cool")
assert.Equal(t, "cool", tree.Contains(ip2int(net.ParseIP("0.0.0.0"))))
assert.Equal(t, "cool", tree.Contains(ip2int(net.ParseIP("255.255.255.255"))))
}
func BenchmarkCIDRTree_Contains(b *testing.B) {
tree := NewCIDRTree()
tree.AddCIDR(getCIDR("1.1.0.0/16"), "1")
tree.AddCIDR(getCIDR("1.2.1.1/32"), "1")
tree.AddCIDR(getCIDR("192.2.1.1/32"), "1")
tree.AddCIDR(getCIDR("172.2.1.1/32"), "1")
ip := ip2int(net.ParseIP("1.2.1.1"))
b.Run("found", func(b *testing.B) {
for i := 0; i < b.N; i++ {
tree.Contains(ip)
}
})
ip = ip2int(net.ParseIP("1.2.1.255"))
b.Run("not found", func(b *testing.B) {
for i := 0; i < b.N; i++ {
tree.Contains(ip)
}
})
}
func BenchmarkCIDRTree_Match(b *testing.B) {
tree := NewCIDRTree()
tree.AddCIDR(getCIDR("1.1.0.0/16"), "1")
tree.AddCIDR(getCIDR("1.2.1.1/32"), "1")
tree.AddCIDR(getCIDR("192.2.1.1/32"), "1")
tree.AddCIDR(getCIDR("172.2.1.1/32"), "1")
ip := ip2int(net.ParseIP("1.2.1.1"))
b.Run("found", func(b *testing.B) {
for i := 0; i < b.N; i++ {
tree.Match(ip)
}
})
ip = ip2int(net.ParseIP("1.2.1.255"))
b.Run("not found", func(b *testing.B) {
for i := 0; i < b.N; i++ {
tree.Match(ip)
}
})
}
func getCIDR(s string) *net.IPNet {
_, c, _ := net.ParseCIDR(s)
return c
}


@@ -1,60 +1,112 @@
package main
import (
"crypto/ecdsa"
"crypto/elliptic"
"crypto/rand"
"flag"
"fmt"
"io"
"io/ioutil"
"net"
"math"
"net/netip"
"os"
"strings"
"time"
"github.com/skip2/go-qrcode"
"github.com/slackhq/nebula/cert"
"github.com/slackhq/nebula/pkclient"
"golang.org/x/crypto/ed25519"
)
type caFlags struct {
set *flag.FlagSet
name *string
duration *time.Duration
outKeyPath *string
outCertPath *string
groups *string
ips *string
subnets *string
set *flag.FlagSet
name *string
duration *time.Duration
outKeyPath *string
outCertPath *string
outQRPath *string
groups *string
networks *string
unsafeNetworks *string
argonMemory *uint
argonIterations *uint
argonParallelism *uint
encryption *bool
version *uint
curve *string
p11url *string
// Deprecated options
ips *string
subnets *string
}
func newCaFlags() *caFlags {
cf := caFlags{set: flag.NewFlagSet("ca", flag.ContinueOnError)}
cf.set.Usage = func() {}
cf.name = cf.set.String("name", "", "Required: name of the certificate authority")
cf.version = cf.set.Uint("version", uint(cert.Version2), "Optional: version of the certificate format to use")
cf.duration = cf.set.Duration("duration", time.Duration(time.Hour*8760), "Optional: amount of time the certificate should be valid for. Valid time units are seconds: \"s\", minutes: \"m\", hours: \"h\"")
cf.outKeyPath = cf.set.String("out-key", "ca.key", "Optional: path to write the private key to")
cf.outCertPath = cf.set.String("out-crt", "ca.crt", "Optional: path to write the certificate to")
cf.outQRPath = cf.set.String("out-qr", "", "Optional: output a qr code image (png) of the certificate")
cf.groups = cf.set.String("groups", "", "Optional: comma separated list of groups. This will limit which groups subordinate certs can use")
cf.ips = cf.set.String("ips", "", "Optional: comma separated list of ip and network in CIDR notation. This will limit which ip addresses and networks subordinate certs can use")
cf.subnets = cf.set.String("subnets", "", "Optional: comma separated list of ip and network in CIDR notation. This will limit which subnet addresses and networks subordinate certs can use")
cf.networks = cf.set.String("networks", "", "Optional: comma separated list of ip address and network in CIDR notation. This will limit which ip addresses and networks subordinate certs can use in networks")
cf.unsafeNetworks = cf.set.String("unsafe-networks", "", "Optional: comma separated list of ip address and network in CIDR notation. This will limit which ip addresses and networks subordinate certs can use in unsafe networks")
cf.argonMemory = cf.set.Uint("argon-memory", 2*1024*1024, "Optional: Argon2 memory parameter (in KiB) used for encrypted private key passphrase")
cf.argonParallelism = cf.set.Uint("argon-parallelism", 4, "Optional: Argon2 parallelism parameter used for encrypted private key passphrase")
cf.argonIterations = cf.set.Uint("argon-iterations", 1, "Optional: Argon2 iterations parameter used for encrypted private key passphrase")
cf.encryption = cf.set.Bool("encrypt", false, "Optional: prompt for passphrase and write out-key in an encrypted format")
cf.curve = cf.set.String("curve", "25519", "EdDSA/ECDSA Curve (25519, P256)")
cf.p11url = p11Flag(cf.set)
cf.ips = cf.set.String("ips", "", "Deprecated, see -networks")
cf.subnets = cf.set.String("subnets", "", "Deprecated, see -unsafe-networks")
return &cf
}
func ca(args []string, out io.Writer, errOut io.Writer) error {
func parseArgonParameters(memory uint, parallelism uint, iterations uint) (*cert.Argon2Parameters, error) {
if memory <= 0 || memory > math.MaxUint32 {
return nil, newHelpErrorf("-argon-memory must be greater than 0 and no more than %d KiB", uint32(math.MaxUint32))
}
if parallelism <= 0 || parallelism > math.MaxUint8 {
return nil, newHelpErrorf("-argon-parallelism must be greater than 0 and no more than %d", math.MaxUint8)
}
if iterations <= 0 || iterations > math.MaxUint32 {
return nil, newHelpErrorf("-argon-iterations must be greater than 0 and no more than %d", uint32(math.MaxUint32))
}
return cert.NewArgon2Parameters(uint32(memory), uint8(parallelism), uint32(iterations)), nil
}
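// Editor's sketch of the path from flags to an encrypted key file: validate the
// Argon2 flags, then hand the parameters to cert.EncryptAndMarshalSigningPrivateKey
// as the ca command does further down. encryptKeyWithDefaults is hypothetical;
// the numeric values are the -argon-* flag defaults declared in newCaFlags.
func encryptKeyWithDefaults(curve cert.Curve, rawPriv, passphrase []byte) ([]byte, error) {
	kdfParams, err := parseArgonParameters(2*1024*1024, 4, 1)
	if err != nil {
		return nil, err
	}
	return cert.EncryptAndMarshalSigningPrivateKey(curve, rawPriv, passphrase, kdfParams)
}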
func ca(args []string, out io.Writer, errOut io.Writer, pr PasswordReader) error {
cf := newCaFlags()
err := cf.set.Parse(args)
if err != nil {
return err
}
isP11 := len(*cf.p11url) > 0
if err := mustFlagString("name", cf.name); err != nil {
return err
}
if err := mustFlagString("out-key", cf.outKeyPath); err != nil {
return err
if !isP11 {
if err = mustFlagString("out-key", cf.outKeyPath); err != nil {
return err
}
}
if err := mustFlagString("out-crt", cf.outCertPath); err != nil {
return err
}
var kdfParams *cert.Argon2Parameters
if !isP11 && *cf.encryption {
if kdfParams, err = parseArgonParameters(*cf.argonMemory, *cf.argonParallelism, *cf.argonIterations); err != nil {
return err
}
}
if *cf.duration <= 0 {
return &helpError{"-duration must be greater than 0"}
@@ -70,82 +122,203 @@ func ca(args []string, out io.Writer, errOut io.Writer) error {
}
}
var ips []*net.IPNet
if *cf.ips != "" {
for _, rs := range strings.Split(*cf.ips, ",") {
version := cert.Version(*cf.version)
if version != cert.Version1 && version != cert.Version2 {
return newHelpErrorf("-version must be either %v or %v", cert.Version1, cert.Version2)
}
var networks []netip.Prefix
if *cf.networks == "" && *cf.ips != "" {
// Pull up deprecated -ips flag if needed
*cf.networks = *cf.ips
}
if *cf.networks != "" {
for _, rs := range strings.Split(*cf.networks, ",") {
rs := strings.Trim(rs, " ")
if rs != "" {
ip, ipNet, err := net.ParseCIDR(rs)
n, err := netip.ParsePrefix(rs)
if err != nil {
return newHelpErrorf("invalid ip definition: %s", err)
return newHelpErrorf("invalid -networks definition: %s", rs)
}
ipNet.IP = ip
ips = append(ips, ipNet)
if version == cert.Version1 && !n.Addr().Is4() {
return newHelpErrorf("invalid -networks definition: v1 certificates can only be ipv4, have %s", rs)
}
networks = append(networks, n)
}
}
}
var subnets []*net.IPNet
if *cf.subnets != "" {
for _, rs := range strings.Split(*cf.subnets, ",") {
var unsafeNetworks []netip.Prefix
if *cf.unsafeNetworks == "" && *cf.subnets != "" {
// Pull up deprecated -subnets flag if needed
*cf.unsafeNetworks = *cf.subnets
}
if *cf.unsafeNetworks != "" {
for _, rs := range strings.Split(*cf.unsafeNetworks, ",") {
rs := strings.Trim(rs, " ")
if rs != "" {
_, s, err := net.ParseCIDR(rs)
n, err := netip.ParsePrefix(rs)
if err != nil {
return newHelpErrorf("invalid subnet definition: %s", err)
return newHelpErrorf("invalid -unsafe-networks definition: %s", rs)
}
subnets = append(subnets, s)
if version == cert.Version1 && !n.Addr().Is4() {
return newHelpErrorf("invalid -unsafe-networks definition: v1 certificates can only be ipv4, have %s", rs)
}
unsafeNetworks = append(unsafeNetworks, n)
}
}
}
pub, rawPriv, err := ed25519.GenerateKey(rand.Reader)
if err != nil {
return fmt.Errorf("error while generating ed25519 keys: %s", err)
var passphrase []byte
if !isP11 && *cf.encryption {
for i := 0; i < 5; i++ {
out.Write([]byte("Enter passphrase: "))
passphrase, err = pr.ReadPassword()
if err == ErrNoTerminal {
return fmt.Errorf("out-key must be encrypted interactively")
} else if err != nil {
return fmt.Errorf("error reading passphrase: %s", err)
}
if len(passphrase) > 0 {
break
}
}
if len(passphrase) == 0 {
return fmt.Errorf("no passphrase specified, remove -encrypt flag to write out-key in plaintext")
}
}
nc := cert.NebulaCertificate{
Details: cert.NebulaCertificateDetails{
Name: *cf.name,
Groups: groups,
Ips: ips,
Subnets: subnets,
NotBefore: time.Now(),
NotAfter: time.Now().Add(*cf.duration),
PublicKey: pub,
IsCA: true,
},
var curve cert.Curve
var pub, rawPriv []byte
var p11Client *pkclient.PKClient
if isP11 {
switch *cf.curve {
case "P256":
curve = cert.Curve_P256
default:
return fmt.Errorf("invalid curve for PKCS#11: %s", *cf.curve)
}
p11Client, err = pkclient.FromUrl(*cf.p11url)
if err != nil {
return fmt.Errorf("error while creating PKCS#11 client: %w", err)
}
defer func(client *pkclient.PKClient) {
_ = client.Close()
}(p11Client)
pub, err = p11Client.GetPubKey()
if err != nil {
return fmt.Errorf("error while getting public key with PKCS#11: %w", err)
}
} else {
switch *cf.curve {
case "25519", "X25519", "Curve25519", "CURVE25519":
curve = cert.Curve_CURVE25519
pub, rawPriv, err = ed25519.GenerateKey(rand.Reader)
if err != nil {
return fmt.Errorf("error while generating ed25519 keys: %s", err)
}
case "P256":
var key *ecdsa.PrivateKey
curve = cert.Curve_P256
key, err = ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
if err != nil {
return fmt.Errorf("error while generating ecdsa keys: %s", err)
}
// ecdh.PrivateKey lets us get at the encoded bytes, even though
// we aren't using ECDH here.
eKey, err := key.ECDH()
if err != nil {
return fmt.Errorf("error while converting ecdsa key: %s", err)
}
rawPriv = eKey.Bytes()
pub = eKey.PublicKey().Bytes()
default:
return fmt.Errorf("invalid curve: %s", *cf.curve)
}
}
if _, err := os.Stat(*cf.outKeyPath); err == nil {
return fmt.Errorf("refusing to overwrite existing CA key: %s", *cf.outKeyPath)
t := &cert.TBSCertificate{
Version: version,
Name: *cf.name,
Groups: groups,
Networks: networks,
UnsafeNetworks: unsafeNetworks,
NotBefore: time.Now(),
NotAfter: time.Now().Add(*cf.duration),
PublicKey: pub,
IsCA: true,
Curve: curve,
}
if !isP11 {
if _, err := os.Stat(*cf.outKeyPath); err == nil {
return fmt.Errorf("refusing to overwrite existing CA key: %s", *cf.outKeyPath)
}
}
if _, err := os.Stat(*cf.outCertPath); err == nil {
return fmt.Errorf("refusing to overwrite existing CA cert: %s", *cf.outCertPath)
}
err = nc.Sign(rawPriv)
if err != nil {
return fmt.Errorf("error while signing: %s", err)
var c cert.Certificate
var b []byte
if isP11 {
c, err = t.SignWith(nil, curve, p11Client.SignASN1)
if err != nil {
return fmt.Errorf("error while signing with PKCS#11: %w", err)
}
} else {
c, err = t.Sign(nil, curve, rawPriv)
if err != nil {
return fmt.Errorf("error while signing: %s", err)
}
if *cf.encryption {
b, err = cert.EncryptAndMarshalSigningPrivateKey(curve, rawPriv, passphrase, kdfParams)
if err != nil {
return fmt.Errorf("error while encrypting out-key: %s", err)
}
} else {
b = cert.MarshalSigningPrivateKeyToPEM(curve, rawPriv)
}
err = os.WriteFile(*cf.outKeyPath, b, 0600)
if err != nil {
return fmt.Errorf("error while writing out-key: %s", err)
}
}
err = ioutil.WriteFile(*cf.outKeyPath, cert.MarshalEd25519PrivateKey(rawPriv), 0600)
if err != nil {
return fmt.Errorf("error while writing out-key: %s", err)
}
b, err := nc.MarshalToPEM()
b, err = c.MarshalPEM()
if err != nil {
return fmt.Errorf("error while marshalling certificate: %s", err)
}
err = ioutil.WriteFile(*cf.outCertPath, b, 0600)
err = os.WriteFile(*cf.outCertPath, b, 0600)
if err != nil {
return fmt.Errorf("error while writing out-crt: %s", err)
}
if *cf.outQRPath != "" {
b, err = qrcode.Encode(string(b), qrcode.Medium, -5)
if err != nil {
return fmt.Errorf("error while generating qr code: %s", err)
}
err = os.WriteFile(*cf.outQRPath, b, 0600)
if err != nil {
return fmt.Errorf("error while writing out-qr: %s", err)
}
}
return nil
}


@@ -1,20 +1,22 @@
//go:build !windows
// +build !windows
package main
import (
"bytes"
"io/ioutil"
"encoding/pem"
"errors"
"os"
"strings"
"testing"
"time"
"github.com/slackhq/nebula/cert"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
//TODO: test file permissions
func Test_caSummary(t *testing.T) {
assert.Equal(t, "ca <flags>: create a self signed certificate authority", caSummary())
}
@@ -25,20 +27,39 @@ func Test_caHelp(t *testing.T) {
assert.Equal(
t,
"Usage of "+os.Args[0]+" ca <flags>: create a self signed certificate authority\n"+
" -argon-iterations uint\n"+
" \tOptional: Argon2 iterations parameter used for encrypted private key passphrase (default 1)\n"+
" -argon-memory uint\n"+
" \tOptional: Argon2 memory parameter (in KiB) used for encrypted private key passphrase (default 2097152)\n"+
" -argon-parallelism uint\n"+
" \tOptional: Argon2 parallelism parameter used for encrypted private key passphrase (default 4)\n"+
" -curve string\n"+
" \tEdDSA/ECDSA Curve (25519, P256) (default \"25519\")\n"+
" -duration duration\n"+
" \tOptional: amount of time the certificate should be valid for. Valid time units are seconds: \"s\", minutes: \"m\", hours: \"h\" (default 8760h0m0s)\n"+
" -encrypt\n"+
" \tOptional: prompt for passphrase and write out-key in an encrypted format\n"+
" -groups string\n"+
" \tOptional: comma separated list of groups. This will limit which groups subordinate certs can use\n"+
" -ips string\n"+
" \tOptional: comma separated list of ip and network in CIDR notation. This will limit which ip addresses and networks subordinate certs can use\n"+
" Deprecated, see -networks\n"+
" -name string\n"+
" \tRequired: name of the certificate authority\n"+
" -networks string\n"+
" \tOptional: comma separated list of ip address and network in CIDR notation. This will limit which ip addresses and networks subordinate certs can use in networks\n"+
" -out-crt string\n"+
" \tOptional: path to write the certificate to (default \"ca.crt\")\n"+
" -out-key string\n"+
" \tOptional: path to write the private key to (default \"ca.key\")\n"+
" -out-qr string\n"+
" \tOptional: output a qr code image (png) of the certificate\n"+
optionalPkcs11String(" -pkcs11 string\n \tOptional: PKCS#11 URI to an existing private key\n")+
" -subnets string\n"+
" \tOptional: comma separated list of ip and network in CIDR notation. This will limit which subnet addresses and networks subordinate certs can use\n",
" \tDeprecated, see -unsafe-networks\n"+
" -unsafe-networks string\n"+
" \tOptional: comma separated list of ip address and network in CIDR notation. This will limit which ip addresses and networks subordinate certs can use in unsafe networks\n"+
" -version uint\n"+
" \tOptional: version of the certificate format to use (default 2)\n",
ob.String(),
)
}
@@ -47,92 +68,171 @@ func Test_ca(t *testing.T) {
ob := &bytes.Buffer{}
eb := &bytes.Buffer{}
nopw := &StubPasswordReader{
password: []byte(""),
err: nil,
}
errpw := &StubPasswordReader{
password: []byte(""),
err: errors.New("stub error"),
}
passphrase := []byte("DO NOT USE THIS KEY")
testpw := &StubPasswordReader{
password: passphrase,
err: nil,
}
pwPromptOb := "Enter passphrase: "
// required args
assertHelpError(t, ca([]string{"-out-key", "nope", "-out-crt", "nope", "duration", "100m"}, ob, eb), "-name is required")
assert.Equal(t, "", ob.String())
assert.Equal(t, "", eb.String())
assertHelpError(t, ca(
[]string{"-version", "1", "-out-key", "nope", "-out-crt", "nope", "duration", "100m"}, ob, eb, nopw,
), "-name is required")
assert.Empty(t, ob.String())
assert.Empty(t, eb.String())
// ipv4 only ips
assertHelpError(t, ca([]string{"-version", "1", "-name", "ipv6", "-ips", "100::100/100"}, ob, eb, nopw), "invalid -networks definition: v1 certificates can only be ipv4, have 100::100/100")
assert.Empty(t, ob.String())
assert.Empty(t, eb.String())
// ipv4 only subnets
assertHelpError(t, ca([]string{"-version", "1", "-name", "ipv6", "-subnets", "100::100/100"}, ob, eb, nopw), "invalid -unsafe-networks definition: v1 certificates can only be ipv4, have 100::100/100")
assert.Empty(t, ob.String())
assert.Empty(t, eb.String())
// failed key write
ob.Reset()
eb.Reset()
args := []string{"-name", "test", "-duration", "100m", "-out-crt", "/do/not/write/pleasecrt", "-out-key", "/do/not/write/pleasekey"}
assert.EqualError(t, ca(args, ob, eb), "error while writing out-key: open /do/not/write/pleasekey: "+NoSuchDirError)
assert.Equal(t, "", ob.String())
assert.Equal(t, "", eb.String())
args := []string{"-version", "1", "-name", "test", "-duration", "100m", "-out-crt", "/do/not/write/pleasecrt", "-out-key", "/do/not/write/pleasekey"}
require.EqualError(t, ca(args, ob, eb, nopw), "error while writing out-key: open /do/not/write/pleasekey: "+NoSuchDirError)
assert.Empty(t, ob.String())
assert.Empty(t, eb.String())
// create temp key file
keyF, err := ioutil.TempFile("", "test.key")
assert.Nil(t, err)
os.Remove(keyF.Name())
keyF, err := os.CreateTemp("", "test.key")
require.NoError(t, err)
require.NoError(t, os.Remove(keyF.Name()))
// failed cert write
ob.Reset()
eb.Reset()
args = []string{"-name", "test", "-duration", "100m", "-out-crt", "/do/not/write/pleasecrt", "-out-key", keyF.Name()}
assert.EqualError(t, ca(args, ob, eb), "error while writing out-crt: open /do/not/write/pleasecrt: "+NoSuchDirError)
assert.Equal(t, "", ob.String())
assert.Equal(t, "", eb.String())
args = []string{"-version", "1", "-name", "test", "-duration", "100m", "-out-crt", "/do/not/write/pleasecrt", "-out-key", keyF.Name()}
require.EqualError(t, ca(args, ob, eb, nopw), "error while writing out-crt: open /do/not/write/pleasecrt: "+NoSuchDirError)
assert.Empty(t, ob.String())
assert.Empty(t, eb.String())
// create temp cert file
crtF, err := ioutil.TempFile("", "test.crt")
assert.Nil(t, err)
os.Remove(crtF.Name())
os.Remove(keyF.Name())
crtF, err := os.CreateTemp("", "test.crt")
require.NoError(t, err)
require.NoError(t, os.Remove(crtF.Name()))
require.NoError(t, os.Remove(keyF.Name()))
// test proper cert with removed empty groups and subnets
ob.Reset()
eb.Reset()
args = []string{"-name", "test", "-duration", "100m", "-groups", "1,, 2 , ,,,3,4,5", "-out-crt", crtF.Name(), "-out-key", keyF.Name()}
assert.Nil(t, ca(args, ob, eb))
assert.Equal(t, "", ob.String())
assert.Equal(t, "", eb.String())
args = []string{"-version", "1", "-name", "test", "-duration", "100m", "-groups", "1,, 2 , ,,,3,4,5", "-out-crt", crtF.Name(), "-out-key", keyF.Name()}
require.NoError(t, ca(args, ob, eb, nopw))
assert.Empty(t, ob.String())
assert.Empty(t, eb.String())
// read cert and key files
rb, _ := ioutil.ReadFile(keyF.Name())
lKey, b, err := cert.UnmarshalEd25519PrivateKey(rb)
assert.Len(t, b, 0)
assert.Nil(t, err)
rb, _ := os.ReadFile(keyF.Name())
lKey, b, c, err := cert.UnmarshalSigningPrivateKeyFromPEM(rb)
assert.Equal(t, cert.Curve_CURVE25519, c)
assert.Empty(t, b)
require.NoError(t, err)
assert.Len(t, lKey, 64)
rb, _ = ioutil.ReadFile(crtF.Name())
lCrt, b, err := cert.UnmarshalNebulaCertificateFromPEM(rb)
assert.Len(t, b, 0)
assert.Nil(t, err)
rb, _ = os.ReadFile(crtF.Name())
lCrt, b, err := cert.UnmarshalCertificateFromPEM(rb)
assert.Empty(t, b)
require.NoError(t, err)
assert.Equal(t, "test", lCrt.Details.Name)
assert.Len(t, lCrt.Details.Ips, 0)
assert.True(t, lCrt.Details.IsCA)
assert.Equal(t, []string{"1", "2", "3", "4", "5"}, lCrt.Details.Groups)
assert.Len(t, lCrt.Details.Subnets, 0)
assert.Len(t, lCrt.Details.PublicKey, 32)
assert.Equal(t, time.Duration(time.Minute*100), lCrt.Details.NotAfter.Sub(lCrt.Details.NotBefore))
assert.Equal(t, "", lCrt.Details.Issuer)
assert.True(t, lCrt.CheckSignature(lCrt.Details.PublicKey))
assert.Equal(t, "test", lCrt.Name())
assert.Empty(t, lCrt.Networks())
assert.True(t, lCrt.IsCA())
assert.Equal(t, []string{"1", "2", "3", "4", "5"}, lCrt.Groups())
assert.Empty(t, lCrt.UnsafeNetworks())
assert.Len(t, lCrt.PublicKey(), 32)
assert.Equal(t, time.Duration(time.Minute*100), lCrt.NotAfter().Sub(lCrt.NotBefore()))
assert.Empty(t, lCrt.Issuer())
assert.True(t, lCrt.CheckSignature(lCrt.PublicKey()))
// test encrypted key
os.Remove(keyF.Name())
os.Remove(crtF.Name())
ob.Reset()
eb.Reset()
args = []string{"-version", "1", "-encrypt", "-name", "test", "-duration", "100m", "-groups", "1,2,3,4,5", "-out-crt", crtF.Name(), "-out-key", keyF.Name()}
require.NoError(t, ca(args, ob, eb, testpw))
assert.Equal(t, pwPromptOb, ob.String())
assert.Empty(t, eb.String())
// read encrypted key file and verify default params
rb, _ = os.ReadFile(keyF.Name())
k, _ := pem.Decode(rb)
ned, err := cert.UnmarshalNebulaEncryptedData(k.Bytes)
require.NoError(t, err)
// we won't know the salt in advance, so just verify the KDF parameters
assert.Equal(t, uint32(2*1024*1024), ned.EncryptionMetadata.Argon2Parameters.Memory)
assert.Equal(t, uint8(4), ned.EncryptionMetadata.Argon2Parameters.Parallelism)
assert.Equal(t, uint32(1), ned.EncryptionMetadata.Argon2Parameters.Iterations)
// verify the key is valid and decrypt-able
var curve cert.Curve
curve, lKey, b, err = cert.DecryptAndUnmarshalSigningPrivateKey(passphrase, rb)
assert.Equal(t, cert.Curve_CURVE25519, curve)
require.NoError(t, err)
assert.Empty(t, b)
assert.Len(t, lKey, 64)
// test when reading password results in an error
os.Remove(keyF.Name())
os.Remove(crtF.Name())
ob.Reset()
eb.Reset()
args = []string{"-version", "1", "-encrypt", "-name", "test", "-duration", "100m", "-groups", "1,2,3,4,5", "-out-crt", crtF.Name(), "-out-key", keyF.Name()}
require.Error(t, ca(args, ob, eb, errpw))
assert.Equal(t, pwPromptOb, ob.String())
assert.Empty(t, eb.String())
// test when user fails to enter a password
os.Remove(keyF.Name())
os.Remove(crtF.Name())
ob.Reset()
eb.Reset()
args = []string{"-version", "1", "-encrypt", "-name", "test", "-duration", "100m", "-groups", "1,2,3,4,5", "-out-crt", crtF.Name(), "-out-key", keyF.Name()}
require.EqualError(t, ca(args, ob, eb, nopw), "no passphrase specified, remove -encrypt flag to write out-key in plaintext")
assert.Equal(t, strings.Repeat(pwPromptOb, 5), ob.String()) // prompts 5 times before giving up
assert.Empty(t, eb.String())
// create valid cert/key for overwrite tests
os.Remove(keyF.Name())
os.Remove(crtF.Name())
ob.Reset()
eb.Reset()
args = []string{"-name", "test", "-duration", "100m", "-groups", "1,, 2 , ,,,3,4,5", "-out-crt", crtF.Name(), "-out-key", keyF.Name()}
assert.Nil(t, ca(args, ob, eb))
args = []string{"-version", "1", "-name", "test", "-duration", "100m", "-groups", "1,, 2 , ,,,3,4,5", "-out-crt", crtF.Name(), "-out-key", keyF.Name()}
require.NoError(t, ca(args, ob, eb, nopw))
// test that we won't overwrite existing certificate file
ob.Reset()
eb.Reset()
args = []string{"-name", "test", "-duration", "100m", "-groups", "1,, 2 , ,,,3,4,5", "-out-crt", crtF.Name(), "-out-key", keyF.Name()}
assert.EqualError(t, ca(args, ob, eb), "refusing to overwrite existing CA key: "+keyF.Name())
assert.Equal(t, "", ob.String())
assert.Equal(t, "", eb.String())
args = []string{"-version", "1", "-name", "test", "-duration", "100m", "-groups", "1,, 2 , ,,,3,4,5", "-out-crt", crtF.Name(), "-out-key", keyF.Name()}
require.EqualError(t, ca(args, ob, eb, nopw), "refusing to overwrite existing CA key: "+keyF.Name())
assert.Empty(t, ob.String())
assert.Empty(t, eb.String())
// test that we won't overwrite existing key file
os.Remove(keyF.Name())
ob.Reset()
eb.Reset()
args = []string{"-name", "test", "-duration", "100m", "-groups", "1,, 2 , ,,,3,4,5", "-out-crt", crtF.Name(), "-out-key", keyF.Name()}
assert.EqualError(t, ca(args, ob, eb), "refusing to overwrite existing CA cert: "+crtF.Name())
assert.Equal(t, "", ob.String())
assert.Equal(t, "", eb.String())
args = []string{"-version", "1", "-name", "test", "-duration", "100m", "-groups", "1,, 2 , ,,,3,4,5", "-out-crt", crtF.Name(), "-out-key", keyF.Name()}
require.EqualError(t, ca(args, ob, eb, nopw), "refusing to overwrite existing CA cert: "+crtF.Name())
assert.Empty(t, ob.String())
assert.Empty(t, eb.String())
os.Remove(keyF.Name())
}
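
For reference, the encrypted CA key exercised above can be consumed with the same cert package calls the CLI uses. A minimal sketch, assuming the API shapes introduced in this change and a hypothetical key path and passphrase:

package main

import (
	"errors"
	"fmt"
	"os"

	"github.com/slackhq/nebula/cert"
)

// loadCAKey reads a key written by `nebula-cert ca` and, if the key was
// created with -encrypt, falls back to decrypting it with the passphrase.
// Sketch only; path and passphrase handling are placeholders.
func loadCAKey(path string, passphrase []byte) ([]byte, cert.Curve, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return nil, cert.Curve_CURVE25519, err
	}

	// Try the plaintext PEM form first.
	key, _, curve, err := cert.UnmarshalSigningPrivateKeyFromPEM(raw)
	if errors.Is(err, cert.ErrPrivateKeyEncrypted) {
		// Encrypted form: the Argon2 parameters travel inside the PEM block.
		curve, key, _, err = cert.DecryptAndUnmarshalSigningPrivateKey(passphrase, raw)
	}
	if err != nil {
		return nil, curve, fmt.Errorf("unable to load ca key: %w", err)
	}
	return key, curve, nil
}

func main() {
	key, curve, err := loadCAKey("ca.key", []byte("example passphrase"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("loaded %d-byte signing key (curve %v)\n", len(key), curve)
}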


@@ -4,9 +4,10 @@ import (
"flag"
"fmt"
"io"
"io/ioutil"
"os"
"github.com/slackhq/nebula/pkclient"
"github.com/slackhq/nebula/cert"
)
@@ -14,6 +15,8 @@ type keygenFlags struct {
set *flag.FlagSet
outKeyPath *string
outPubPath *string
curve *string
p11url *string
}
func newKeygenFlags() *keygenFlags {
@@ -21,6 +24,8 @@ func newKeygenFlags() *keygenFlags {
cf.set.Usage = func() {}
cf.outPubPath = cf.set.String("out-pub", "", "Required: path to write the public key to")
cf.outKeyPath = cf.set.String("out-key", "", "Required: path to write the private key to")
cf.curve = cf.set.String("curve", "25519", "ECDH Curve (25519, P256)")
cf.p11url = p11Flag(cf.set)
return &cf
}
@@ -31,21 +36,58 @@ func keygen(args []string, out io.Writer, errOut io.Writer) error {
return err
}
if err := mustFlagString("out-key", cf.outKeyPath); err != nil {
return err
isP11 := len(*cf.p11url) > 0
if !isP11 {
if err = mustFlagString("out-key", cf.outKeyPath); err != nil {
return err
}
}
if err := mustFlagString("out-pub", cf.outPubPath); err != nil {
if err = mustFlagString("out-pub", cf.outPubPath); err != nil {
return err
}
pub, rawPriv := x25519Keypair()
err = ioutil.WriteFile(*cf.outKeyPath, cert.MarshalX25519PrivateKey(rawPriv), 0600)
if err != nil {
return fmt.Errorf("error while writing out-key: %s", err)
var pub, rawPriv []byte
var curve cert.Curve
if isP11 {
switch *cf.curve {
case "P256":
curve = cert.Curve_P256
default:
return fmt.Errorf("invalid curve for PKCS#11: %s", *cf.curve)
}
} else {
switch *cf.curve {
case "25519", "X25519", "Curve25519", "CURVE25519":
pub, rawPriv = x25519Keypair()
curve = cert.Curve_CURVE25519
case "P256":
pub, rawPriv = p256Keypair()
curve = cert.Curve_P256
default:
return fmt.Errorf("invalid curve: %s", *cf.curve)
}
}
err = ioutil.WriteFile(*cf.outPubPath, cert.MarshalX25519PublicKey(pub), 0600)
if isP11 {
p11Client, err := pkclient.FromUrl(*cf.p11url)
if err != nil {
return fmt.Errorf("error while creating PKCS#11 client: %w", err)
}
defer func(client *pkclient.PKClient) {
_ = client.Close()
}(p11Client)
pub, err = p11Client.GetPubKey()
if err != nil {
return fmt.Errorf("error while getting public key: %w", err)
}
} else {
err = os.WriteFile(*cf.outKeyPath, cert.MarshalPrivateKeyToPEM(curve, rawPriv), 0600)
if err != nil {
return fmt.Errorf("error while writing out-key: %s", err)
}
}
err = os.WriteFile(*cf.outPubPath, cert.MarshalPublicKeyToPEM(curve, pub), 0600)
if err != nil {
return fmt.Errorf("error while writing out-pub: %s", err)
}
@@ -59,7 +101,7 @@ func keygenSummary() string {
func keygenHelp(out io.Writer) {
cf := newKeygenFlags()
out.Write([]byte("Usage of " + os.Args[0] + " " + keygenSummary() + "\n"))
_, _ = out.Write([]byte("Usage of " + os.Args[0] + " " + keygenSummary() + "\n"))
cf.set.SetOutput(out)
cf.set.PrintDefaults()
}
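
The keys written by the reworked keygen are plain Nebula PEM blocks, so they round-trip through the unmarshal helpers used elsewhere in this change. A minimal sketch, assuming those helpers and hypothetical output file names:

package main

import (
	"fmt"
	"os"

	"github.com/slackhq/nebula/cert"
)

func main() {
	// Hypothetical outputs of: nebula-cert keygen -out-key host.key -out-pub host.pub
	rawKey, err := os.ReadFile("host.key")
	if err != nil {
		panic(err)
	}
	priv, _, keyCurve, err := cert.UnmarshalPrivateKeyFromPEM(rawKey)
	if err != nil {
		panic(err)
	}

	rawPub, err := os.ReadFile("host.pub")
	if err != nil {
		panic(err)
	}
	pub, _, pubCurve, err := cert.UnmarshalPublicKeyFromPEM(rawPub)
	if err != nil {
		panic(err)
	}

	// Both halves should report the same curve: 25519 by default, P256 when
	// keygen was run with -curve P256.
	fmt.Printf("private: %d bytes (%v), public: %d bytes (%v)\n", len(priv), keyCurve, len(pub), pubCurve)
}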


@@ -2,16 +2,14 @@ package main
import (
"bytes"
"io/ioutil"
"os"
"testing"
"github.com/slackhq/nebula/cert"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
//TODO: test file permissions
func Test_keygenSummary(t *testing.T) {
assert.Equal(t, "keygen <flags>: create a public/private key pair. the public key can be passed to `nebula-cert sign`", keygenSummary())
}
@@ -22,10 +20,13 @@ func Test_keygenHelp(t *testing.T) {
assert.Equal(
t,
"Usage of "+os.Args[0]+" keygen <flags>: create a public/private key pair. the public key can be passed to `nebula-cert sign`\n"+
" -curve string\n"+
" \tECDH Curve (25519, P256) (default \"25519\")\n"+
" -out-key string\n"+
" \tRequired: path to write the private key to\n"+
" -out-pub string\n"+
" \tRequired: path to write the public key to\n",
" \tRequired: path to write the public key to\n"+
optionalPkcs11String(" -pkcs11 string\n \tOptional: PKCS#11 URI to an existing private key\n"),
ob.String(),
)
}
@@ -36,57 +37,59 @@ func Test_keygen(t *testing.T) {
// required args
assertHelpError(t, keygen([]string{"-out-pub", "nope"}, ob, eb), "-out-key is required")
assert.Equal(t, "", ob.String())
assert.Equal(t, "", eb.String())
assert.Empty(t, ob.String())
assert.Empty(t, eb.String())
assertHelpError(t, keygen([]string{"-out-key", "nope"}, ob, eb), "-out-pub is required")
assert.Equal(t, "", ob.String())
assert.Equal(t, "", eb.String())
assert.Empty(t, ob.String())
assert.Empty(t, eb.String())
// failed key write
ob.Reset()
eb.Reset()
args := []string{"-out-pub", "/do/not/write/pleasepub", "-out-key", "/do/not/write/pleasekey"}
assert.EqualError(t, keygen(args, ob, eb), "error while writing out-key: open /do/not/write/pleasekey: "+NoSuchDirError)
assert.Equal(t, "", ob.String())
assert.Equal(t, "", eb.String())
require.EqualError(t, keygen(args, ob, eb), "error while writing out-key: open /do/not/write/pleasekey: "+NoSuchDirError)
assert.Empty(t, ob.String())
assert.Empty(t, eb.String())
// create temp key file
keyF, err := ioutil.TempFile("", "test.key")
assert.Nil(t, err)
keyF, err := os.CreateTemp("", "test.key")
require.NoError(t, err)
defer os.Remove(keyF.Name())
// failed pub write
ob.Reset()
eb.Reset()
args = []string{"-out-pub", "/do/not/write/pleasepub", "-out-key", keyF.Name()}
assert.EqualError(t, keygen(args, ob, eb), "error while writing out-pub: open /do/not/write/pleasepub: "+NoSuchDirError)
assert.Equal(t, "", ob.String())
assert.Equal(t, "", eb.String())
require.EqualError(t, keygen(args, ob, eb), "error while writing out-pub: open /do/not/write/pleasepub: "+NoSuchDirError)
assert.Empty(t, ob.String())
assert.Empty(t, eb.String())
// create temp pub file
pubF, err := ioutil.TempFile("", "test.pub")
assert.Nil(t, err)
pubF, err := os.CreateTemp("", "test.pub")
require.NoError(t, err)
defer os.Remove(pubF.Name())
// test proper keygen
ob.Reset()
eb.Reset()
args = []string{"-out-pub", pubF.Name(), "-out-key", keyF.Name()}
assert.Nil(t, keygen(args, ob, eb))
assert.Equal(t, "", ob.String())
assert.Equal(t, "", eb.String())
require.NoError(t, keygen(args, ob, eb))
assert.Empty(t, ob.String())
assert.Empty(t, eb.String())
// read cert and key files
rb, _ := ioutil.ReadFile(keyF.Name())
lKey, b, err := cert.UnmarshalX25519PrivateKey(rb)
assert.Len(t, b, 0)
assert.Nil(t, err)
rb, _ := os.ReadFile(keyF.Name())
lKey, b, curve, err := cert.UnmarshalPrivateKeyFromPEM(rb)
assert.Equal(t, cert.Curve_CURVE25519, curve)
assert.Empty(t, b)
require.NoError(t, err)
assert.Len(t, lKey, 32)
rb, _ = ioutil.ReadFile(pubF.Name())
lPub, b, err := cert.UnmarshalX25519PublicKey(rb)
assert.Len(t, b, 0)
assert.Nil(t, err)
rb, _ = os.ReadFile(pubF.Name())
lPub, b, curve, err := cert.UnmarshalPublicKeyFromPEM(rb)
assert.Equal(t, cert.Curve_CURVE25519, curve)
assert.Empty(t, b)
require.NoError(t, err)
assert.Len(t, lPub, 32)
}


@@ -17,7 +17,7 @@ func (he *helpError) Error() string {
return he.s
}
func newHelpErrorf(s string, v ...interface{}) error {
func newHelpErrorf(s string, v ...any) error {
return &helpError{s: fmt.Sprintf(s, v...)}
}
@@ -62,11 +62,11 @@ func main() {
switch args[0] {
case "ca":
err = ca(args[1:], os.Stdout, os.Stderr)
err = ca(args[1:], os.Stdout, os.Stderr, StdinPasswordReader{})
case "keygen":
err = keygen(args[1:], os.Stdout, os.Stderr)
case "sign":
err = signCert(args[1:], os.Stdout, os.Stderr)
err = signCert(args[1:], os.Stdout, os.Stderr, StdinPasswordReader{})
case "print":
err = printCert(args[1:], os.Stdout, os.Stderr)
case "verify":
@@ -127,6 +127,8 @@ func help(err string, out io.Writer) {
fmt.Fprintln(out, " "+signSummary())
fmt.Fprintln(out, " "+printSummary())
fmt.Fprintln(out, " "+verifySummary())
fmt.Fprintln(out, "")
fmt.Fprintf(out, " To see usage for a given mode, use %s <mode> -h\n", os.Args[0])
}
func mustFlagString(name string, val *string) error {


@@ -3,13 +3,14 @@ package main
import (
"bytes"
"errors"
"github.com/stretchr/testify/assert"
"fmt"
"io"
"os"
"testing"
)
//TODO: all flag parsing continueOnError will print to stderr on its own currently
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func Test_help(t *testing.T) {
expected := "Usage of " + os.Args[0] + " <global flags> <mode>:\n" +
@@ -21,7 +22,9 @@ func Test_help(t *testing.T) {
" " + keygenSummary() + "\n" +
" " + signSummary() + "\n" +
" " + printSummary() + "\n" +
" " + verifySummary() + "\n"
" " + verifySummary() + "\n" +
"\n" +
" To see usage for a given mode, use " + os.Args[0] + " <mode> -h\n"
ob := &bytes.Buffer{}
@@ -74,8 +77,16 @@ func assertHelpError(t *testing.T, err error, msg string) {
case *helpError:
// good
default:
t.Fatal("err was not a helpError")
t.Fatal(fmt.Sprintf("err was not a helpError: %q, expected %q", err, msg))
}
assert.EqualError(t, err, msg)
require.EqualError(t, err, msg)
}
func optionalPkcs11String(msg string) string {
if p11Supported() {
return msg
} else {
return ""
}
}


@@ -0,0 +1,15 @@
//go:build cgo && pkcs11
package main
import (
"flag"
)
func p11Supported() bool {
return true
}
func p11Flag(set *flag.FlagSet) *string {
return set.String("pkcs11", "", "Optional: PKCS#11 URI to an existing private key")
}


@@ -0,0 +1,16 @@
//go:build !cgo || !pkcs11
package main
import (
"flag"
)
func p11Supported() bool {
return false
}
func p11Flag(set *flag.FlagSet) *string {
var ret = ""
return &ret
}


@@ -0,0 +1,28 @@
package main
import (
"errors"
"fmt"
"os"
"golang.org/x/term"
)
var ErrNoTerminal = errors.New("cannot read password from nonexistent terminal")
type PasswordReader interface {
ReadPassword() ([]byte, error)
}
type StdinPasswordReader struct{}
func (pr StdinPasswordReader) ReadPassword() ([]byte, error) {
if !term.IsTerminal(int(os.Stdin.Fd())) {
return nil, ErrNoTerminal
}
password, err := term.ReadPassword(int(os.Stdin.Fd()))
fmt.Println()
return password, err
}


@@ -0,0 +1,10 @@
package main
type StubPasswordReader struct {
password []byte
err error
}
func (pr *StubPasswordReader) ReadPassword() ([]byte, error) {
return pr.password, pr.err
}
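
The PasswordReader interface is the seam that lets the ca and sign tests run without a terminal: the CLI hands in StdinPasswordReader{}, the tests hand in a stub. A minimal sketch of that injection pattern in the same package (the retry count and prompt text are illustrative, not the exact CLI wiring):

package main

import (
	"errors"
	"fmt"
	"io"
)

// promptPassphrase keeps asking the supplied reader until it returns a
// non-empty passphrase or an error, mirroring the retry loops in ca/sign.
func promptPassphrase(out io.Writer, pr PasswordReader, attempts int) ([]byte, error) {
	for i := 0; i < attempts; i++ {
		fmt.Fprint(out, "Enter passphrase: ")
		pw, err := pr.ReadPassword()
		if err != nil {
			return nil, err
		}
		if len(pw) > 0 {
			return pw, nil
		}
	}
	return nil, errors.New("no passphrase entered")
}

A test can call this with &StubPasswordReader{password: []byte("secret")}, while production code passes StdinPasswordReader{}; only that one implementation ever touches the terminal.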


@@ -4,23 +4,26 @@ import (
"encoding/json"
"flag"
"fmt"
"github.com/slackhq/nebula/cert"
"io"
"io/ioutil"
"os"
"strings"
"github.com/skip2/go-qrcode"
"github.com/slackhq/nebula/cert"
)
type printFlags struct {
set *flag.FlagSet
json *bool
path *string
set *flag.FlagSet
json *bool
outQRPath *string
path *string
}
func newPrintFlags() *printFlags {
pf := printFlags{set: flag.NewFlagSet("print", flag.ContinueOnError)}
pf.set.Usage = func() {}
pf.json = pf.set.Bool("json", false, "Optional: outputs certificates in json format")
pf.outQRPath = pf.set.String("out-qr", "", "Optional: output a qr code image (png) of the certificate")
pf.path = pf.set.String("path", "", "Required: path to the certificate")
return &pf
@@ -37,32 +40,61 @@ func printCert(args []string, out io.Writer, errOut io.Writer) error {
return err
}
rawCert, err := ioutil.ReadFile(*pf.path)
rawCert, err := os.ReadFile(*pf.path)
if err != nil {
return fmt.Errorf("unable to read cert; %s", err)
}
var c *cert.NebulaCertificate
var c cert.Certificate
var qrBytes []byte
part := 0
var jsonCerts []cert.Certificate
for {
c, rawCert, err = cert.UnmarshalNebulaCertificateFromPEM(rawCert)
c, rawCert, err = cert.UnmarshalCertificateFromPEM(rawCert)
if err != nil {
return fmt.Errorf("error while unmarshaling cert: %s", err)
}
if *pf.json {
b, _ := json.Marshal(c)
out.Write(b)
out.Write([]byte("\n"))
jsonCerts = append(jsonCerts, c)
} else {
out.Write([]byte(c.String()))
out.Write([]byte("\n"))
_, _ = out.Write([]byte(c.String()))
_, _ = out.Write([]byte("\n"))
}
if *pf.outQRPath != "" {
b, err := c.MarshalPEM()
if err != nil {
return fmt.Errorf("error while marshalling cert to PEM: %s", err)
}
qrBytes = append(qrBytes, b...)
}
if rawCert == nil || len(rawCert) == 0 || strings.TrimSpace(string(rawCert)) == "" {
break
}
part++
}
if *pf.json {
b, _ := json.Marshal(jsonCerts)
_, _ = out.Write(b)
_, _ = out.Write([]byte("\n"))
}
if *pf.outQRPath != "" {
b, err := qrcode.Encode(string(qrBytes), qrcode.Medium, -5)
if err != nil {
return fmt.Errorf("error while generating qr code: %s", err)
}
err = os.WriteFile(*pf.outQRPath, b, 0600)
if err != nil {
return fmt.Errorf("error while writing out-qr: %s", err)
}
}
return nil


@@ -2,12 +2,17 @@ package main
import (
"bytes"
"github.com/slackhq/nebula/cert"
"github.com/stretchr/testify/assert"
"io/ioutil"
"crypto/ed25519"
"crypto/rand"
"encoding/hex"
"net/netip"
"os"
"testing"
"time"
"github.com/slackhq/nebula/cert"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func Test_printSummary(t *testing.T) {
@@ -22,6 +27,8 @@ func Test_printHelp(t *testing.T) {
"Usage of "+os.Args[0]+" print <flags>: prints details about a certificate\n"+
" -json\n"+
" \tOptional: outputs certificates in json format\n"+
" -out-qr string\n"+
" \tOptional: output a qr code image (png) of the certificate\n"+
" -path string\n"+
" \tRequired: path to the certificate\n",
ob.String(),
@@ -36,84 +43,203 @@ func Test_printCert(t *testing.T) {
// no path
err := printCert([]string{}, ob, eb)
assert.Equal(t, "", ob.String())
assert.Equal(t, "", eb.String())
assert.Empty(t, ob.String())
assert.Empty(t, eb.String())
assertHelpError(t, err, "-path is required")
// no cert at path
ob.Reset()
eb.Reset()
err = printCert([]string{"-path", "does_not_exist"}, ob, eb)
assert.Equal(t, "", ob.String())
assert.Equal(t, "", eb.String())
assert.EqualError(t, err, "unable to read cert; open does_not_exist: "+NoSuchFileError)
assert.Empty(t, ob.String())
assert.Empty(t, eb.String())
require.EqualError(t, err, "unable to read cert; open does_not_exist: "+NoSuchFileError)
// invalid cert at path
ob.Reset()
eb.Reset()
tf, err := ioutil.TempFile("", "print-cert")
assert.Nil(t, err)
tf, err := os.CreateTemp("", "print-cert")
require.NoError(t, err)
defer os.Remove(tf.Name())
tf.WriteString("-----BEGIN NOPE-----")
err = printCert([]string{"-path", tf.Name()}, ob, eb)
assert.Equal(t, "", ob.String())
assert.Equal(t, "", eb.String())
assert.EqualError(t, err, "error while unmarshaling cert: input did not contain a valid PEM encoded block")
assert.Empty(t, ob.String())
assert.Empty(t, eb.String())
require.EqualError(t, err, "error while unmarshaling cert: input did not contain a valid PEM encoded block")
// test multiple certs
ob.Reset()
eb.Reset()
tf.Truncate(0)
tf.Seek(0, 0)
c := cert.NebulaCertificate{
Details: cert.NebulaCertificateDetails{
Name: "test",
Groups: []string{"hi"},
PublicKey: []byte{1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 1, 2},
},
Signature: []byte{1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 1, 2},
}
ca, caKey := NewTestCaCert("test ca", nil, nil, time.Time{}, time.Time{}, nil, nil, nil)
c, _ := NewTestCert(ca, caKey, "test", time.Time{}, time.Time{}, []netip.Prefix{netip.MustParsePrefix("10.0.0.123/8")}, nil, []string{"hi"})
p, _ := c.MarshalToPEM()
p, _ := c.MarshalPEM()
tf.Write(p)
tf.Write(p)
tf.Write(p)
err = printCert([]string{"-path", tf.Name()}, ob, eb)
assert.Nil(t, err)
fp, _ := c.Fingerprint()
pk := hex.EncodeToString(c.PublicKey())
sig := hex.EncodeToString(c.Signature())
require.NoError(t, err)
assert.Equal(
t,
"NebulaCertificate {\n\tDetails {\n\t\tName: test\n\t\tIps: []\n\t\tSubnets: []\n\t\tGroups: [\n\t\t\t\"hi\"\n\t\t]\n\t\tNot before: 0001-01-01 00:00:00 +0000 UTC\n\t\tNot After: 0001-01-01 00:00:00 +0000 UTC\n\t\tIs CA: false\n\t\tIssuer: \n\t\tPublic key: 0102030405060708090001020304050607080900010203040506070809000102\n\t}\n\tFingerprint: cc3492c0e9c48f17547f5987ea807462ebb3451e622590a10bb3763c344c82bd\n\tSignature: 0102030405060708090001020304050607080900010203040506070809000102\n}\nNebulaCertificate {\n\tDetails {\n\t\tName: test\n\t\tIps: []\n\t\tSubnets: []\n\t\tGroups: [\n\t\t\t\"hi\"\n\t\t]\n\t\tNot before: 0001-01-01 00:00:00 +0000 UTC\n\t\tNot After: 0001-01-01 00:00:00 +0000 UTC\n\t\tIs CA: false\n\t\tIssuer: \n\t\tPublic key: 0102030405060708090001020304050607080900010203040506070809000102\n\t}\n\tFingerprint: cc3492c0e9c48f17547f5987ea807462ebb3451e622590a10bb3763c344c82bd\n\tSignature: 0102030405060708090001020304050607080900010203040506070809000102\n}\nNebulaCertificate {\n\tDetails {\n\t\tName: test\n\t\tIps: []\n\t\tSubnets: []\n\t\tGroups: [\n\t\t\t\"hi\"\n\t\t]\n\t\tNot before: 0001-01-01 00:00:00 +0000 UTC\n\t\tNot After: 0001-01-01 00:00:00 +0000 UTC\n\t\tIs CA: false\n\t\tIssuer: \n\t\tPublic key: 0102030405060708090001020304050607080900010203040506070809000102\n\t}\n\tFingerprint: cc3492c0e9c48f17547f5987ea807462ebb3451e622590a10bb3763c344c82bd\n\tSignature: 0102030405060708090001020304050607080900010203040506070809000102\n}\n",
//"NebulaCertificate {\n\tDetails {\n\t\tName: test\n\t\tIps: []\n\t\tSubnets: []\n\t\tGroups: [\n\t\t\t\"hi\"\n\t\t]\n\t\tNot before: 0001-01-01 00:00:00 +0000 UTC\n\t\tNot After: 0001-01-01 00:00:00 +0000 UTC\n\t\tIs CA: false\n\t\tIssuer: "+c.Issuer()+"\n\t\tPublic key: "+pk+"\n\t\tCurve: CURVE25519\n\t}\n\tFingerprint: "+fp+"\n\tSignature: "+sig+"\n}\nNebulaCertificate {\n\tDetails {\n\t\tName: test\n\t\tIps: []\n\t\tSubnets: []\n\t\tGroups: [\n\t\t\t\"hi\"\n\t\t]\n\t\tNot before: 0001-01-01 00:00:00 +0000 UTC\n\t\tNot After: 0001-01-01 00:00:00 +0000 UTC\n\t\tIs CA: false\n\t\tIssuer: "+c.Issuer()+"\n\t\tPublic key: "+pk+"\n\t\tCurve: CURVE25519\n\t}\n\tFingerprint: "+fp+"\n\tSignature: "+sig+"\n}\nNebulaCertificate {\n\tDetails {\n\t\tName: test\n\t\tIps: []\n\t\tSubnets: []\n\t\tGroups: [\n\t\t\t\"hi\"\n\t\t]\n\t\tNot before: 0001-01-01 00:00:00 +0000 UTC\n\t\tNot After: 0001-01-01 00:00:00 +0000 UTC\n\t\tIs CA: false\n\t\tIssuer: "+c.Issuer()+"\n\t\tPublic key: "+pk+"\n\t\tCurve: CURVE25519\n\t}\n\tFingerprint: "+fp+"\n\tSignature: "+sig+"\n}\n",
`{
"details": {
"curve": "CURVE25519",
"groups": [
"hi"
],
"isCa": false,
"issuer": "`+c.Issuer()+`",
"name": "test",
"networks": [
"10.0.0.123/8"
],
"notAfter": "0001-01-01T00:00:00Z",
"notBefore": "0001-01-01T00:00:00Z",
"publicKey": "`+pk+`",
"unsafeNetworks": []
},
"fingerprint": "`+fp+`",
"signature": "`+sig+`",
"version": 1
}
{
"details": {
"curve": "CURVE25519",
"groups": [
"hi"
],
"isCa": false,
"issuer": "`+c.Issuer()+`",
"name": "test",
"networks": [
"10.0.0.123/8"
],
"notAfter": "0001-01-01T00:00:00Z",
"notBefore": "0001-01-01T00:00:00Z",
"publicKey": "`+pk+`",
"unsafeNetworks": []
},
"fingerprint": "`+fp+`",
"signature": "`+sig+`",
"version": 1
}
{
"details": {
"curve": "CURVE25519",
"groups": [
"hi"
],
"isCa": false,
"issuer": "`+c.Issuer()+`",
"name": "test",
"networks": [
"10.0.0.123/8"
],
"notAfter": "0001-01-01T00:00:00Z",
"notBefore": "0001-01-01T00:00:00Z",
"publicKey": "`+pk+`",
"unsafeNetworks": []
},
"fingerprint": "`+fp+`",
"signature": "`+sig+`",
"version": 1
}
`,
ob.String(),
)
assert.Equal(t, "", eb.String())
assert.Empty(t, eb.String())
// test json
ob.Reset()
eb.Reset()
tf.Truncate(0)
tf.Seek(0, 0)
c = cert.NebulaCertificate{
Details: cert.NebulaCertificateDetails{
Name: "test",
Groups: []string{"hi"},
PublicKey: []byte{1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 1, 2},
},
Signature: []byte{1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 1, 2},
}
p, _ = c.MarshalToPEM()
tf.Write(p)
tf.Write(p)
tf.Write(p)
err = printCert([]string{"-json", "-path", tf.Name()}, ob, eb)
assert.Nil(t, err)
fp, _ = c.Fingerprint()
pk = hex.EncodeToString(c.PublicKey())
sig = hex.EncodeToString(c.Signature())
require.NoError(t, err)
assert.Equal(
t,
"{\"details\":{\"groups\":[\"hi\"],\"ips\":[],\"isCa\":false,\"issuer\":\"\",\"name\":\"test\",\"notAfter\":\"0001-01-01T00:00:00Z\",\"notBefore\":\"0001-01-01T00:00:00Z\",\"publicKey\":\"0102030405060708090001020304050607080900010203040506070809000102\",\"subnets\":[]},\"fingerprint\":\"cc3492c0e9c48f17547f5987ea807462ebb3451e622590a10bb3763c344c82bd\",\"signature\":\"0102030405060708090001020304050607080900010203040506070809000102\"}\n{\"details\":{\"groups\":[\"hi\"],\"ips\":[],\"isCa\":false,\"issuer\":\"\",\"name\":\"test\",\"notAfter\":\"0001-01-01T00:00:00Z\",\"notBefore\":\"0001-01-01T00:00:00Z\",\"publicKey\":\"0102030405060708090001020304050607080900010203040506070809000102\",\"subnets\":[]},\"fingerprint\":\"cc3492c0e9c48f17547f5987ea807462ebb3451e622590a10bb3763c344c82bd\",\"signature\":\"0102030405060708090001020304050607080900010203040506070809000102\"}\n{\"details\":{\"groups\":[\"hi\"],\"ips\":[],\"isCa\":false,\"issuer\":\"\",\"name\":\"test\",\"notAfter\":\"0001-01-01T00:00:00Z\",\"notBefore\":\"0001-01-01T00:00:00Z\",\"publicKey\":\"0102030405060708090001020304050607080900010203040506070809000102\",\"subnets\":[]},\"fingerprint\":\"cc3492c0e9c48f17547f5987ea807462ebb3451e622590a10bb3763c344c82bd\",\"signature\":\"0102030405060708090001020304050607080900010203040506070809000102\"}\n",
`[{"details":{"curve":"CURVE25519","groups":["hi"],"isCa":false,"issuer":"`+c.Issuer()+`","name":"test","networks":["10.0.0.123/8"],"notAfter":"0001-01-01T00:00:00Z","notBefore":"0001-01-01T00:00:00Z","publicKey":"`+pk+`","unsafeNetworks":[]},"fingerprint":"`+fp+`","signature":"`+sig+`","version":1},{"details":{"curve":"CURVE25519","groups":["hi"],"isCa":false,"issuer":"`+c.Issuer()+`","name":"test","networks":["10.0.0.123/8"],"notAfter":"0001-01-01T00:00:00Z","notBefore":"0001-01-01T00:00:00Z","publicKey":"`+pk+`","unsafeNetworks":[]},"fingerprint":"`+fp+`","signature":"`+sig+`","version":1},{"details":{"curve":"CURVE25519","groups":["hi"],"isCa":false,"issuer":"`+c.Issuer()+`","name":"test","networks":["10.0.0.123/8"],"notAfter":"0001-01-01T00:00:00Z","notBefore":"0001-01-01T00:00:00Z","publicKey":"`+pk+`","unsafeNetworks":[]},"fingerprint":"`+fp+`","signature":"`+sig+`","version":1}]
`,
ob.String(),
)
assert.Equal(t, "", eb.String())
assert.Empty(t, eb.String())
}
// NewTestCaCert will generate a CA cert
func NewTestCaCert(name string, pubKey, privKey []byte, before, after time.Time, networks, unsafeNetworks []netip.Prefix, groups []string) (cert.Certificate, []byte) {
var err error
if pubKey == nil || privKey == nil {
pubKey, privKey, err = ed25519.GenerateKey(rand.Reader)
if err != nil {
panic(err)
}
}
t := &cert.TBSCertificate{
Version: cert.Version1,
Name: name,
NotBefore: time.Unix(before.Unix(), 0),
NotAfter: time.Unix(after.Unix(), 0),
PublicKey: pubKey,
Networks: networks,
UnsafeNetworks: unsafeNetworks,
Groups: groups,
IsCA: true,
}
c, err := t.Sign(nil, cert.Curve_CURVE25519, privKey)
if err != nil {
panic(err)
}
return c, privKey
}
func NewTestCert(ca cert.Certificate, signerKey []byte, name string, before, after time.Time, networks, unsafeNetworks []netip.Prefix, groups []string) (cert.Certificate, []byte) {
if before.IsZero() {
before = ca.NotBefore()
}
if after.IsZero() {
after = ca.NotAfter()
}
if len(networks) == 0 {
networks = []netip.Prefix{netip.MustParsePrefix("10.0.0.123/8")}
}
pub, rawPriv := x25519Keypair()
nc := &cert.TBSCertificate{
Version: cert.Version1,
Name: name,
Networks: networks,
UnsafeNetworks: unsafeNetworks,
Groups: groups,
NotBefore: time.Unix(before.Unix(), 0),
NotAfter: time.Unix(after.Unix(), 0),
PublicKey: pub,
IsCA: false,
}
c, err := nc.Sign(ca, ca.Curve(), signerKey)
if err != nil {
panic(err)
}
return c, rawPriv
}
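
As a quick sanity check of the two helpers above, they can be chained inside this test package: mint a CA, sign a leaf with its key, and confirm the leaf verifies against the CA. The names and lifetimes below are arbitrary; this is a sketch rather than an existing test:

func Test_testCertHelpersChain(t *testing.T) {
	// Throwaway CA, then a leaf certificate signed by that CA's private key.
	ca, caKey := NewTestCaCert("example ca", nil, nil, time.Now(), time.Now().Add(time.Hour), nil, nil, nil)
	leaf, _ := NewTestCert(ca, caKey, "example host", time.Time{}, time.Time{}, nil, nil, []string{"hi"})

	// The leaf should carry the CA fingerprint as its issuer and verify
	// against the CA certificate's public key.
	fp, _ := ca.Fingerprint()
	assert.Equal(t, fp, leaf.Issuer())
	assert.True(t, leaf.CheckSignature(ca.PublicKey()))
}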


@@ -1,60 +1,80 @@
package main
import (
"crypto/ecdh"
"crypto/rand"
"errors"
"flag"
"fmt"
"io"
"io/ioutil"
"net"
"net/netip"
"os"
"strings"
"time"
"github.com/skip2/go-qrcode"
"github.com/slackhq/nebula/cert"
"github.com/slackhq/nebula/pkclient"
"golang.org/x/crypto/curve25519"
)
type signFlags struct {
set *flag.FlagSet
caKeyPath *string
caCertPath *string
name *string
ip *string
duration *time.Duration
inPubPath *string
outKeyPath *string
outCertPath *string
groups *string
subnets *string
set *flag.FlagSet
version *uint
caKeyPath *string
caCertPath *string
name *string
networks *string
unsafeNetworks *string
duration *time.Duration
inPubPath *string
outKeyPath *string
outCertPath *string
outQRPath *string
groups *string
p11url *string
// Deprecated options
ip *string
subnets *string
}
func newSignFlags() *signFlags {
sf := signFlags{set: flag.NewFlagSet("sign", flag.ContinueOnError)}
sf.set.Usage = func() {}
sf.version = sf.set.Uint("version", 0, "Optional: version of the certificate format to use, the default is to create both v1 and v2 certificates.")
sf.caKeyPath = sf.set.String("ca-key", "ca.key", "Optional: path to the signing CA key")
sf.caCertPath = sf.set.String("ca-crt", "ca.crt", "Optional: path to the signing CA cert")
sf.name = sf.set.String("name", "", "Required: name of the cert, usually a hostname")
sf.ip = sf.set.String("ip", "", "Required: ip and network in CIDR notation to assign the cert")
sf.networks = sf.set.String("networks", "", "Required: comma separated list of ip address and network in CIDR notation to assign to this cert")
sf.unsafeNetworks = sf.set.String("unsafe-networks", "", "Optional: comma separated list of ip address and network in CIDR notation. Unsafe networks this cert can route for")
sf.duration = sf.set.Duration("duration", 0, "Optional: how long the cert should be valid for. The default is 1 second before the signing cert expires. Valid time units are seconds: \"s\", minutes: \"m\", hours: \"h\"")
sf.inPubPath = sf.set.String("in-pub", "", "Optional (if out-key not set): path to read a previously generated public key")
sf.outKeyPath = sf.set.String("out-key", "", "Optional (if in-pub not set): path to write the private key to")
sf.outCertPath = sf.set.String("out-crt", "", "Optional: path to write the certificate to")
sf.outQRPath = sf.set.String("out-qr", "", "Optional: output a qr code image (png) of the certificate")
sf.groups = sf.set.String("groups", "", "Optional: comma separated list of groups")
sf.subnets = sf.set.String("subnets", "", "Optional: comma seperated list of subnet this cert can serve for")
return &sf
sf.p11url = p11Flag(sf.set)
sf.ip = sf.set.String("ip", "", "Deprecated, see -networks")
sf.subnets = sf.set.String("subnets", "", "Deprecated, see -unsafe-networks")
return &sf
}
func signCert(args []string, out io.Writer, errOut io.Writer) error {
func signCert(args []string, out io.Writer, errOut io.Writer, pr PasswordReader) error {
sf := newSignFlags()
err := sf.set.Parse(args)
if err != nil {
return err
}
if err := mustFlagString("ca-key", sf.caKeyPath); err != nil {
return err
isP11 := len(*sf.p11url) > 0
if !isP11 {
if err := mustFlagString("ca-key", sf.caKeyPath); err != nil {
return err
}
}
if err := mustFlagString("ca-crt", sf.caCertPath); err != nil {
return err
@@ -62,36 +82,83 @@ func signCert(args []string, out io.Writer, errOut io.Writer) error {
if err := mustFlagString("name", sf.name); err != nil {
return err
}
if err := mustFlagString("ip", sf.ip); err != nil {
return err
}
if *sf.inPubPath != "" && *sf.outKeyPath != "" {
if !isP11 && *sf.inPubPath != "" && *sf.outKeyPath != "" {
return newHelpErrorf("cannot set both -in-pub and -out-key")
}
rawCAKey, err := ioutil.ReadFile(*sf.caKeyPath)
if err != nil {
return fmt.Errorf("error while reading ca-key: %s", err)
var v4Networks []netip.Prefix
var v6Networks []netip.Prefix
if *sf.networks == "" && *sf.ip != "" {
// Pull up deprecated -ip flag if needed
*sf.networks = *sf.ip
}
caKey, _, err := cert.UnmarshalEd25519PrivateKey(rawCAKey)
if err != nil {
return fmt.Errorf("error while parsing ca-key: %s", err)
if len(*sf.networks) == 0 {
return newHelpErrorf("-networks is required")
}
rawCACert, err := ioutil.ReadFile(*sf.caCertPath)
version := cert.Version(*sf.version)
if version != 0 && version != cert.Version1 && version != cert.Version2 {
return newHelpErrorf("-version must be either %v or %v", cert.Version1, cert.Version2)
}
var curve cert.Curve
var caKey []byte
if !isP11 {
var rawCAKey []byte
rawCAKey, err := os.ReadFile(*sf.caKeyPath)
if err != nil {
return fmt.Errorf("error while reading ca-key: %s", err)
}
// naively attempt to decode the private key as though it is not encrypted
caKey, _, curve, err = cert.UnmarshalSigningPrivateKeyFromPEM(rawCAKey)
if errors.Is(err, cert.ErrPrivateKeyEncrypted) {
// ask for a passphrase until we get one
var passphrase []byte
for i := 0; i < 5; i++ {
out.Write([]byte("Enter passphrase: "))
passphrase, err = pr.ReadPassword()
if errors.Is(err, ErrNoTerminal) {
return fmt.Errorf("ca-key is encrypted and must be decrypted interactively")
} else if err != nil {
return fmt.Errorf("error reading password: %s", err)
}
if len(passphrase) > 0 {
break
}
}
if len(passphrase) == 0 {
return fmt.Errorf("cannot open encrypted ca-key without passphrase")
}
curve, caKey, _, err = cert.DecryptAndUnmarshalSigningPrivateKey(passphrase, rawCAKey)
if err != nil {
return fmt.Errorf("error while parsing encrypted ca-key: %s", err)
}
} else if err != nil {
return fmt.Errorf("error while parsing ca-key: %s", err)
}
}
rawCACert, err := os.ReadFile(*sf.caCertPath)
if err != nil {
return fmt.Errorf("error while reading ca-crt: %s", err)
}
caCert, _, err := cert.UnmarshalNebulaCertificateFromPEM(rawCACert)
caCert, _, err := cert.UnmarshalCertificateFromPEM(rawCACert)
if err != nil {
return fmt.Errorf("error while parsing ca-crt: %s", err)
}
issuer, err := caCert.Sha256Sum()
if err != nil {
return fmt.Errorf("error while getting -ca-crt fingerprint: %s", err)
if !isP11 {
if err := caCert.VerifyPrivateKey(curve, caKey); err != nil {
return fmt.Errorf("refusing to sign, root certificate does not match private key")
}
}
if caCert.Expired(time.Now()) {
@@ -100,16 +167,53 @@ func signCert(args []string, out io.Writer, errOut io.Writer) error {
// if no duration is given, expire one second before the root expires
if *sf.duration <= 0 {
*sf.duration = time.Until(caCert.Details.NotAfter) - time.Second*1
*sf.duration = time.Until(caCert.NotAfter()) - time.Second*1
}
ip, ipNet, err := net.ParseCIDR(*sf.ip)
if err != nil {
return newHelpErrorf("invalid ip definition: %s", err)
}
ipNet.IP = ip
if *sf.networks != "" {
for _, rs := range strings.Split(*sf.networks, ",") {
rs := strings.Trim(rs, " ")
if rs != "" {
n, err := netip.ParsePrefix(rs)
if err != nil {
return newHelpErrorf("invalid -networks definition: %s", rs)
}
groups := []string{}
if n.Addr().Is4() {
v4Networks = append(v4Networks, n)
} else {
v6Networks = append(v6Networks, n)
}
}
}
}
var v4UnsafeNetworks []netip.Prefix
var v6UnsafeNetworks []netip.Prefix
if *sf.unsafeNetworks == "" && *sf.subnets != "" {
// Pull up deprecated -subnets flag if needed
*sf.unsafeNetworks = *sf.subnets
}
if *sf.unsafeNetworks != "" {
for _, rs := range strings.Split(*sf.unsafeNetworks, ",") {
rs := strings.Trim(rs, " ")
if rs != "" {
n, err := netip.ParsePrefix(rs)
if err != nil {
return newHelpErrorf("invalid -unsafe-networks definition: %s", rs)
}
if n.Addr().Is4() {
v4UnsafeNetworks = append(v4UnsafeNetworks, n)
} else {
v6UnsafeNetworks = append(v6UnsafeNetworks, n)
}
}
}
}
var groups []string
if *sf.groups != "" {
for _, rg := range strings.Split(*sf.groups, ",") {
g := strings.TrimSpace(rg)
@@ -119,50 +223,41 @@ func signCert(args []string, out io.Writer, errOut io.Writer) error {
}
}
subnets := []*net.IPNet{}
if *sf.subnets != "" {
for _, rs := range strings.Split(*sf.subnets, ",") {
rs := strings.Trim(rs, " ")
if rs != "" {
_, s, err := net.ParseCIDR(rs)
if err != nil {
return newHelpErrorf("invalid subnet definition: %s", err)
}
subnets = append(subnets, s)
}
var pub, rawPriv []byte
var p11Client *pkclient.PKClient
if isP11 {
curve = cert.Curve_P256
p11Client, err = pkclient.FromUrl(*sf.p11url)
if err != nil {
return fmt.Errorf("error while creating PKCS#11 client: %w", err)
}
defer func(client *pkclient.PKClient) {
_ = client.Close()
}(p11Client)
}
var pub, rawPriv []byte
if *sf.inPubPath != "" {
rawPub, err := ioutil.ReadFile(*sf.inPubPath)
var pubCurve cert.Curve
rawPub, err := os.ReadFile(*sf.inPubPath)
if err != nil {
return fmt.Errorf("error while reading in-pub: %s", err)
}
pub, _, err = cert.UnmarshalX25519PublicKey(rawPub)
pub, _, pubCurve, err = cert.UnmarshalPublicKeyFromPEM(rawPub)
if err != nil {
return fmt.Errorf("error while parsing in-pub: %s", err)
}
if pubCurve != curve {
return fmt.Errorf("curve of in-pub does not match ca")
}
} else if isP11 {
pub, err = p11Client.GetPubKey()
if err != nil {
return fmt.Errorf("error while getting public key with PKCS#11: %w", err)
}
} else {
pub, rawPriv = x25519Keypair()
}
nc := cert.NebulaCertificate{
Details: cert.NebulaCertificateDetails{
Name: *sf.name,
Ips: []*net.IPNet{ipNet},
Groups: groups,
Subnets: subnets,
NotBefore: time.Now(),
NotAfter: time.Now().Add(*sf.duration),
PublicKey: pub,
IsCA: false,
Issuer: issuer,
},
}
if err := nc.CheckRootConstrains(caCert); err != nil {
return fmt.Errorf("refusing to sign, root certificate constraints violated: %s", err)
pub, rawPriv = newKeypair(curve)
}
if *sf.outKeyPath == "" {
@@ -177,42 +272,159 @@ func signCert(args []string, out io.Writer, errOut io.Writer) error {
return fmt.Errorf("refusing to overwrite existing cert: %s", *sf.outCertPath)
}
err = nc.Sign(caKey)
if err != nil {
return fmt.Errorf("error while signing: %s", err)
var crts []cert.Certificate
notBefore := time.Now()
notAfter := notBefore.Add(*sf.duration)
if version == 0 || version == cert.Version1 {
// Make sure we at least have an ip
if len(v4Networks) != 1 {
return newHelpErrorf("invalid -networks definition: v1 certificates can only have a single ipv4 address")
}
if version == cert.Version1 {
// If we are asked to mint a v1 certificate only then we can't just ignore any v6 addresses
if len(v6Networks) > 0 {
return newHelpErrorf("invalid -networks definition: v1 certificates can only be ipv4")
}
if len(v6UnsafeNetworks) > 0 {
return newHelpErrorf("invalid -unsafe-networks definition: v1 certificates can only be ipv4")
}
}
t := &cert.TBSCertificate{
Version: cert.Version1,
Name: *sf.name,
Networks: []netip.Prefix{v4Networks[0]},
Groups: groups,
UnsafeNetworks: v4UnsafeNetworks,
NotBefore: notBefore,
NotAfter: notAfter,
PublicKey: pub,
IsCA: false,
Curve: curve,
}
var nc cert.Certificate
if p11Client == nil {
nc, err = t.Sign(caCert, curve, caKey)
if err != nil {
return fmt.Errorf("error while signing: %w", err)
}
} else {
nc, err = t.SignWith(caCert, curve, p11Client.SignASN1)
if err != nil {
return fmt.Errorf("error while signing with PKCS#11: %w", err)
}
}
crts = append(crts, nc)
}
if *sf.inPubPath == "" {
if version == 0 || version == cert.Version2 {
t := &cert.TBSCertificate{
Version: cert.Version2,
Name: *sf.name,
Networks: append(v4Networks, v6Networks...),
Groups: groups,
UnsafeNetworks: append(v4UnsafeNetworks, v6UnsafeNetworks...),
NotBefore: notBefore,
NotAfter: notAfter,
PublicKey: pub,
IsCA: false,
Curve: curve,
}
var nc cert.Certificate
if p11Client == nil {
nc, err = t.Sign(caCert, curve, caKey)
if err != nil {
return fmt.Errorf("error while signing: %w", err)
}
} else {
nc, err = t.SignWith(caCert, curve, p11Client.SignASN1)
if err != nil {
return fmt.Errorf("error while signing with PKCS#11: %w", err)
}
}
crts = append(crts, nc)
}
if !isP11 && *sf.inPubPath == "" {
if _, err := os.Stat(*sf.outKeyPath); err == nil {
return fmt.Errorf("refusing to overwrite existing key: %s", *sf.outKeyPath)
}
err = ioutil.WriteFile(*sf.outKeyPath, cert.MarshalX25519PrivateKey(rawPriv), 0600)
err = os.WriteFile(*sf.outKeyPath, cert.MarshalPrivateKeyToPEM(curve, rawPriv), 0600)
if err != nil {
return fmt.Errorf("error while writing out-key: %s", err)
}
}
b, err := nc.MarshalToPEM()
if err != nil {
return fmt.Errorf("error while marshalling certificate: %s", err)
var b []byte
for _, c := range crts {
sb, err := c.MarshalPEM()
if err != nil {
return fmt.Errorf("error while marshalling certificate: %s", err)
}
b = append(b, sb...)
}
err = ioutil.WriteFile(*sf.outCertPath, b, 0600)
err = os.WriteFile(*sf.outCertPath, b, 0600)
if err != nil {
return fmt.Errorf("error while writing out-crt: %s", err)
}
if *sf.outQRPath != "" {
b, err = qrcode.Encode(string(b), qrcode.Medium, -5)
if err != nil {
return fmt.Errorf("error while generating qr code: %s", err)
}
err = os.WriteFile(*sf.outQRPath, b, 0600)
if err != nil {
return fmt.Errorf("error while writing out-qr: %s", err)
}
}
return nil
}
func newKeypair(curve cert.Curve) ([]byte, []byte) {
switch curve {
case cert.Curve_CURVE25519:
return x25519Keypair()
case cert.Curve_P256:
return p256Keypair()
default:
return nil, nil
}
}
func x25519Keypair() ([]byte, []byte) {
var pubkey, privkey [32]byte
if _, err := io.ReadFull(rand.Reader, privkey[:]); err != nil {
privkey := make([]byte, 32)
if _, err := io.ReadFull(rand.Reader, privkey); err != nil {
panic(err)
}
curve25519.ScalarBaseMult(&pubkey, &privkey)
return pubkey[:], privkey[:]
pubkey, err := curve25519.X25519(privkey, curve25519.Basepoint)
if err != nil {
panic(err)
}
return pubkey, privkey
}
func p256Keypair() ([]byte, []byte) {
privkey, err := ecdh.P256().GenerateKey(rand.Reader)
if err != nil {
panic(err)
}
pubkey := privkey.PublicKey()
return pubkey.Bytes(), privkey.Bytes()
}
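
Both keypair helpers could also be expressed through crypto/ecdh, which covers X25519 as well as P-256. This is only an alternative sketch for comparison, not part of the change; the imports it needs (crypto/ecdh, crypto/rand, fmt) already appear in this file:

// ecdhKeypair generates a keypair for either supported curve via crypto/ecdh,
// returning the same raw encodings as the helpers above.
func ecdhKeypair(c cert.Curve) (pub, priv []byte, err error) {
	var ec ecdh.Curve
	switch c {
	case cert.Curve_CURVE25519:
		ec = ecdh.X25519()
	case cert.Curve_P256:
		ec = ecdh.P256()
	default:
		return nil, nil, fmt.Errorf("unsupported curve: %v", c)
	}
	key, err := ec.GenerateKey(rand.Reader)
	if err != nil {
		return nil, nil, err
	}
	return key.PublicKey().Bytes(), key.Bytes(), nil
}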
func signSummary() string {


@@ -1,3 +1,4 @@
//go:build !windows
// +build !windows
package main
@@ -5,18 +6,17 @@ package main
import (
"bytes"
"crypto/rand"
"io/ioutil"
"errors"
"os"
"testing"
"time"
"github.com/slackhq/nebula/cert"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"golang.org/x/crypto/ed25519"
)
//TODO: test file permissions
func Test_signSummary(t *testing.T) {
assert.Equal(t, "sign <flags>: create and sign a certificate", signSummary())
}
@@ -38,15 +38,24 @@ func Test_signHelp(t *testing.T) {
" -in-pub string\n"+
" \tOptional (if out-key not set): path to read a previously generated public key\n"+
" -ip string\n"+
" \tRequired: ip and network in CIDR notation to assign the cert\n"+
" \tDeprecated, see -networks\n"+
" -name string\n"+
" \tRequired: name of the cert, usually a hostname\n"+
" -networks string\n"+
" \tRequired: comma separated list of ip address and network in CIDR notation to assign to this cert\n"+
" -out-crt string\n"+
" \tOptional: path to write the certificate to\n"+
" -out-key string\n"+
" \tOptional (if in-pub not set): path to write the private key to\n"+
" -out-qr string\n"+
" \tOptional: output a qr code image (png) of the certificate\n"+
optionalPkcs11String(" -pkcs11 string\n \tOptional: PKCS#11 URI to an existing private key\n")+
" -subnets string\n"+
" \tOptional: comma seperated list of subnet this cert can serve for\n",
" \tDeprecated, see -unsafe-networks\n"+
" -unsafe-networks string\n"+
" \tOptional: comma separated list of ip address and network in CIDR notation. Unsafe networks this cert can route for\n"+
" -version uint\n"+
" \tOptional: version of the certificate format to use, the default is to create both v1 and v2 certificates.\n",
ob.String(),
)
}
@@ -55,36 +64,57 @@ func Test_signCert(t *testing.T) {
ob := &bytes.Buffer{}
eb := &bytes.Buffer{}
// required args
nopw := &StubPasswordReader{
password: []byte(""),
err: nil,
}
assertHelpError(t, signCert([]string{"-ca-crt", "./nope", "-ca-key", "./nope", "-ip", "1.1.1.1/24", "-out-key", "nope", "-out-crt", "nope"}, ob, eb), "-name is required")
errpw := &StubPasswordReader{
password: []byte(""),
err: errors.New("stub error"),
}
passphrase := []byte("DO NOT USE THIS KEY")
testpw := &StubPasswordReader{
password: passphrase,
err: nil,
}
// required args
assertHelpError(t, signCert(
[]string{"-version", "1", "-ca-crt", "./nope", "-ca-key", "./nope", "-ip", "1.1.1.1/24", "-out-key", "nope", "-out-crt", "nope"}, ob, eb, nopw,
), "-name is required")
assert.Empty(t, ob.String())
assert.Empty(t, eb.String())
assertHelpError(t, signCert([]string{"-ca-crt", "./nope", "-ca-key", "./nope", "-name", "test", "-out-key", "nope", "-out-crt", "nope"}, ob, eb), "-ip is required")
assertHelpError(t, signCert(
[]string{"-version", "1", "-ca-crt", "./nope", "-ca-key", "./nope", "-name", "test", "-out-key", "nope", "-out-crt", "nope"}, ob, eb, nopw,
), "-networks is required")
assert.Empty(t, ob.String())
assert.Empty(t, eb.String())
// cannot set -in-pub and -out-key
assertHelpError(t, signCert([]string{"-ca-crt", "./nope", "-ca-key", "./nope", "-name", "test", "-in-pub", "nope", "-ip", "1.1.1.1/24", "-out-crt", "nope", "-out-key", "nope"}, ob, eb), "cannot set both -in-pub and -out-key")
assertHelpError(t, signCert(
[]string{"-version", "1", "-ca-crt", "./nope", "-ca-key", "./nope", "-name", "test", "-in-pub", "nope", "-ip", "1.1.1.1/24", "-out-crt", "nope", "-out-key", "nope"}, ob, eb, nopw,
), "cannot set both -in-pub and -out-key")
assert.Empty(t, ob.String())
assert.Empty(t, eb.String())
// failed to read key
ob.Reset()
eb.Reset()
args := []string{"-ca-crt", "./nope", "-ca-key", "./nope", "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", "nope", "-out-key", "nope", "-duration", "100m"}
assert.EqualError(t, signCert(args, ob, eb), "error while reading ca-key: open ./nope: "+NoSuchFileError)
args := []string{"-version", "1", "-ca-crt", "./nope", "-ca-key", "./nope", "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", "nope", "-out-key", "nope", "-duration", "100m"}
require.EqualError(t, signCert(args, ob, eb, nopw), "error while reading ca-key: open ./nope: "+NoSuchFileError)
// failed to unmarshal key
ob.Reset()
eb.Reset()
caKeyF, err := ioutil.TempFile("", "sign-cert.key")
assert.Nil(t, err)
caKeyF, err := os.CreateTemp("", "sign-cert.key")
require.NoError(t, err)
defer os.Remove(caKeyF.Name())
args = []string{"-ca-crt", "./nope", "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", "nope", "-out-key", "nope", "-duration", "100m"}
assert.EqualError(t, signCert(args, ob, eb), "error while parsing ca-key: input did not contain a valid PEM encoded block")
args = []string{"-version", "1", "-ca-crt", "./nope", "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", "nope", "-out-key", "nope", "-duration", "100m"}
require.EqualError(t, signCert(args, ob, eb, nopw), "error while parsing ca-key: input did not contain a valid PEM encoded block")
assert.Empty(t, ob.String())
assert.Empty(t, eb.String())
@@ -92,54 +122,46 @@ func Test_signCert(t *testing.T) {
ob.Reset()
eb.Reset()
caPub, caPriv, _ := ed25519.GenerateKey(rand.Reader)
caKeyF.Write(cert.MarshalEd25519PrivateKey(caPriv))
caKeyF.Write(cert.MarshalSigningPrivateKeyToPEM(cert.Curve_CURVE25519, caPriv))
// failed to read cert
args = []string{"-ca-crt", "./nope", "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", "nope", "-out-key", "nope", "-duration", "100m"}
assert.EqualError(t, signCert(args, ob, eb), "error while reading ca-crt: open ./nope: "+NoSuchFileError)
args = []string{"-version", "1", "-ca-crt", "./nope", "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", "nope", "-out-key", "nope", "-duration", "100m"}
require.EqualError(t, signCert(args, ob, eb, nopw), "error while reading ca-crt: open ./nope: "+NoSuchFileError)
assert.Empty(t, ob.String())
assert.Empty(t, eb.String())
// failed to unmarshal cert
ob.Reset()
eb.Reset()
caCrtF, err := ioutil.TempFile("", "sign-cert.crt")
assert.Nil(t, err)
caCrtF, err := os.CreateTemp("", "sign-cert.crt")
require.NoError(t, err)
defer os.Remove(caCrtF.Name())
args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", "nope", "-out-key", "nope", "-duration", "100m"}
assert.EqualError(t, signCert(args, ob, eb), "error while parsing ca-crt: input did not contain a valid PEM encoded block")
args = []string{"-version", "1", "-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", "nope", "-out-key", "nope", "-duration", "100m"}
require.EqualError(t, signCert(args, ob, eb, nopw), "error while parsing ca-crt: input did not contain a valid PEM encoded block")
assert.Empty(t, ob.String())
assert.Empty(t, eb.String())
// write a proper ca cert for later
ca := cert.NebulaCertificate{
Details: cert.NebulaCertificateDetails{
Name: "ca",
NotBefore: time.Now(),
NotAfter: time.Now().Add(time.Minute * 200),
PublicKey: caPub,
IsCA: true,
},
}
b, _ := ca.MarshalToPEM()
ca, _ := NewTestCaCert("ca", caPub, caPriv, time.Now(), time.Now().Add(time.Minute*200), nil, nil, nil)
b, _ := ca.MarshalPEM()
caCrtF.Write(b)
// failed to read pub
args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", "nope", "-in-pub", "./nope", "-duration", "100m"}
assert.EqualError(t, signCert(args, ob, eb), "error while reading in-pub: open ./nope: "+NoSuchFileError)
args = []string{"-version", "1", "-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", "nope", "-in-pub", "./nope", "-duration", "100m"}
require.EqualError(t, signCert(args, ob, eb, nopw), "error while reading in-pub: open ./nope: "+NoSuchFileError)
assert.Empty(t, ob.String())
assert.Empty(t, eb.String())
// failed to unmarshal pub
ob.Reset()
eb.Reset()
inPubF, err := ioutil.TempFile("", "in.pub")
assert.Nil(t, err)
inPubF, err := os.CreateTemp("", "in.pub")
require.NoError(t, err)
defer os.Remove(inPubF.Name())
args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", "nope", "-in-pub", inPubF.Name(), "-duration", "100m"}
assert.EqualError(t, signCert(args, ob, eb), "error while parsing in-pub: input did not contain a valid PEM encoded block")
args = []string{"-version", "1", "-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", "nope", "-in-pub", inPubF.Name(), "-duration", "100m"}
require.EqualError(t, signCert(args, ob, eb, nopw), "error while parsing in-pub: input did not contain a valid PEM encoded block")
assert.Empty(t, ob.String())
assert.Empty(t, eb.String())
@@ -147,88 +169,124 @@ func Test_signCert(t *testing.T) {
ob.Reset()
eb.Reset()
inPub, _ := x25519Keypair()
inPubF.Write(cert.MarshalX25519PublicKey(inPub))
inPubF.Write(cert.MarshalPublicKeyToPEM(cert.Curve_CURVE25519, inPub))
// bad ip cidr
ob.Reset()
eb.Reset()
args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "a1.1.1.1/24", "-out-crt", "nope", "-out-key", "nope", "-duration", "100m"}
assertHelpError(t, signCert(args, ob, eb), "invalid ip definition: invalid CIDR address: a1.1.1.1/24")
args = []string{"-version", "1", "-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "a1.1.1.1/24", "-out-crt", "nope", "-out-key", "nope", "-duration", "100m"}
assertHelpError(t, signCert(args, ob, eb, nopw), "invalid -networks definition: a1.1.1.1/24")
assert.Empty(t, ob.String())
assert.Empty(t, eb.String())
ob.Reset()
eb.Reset()
args = []string{"-version", "1", "-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "100::100/100", "-out-crt", "nope", "-out-key", "nope", "-duration", "100m"}
assertHelpError(t, signCert(args, ob, eb, nopw), "invalid -networks definition: v1 certificates can only have a single ipv4 address")
assert.Empty(t, ob.String())
assert.Empty(t, eb.String())
ob.Reset()
eb.Reset()
args = []string{"-version", "1", "-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24,1.1.1.2/24", "-out-crt", "nope", "-out-key", "nope", "-duration", "100m"}
assertHelpError(t, signCert(args, ob, eb, nopw), "invalid -networks definition: v1 certificates can only have a single ipv4 address")
assert.Empty(t, ob.String())
assert.Empty(t, eb.String())
// bad subnet cidr
ob.Reset()
eb.Reset()
args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", "nope", "-out-key", "nope", "-duration", "100m", "-subnets", "a"}
assertHelpError(t, signCert(args, ob, eb), "invalid subnet definition: invalid CIDR address: a")
args = []string{"-version", "1", "-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", "nope", "-out-key", "nope", "-duration", "100m", "-subnets", "a"}
assertHelpError(t, signCert(args, ob, eb, nopw), "invalid -unsafe-networks definition: a")
assert.Empty(t, ob.String())
assert.Empty(t, eb.String())
ob.Reset()
eb.Reset()
args = []string{"-version", "1", "-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", "nope", "-out-key", "nope", "-duration", "100m", "-subnets", "100::100/100"}
assertHelpError(t, signCert(args, ob, eb, nopw), "invalid -unsafe-networks definition: v1 certificates can only be ipv4")
assert.Empty(t, ob.String())
assert.Empty(t, eb.String())
// mismatched ca key
_, caPriv2, _ := ed25519.GenerateKey(rand.Reader)
caKeyF2, err := os.CreateTemp("", "sign-cert-2.key")
require.NoError(t, err)
defer os.Remove(caKeyF2.Name())
caKeyF2.Write(cert.MarshalSigningPrivateKeyToPEM(cert.Curve_CURVE25519, caPriv2))
ob.Reset()
eb.Reset()
args = []string{"-version", "1", "-ca-crt", caCrtF.Name(), "-ca-key", caKeyF2.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", "nope", "-out-key", "nope", "-duration", "100m", "-subnets", "a"}
require.EqualError(t, signCert(args, ob, eb, nopw), "refusing to sign, root certificate does not match private key")
assert.Empty(t, ob.String())
assert.Empty(t, eb.String())
// failed key write
ob.Reset()
eb.Reset()
args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", "/do/not/write/pleasecrt", "-out-key", "/do/not/write/pleasekey", "-duration", "100m", "-subnets", "10.1.1.1/32"}
assert.EqualError(t, signCert(args, ob, eb), "error while writing out-key: open /do/not/write/pleasekey: "+NoSuchDirError)
args = []string{"-version", "1", "-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", "/do/not/write/pleasecrt", "-out-key", "/do/not/write/pleasekey", "-duration", "100m", "-subnets", "10.1.1.1/32"}
require.EqualError(t, signCert(args, ob, eb, nopw), "error while writing out-key: open /do/not/write/pleasekey: "+NoSuchDirError)
assert.Empty(t, ob.String())
assert.Empty(t, eb.String())
// create temp key file
keyF, err := ioutil.TempFile("", "test.key")
assert.Nil(t, err)
keyF, err := os.CreateTemp("", "test.key")
require.NoError(t, err)
os.Remove(keyF.Name())
// failed cert write
ob.Reset()
eb.Reset()
args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", "/do/not/write/pleasecrt", "-out-key", keyF.Name(), "-duration", "100m", "-subnets", "10.1.1.1/32"}
assert.EqualError(t, signCert(args, ob, eb), "error while writing out-crt: open /do/not/write/pleasecrt: "+NoSuchDirError)
args = []string{"-version", "1", "-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", "/do/not/write/pleasecrt", "-out-key", keyF.Name(), "-duration", "100m", "-subnets", "10.1.1.1/32"}
require.EqualError(t, signCert(args, ob, eb, nopw), "error while writing out-crt: open /do/not/write/pleasecrt: "+NoSuchDirError)
assert.Empty(t, ob.String())
assert.Empty(t, eb.String())
os.Remove(keyF.Name())
// create temp cert file
crtF, err := ioutil.TempFile("", "test.crt")
assert.Nil(t, err)
crtF, err := os.CreateTemp("", "test.crt")
require.NoError(t, err)
os.Remove(crtF.Name())
// test proper cert with removed empty groups and subnets
ob.Reset()
eb.Reset()
args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", crtF.Name(), "-out-key", keyF.Name(), "-duration", "100m", "-subnets", "10.1.1.1/32, , 10.2.2.2/32 , , ,, 10.5.5.5/32", "-groups", "1,, 2 , ,,,3,4,5"}
assert.Nil(t, signCert(args, ob, eb))
args = []string{"-version", "1", "-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", crtF.Name(), "-out-key", keyF.Name(), "-duration", "100m", "-subnets", "10.1.1.1/32, , 10.2.2.2/32 , , ,, 10.5.5.5/32", "-groups", "1,, 2 , ,,,3,4,5"}
require.NoError(t, signCert(args, ob, eb, nopw))
assert.Empty(t, ob.String())
assert.Empty(t, eb.String())
// read cert and key files
rb, _ := ioutil.ReadFile(keyF.Name())
lKey, b, err := cert.UnmarshalX25519PrivateKey(rb)
assert.Len(t, b, 0)
assert.Nil(t, err)
rb, _ := os.ReadFile(keyF.Name())
lKey, b, curve, err := cert.UnmarshalPrivateKeyFromPEM(rb)
assert.Equal(t, cert.Curve_CURVE25519, curve)
assert.Empty(t, b)
require.NoError(t, err)
assert.Len(t, lKey, 32)
rb, _ = ioutil.ReadFile(crtF.Name())
lCrt, b, err := cert.UnmarshalNebulaCertificateFromPEM(rb)
assert.Len(t, b, 0)
assert.Nil(t, err)
rb, _ = os.ReadFile(crtF.Name())
lCrt, b, err := cert.UnmarshalCertificateFromPEM(rb)
assert.Empty(t, b)
require.NoError(t, err)
assert.Equal(t, "test", lCrt.Details.Name)
assert.Equal(t, "1.1.1.1/24", lCrt.Details.Ips[0].String())
assert.Len(t, lCrt.Details.Ips, 1)
assert.False(t, lCrt.Details.IsCA)
assert.Equal(t, []string{"1", "2", "3", "4", "5"}, lCrt.Details.Groups)
assert.Len(t, lCrt.Details.Subnets, 3)
assert.Len(t, lCrt.Details.PublicKey, 32)
assert.Equal(t, time.Duration(time.Minute*100), lCrt.Details.NotAfter.Sub(lCrt.Details.NotBefore))
assert.Equal(t, "test", lCrt.Name())
assert.Equal(t, "1.1.1.1/24", lCrt.Networks()[0].String())
assert.Len(t, lCrt.Networks(), 1)
assert.False(t, lCrt.IsCA())
assert.Equal(t, []string{"1", "2", "3", "4", "5"}, lCrt.Groups())
assert.Len(t, lCrt.UnsafeNetworks(), 3)
assert.Len(t, lCrt.PublicKey(), 32)
assert.Equal(t, time.Duration(time.Minute*100), lCrt.NotAfter().Sub(lCrt.NotBefore()))
sns := []string{}
for _, sn := range lCrt.Details.Subnets {
for _, sn := range lCrt.UnsafeNetworks() {
sns = append(sns, sn.String())
}
assert.Equal(t, []string{"10.1.1.1/32", "10.2.2.2/32", "10.5.5.5/32"}, sns)
issuer, _ := ca.Sha256Sum()
assert.Equal(t, issuer, lCrt.Details.Issuer)
issuer, _ := ca.Fingerprint()
assert.Equal(t, issuer, lCrt.Issuer())
assert.True(t, lCrt.CheckSignature(caPub))
@@ -237,54 +295,116 @@ func Test_signCert(t *testing.T) {
os.Remove(crtF.Name())
ob.Reset()
eb.Reset()
args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", crtF.Name(), "-in-pub", inPubF.Name(), "-duration", "100m", "-groups", "1"}
assert.Nil(t, signCert(args, ob, eb))
args = []string{"-version", "1", "-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", crtF.Name(), "-in-pub", inPubF.Name(), "-duration", "100m", "-groups", "1"}
require.NoError(t, signCert(args, ob, eb, nopw))
assert.Empty(t, ob.String())
assert.Empty(t, eb.String())
// read cert file and check pub key matches in-pub
rb, _ = ioutil.ReadFile(crtF.Name())
lCrt, b, err = cert.UnmarshalNebulaCertificateFromPEM(rb)
assert.Len(t, b, 0)
assert.Nil(t, err)
assert.Equal(t, lCrt.Details.PublicKey, inPub)
rb, _ = os.ReadFile(crtF.Name())
lCrt, b, err = cert.UnmarshalCertificateFromPEM(rb)
assert.Empty(t, b)
require.NoError(t, err)
assert.Equal(t, lCrt.PublicKey(), inPub)
// test refuse to sign cert with duration beyond root
ob.Reset()
eb.Reset()
args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", crtF.Name(), "-out-key", keyF.Name(), "-duration", "1000m", "-subnets", "10.1.1.1/32, , 10.2.2.2/32 , , ,, 10.5.5.5/32", "-groups", "1,, 2 , ,,,3,4,5"}
assert.EqualError(t, signCert(args, ob, eb), "refusing to sign, root certificate constraints violated: certificate expires after signing certificate")
os.Remove(keyF.Name())
os.Remove(crtF.Name())
args = []string{"-version", "1", "-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", crtF.Name(), "-out-key", keyF.Name(), "-duration", "1000m", "-subnets", "10.1.1.1/32, , 10.2.2.2/32 , , ,, 10.5.5.5/32", "-groups", "1,, 2 , ,,,3,4,5"}
require.EqualError(t, signCert(args, ob, eb, nopw), "error while signing: certificate expires after signing certificate")
assert.Empty(t, ob.String())
assert.Empty(t, eb.String())
// create valid cert/key for overwrite tests
os.Remove(keyF.Name())
os.Remove(crtF.Name())
args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", crtF.Name(), "-out-key", keyF.Name(), "-duration", "100m", "-subnets", "10.1.1.1/32, , 10.2.2.2/32 , , ,, 10.5.5.5/32", "-groups", "1,, 2 , ,,,3,4,5"}
assert.Nil(t, signCert(args, ob, eb))
args = []string{"-version", "1", "-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", crtF.Name(), "-out-key", keyF.Name(), "-duration", "100m", "-subnets", "10.1.1.1/32, , 10.2.2.2/32 , , ,, 10.5.5.5/32", "-groups", "1,, 2 , ,,,3,4,5"}
require.NoError(t, signCert(args, ob, eb, nopw))
// test that we won't overwrite existing key file
os.Remove(crtF.Name())
ob.Reset()
eb.Reset()
args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", crtF.Name(), "-out-key", keyF.Name(), "-duration", "100m", "-subnets", "10.1.1.1/32, , 10.2.2.2/32 , , ,, 10.5.5.5/32", "-groups", "1,, 2 , ,,,3,4,5"}
assert.EqualError(t, signCert(args, ob, eb), "refusing to overwrite existing key: "+keyF.Name())
args = []string{"-version", "1", "-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", crtF.Name(), "-out-key", keyF.Name(), "-duration", "100m", "-subnets", "10.1.1.1/32, , 10.2.2.2/32 , , ,, 10.5.5.5/32", "-groups", "1,, 2 , ,,,3,4,5"}
require.EqualError(t, signCert(args, ob, eb, nopw), "refusing to overwrite existing key: "+keyF.Name())
assert.Empty(t, ob.String())
assert.Empty(t, eb.String())
// create valid cert/key for overwrite tests
os.Remove(keyF.Name())
os.Remove(crtF.Name())
args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", crtF.Name(), "-out-key", keyF.Name(), "-duration", "100m", "-subnets", "10.1.1.1/32, , 10.2.2.2/32 , , ,, 10.5.5.5/32", "-groups", "1,, 2 , ,,,3,4,5"}
assert.Nil(t, signCert(args, ob, eb))
args = []string{"-version", "1", "-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", crtF.Name(), "-out-key", keyF.Name(), "-duration", "100m", "-subnets", "10.1.1.1/32, , 10.2.2.2/32 , , ,, 10.5.5.5/32", "-groups", "1,, 2 , ,,,3,4,5"}
require.NoError(t, signCert(args, ob, eb, nopw))
// test that we won't overwrite existing certificate file
os.Remove(keyF.Name())
ob.Reset()
eb.Reset()
args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", crtF.Name(), "-out-key", keyF.Name(), "-duration", "100m", "-subnets", "10.1.1.1/32, , 10.2.2.2/32 , , ,, 10.5.5.5/32", "-groups", "1,, 2 , ,,,3,4,5"}
assert.EqualError(t, signCert(args, ob, eb), "refusing to overwrite existing cert: "+crtF.Name())
args = []string{"-version", "1", "-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", crtF.Name(), "-out-key", keyF.Name(), "-duration", "100m", "-subnets", "10.1.1.1/32, , 10.2.2.2/32 , , ,, 10.5.5.5/32", "-groups", "1,, 2 , ,,,3,4,5"}
require.EqualError(t, signCert(args, ob, eb, nopw), "refusing to overwrite existing cert: "+crtF.Name())
assert.Empty(t, ob.String())
assert.Empty(t, eb.String())
// create valid cert/key using encrypted CA key
os.Remove(caKeyF.Name())
os.Remove(caCrtF.Name())
os.Remove(keyF.Name())
os.Remove(crtF.Name())
ob.Reset()
eb.Reset()
caKeyF, err = os.CreateTemp("", "sign-cert.key")
require.NoError(t, err)
defer os.Remove(caKeyF.Name())
caCrtF, err = os.CreateTemp("", "sign-cert.crt")
require.NoError(t, err)
defer os.Remove(caCrtF.Name())
// generate the encrypted key
caPub, caPriv, _ = ed25519.GenerateKey(rand.Reader)
kdfParams := cert.NewArgon2Parameters(64*1024, 4, 3)
b, _ = cert.EncryptAndMarshalSigningPrivateKey(cert.Curve_CURVE25519, caPriv, passphrase, kdfParams)
caKeyF.Write(b)
ca, _ = NewTestCaCert("ca", caPub, caPriv, time.Now(), time.Now().Add(time.Minute*200), nil, nil, nil)
b, _ = ca.MarshalPEM()
caCrtF.Write(b)
// test with the proper password
args = []string{"-version", "1", "-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", crtF.Name(), "-out-key", keyF.Name(), "-duration", "100m", "-subnets", "10.1.1.1/32, , 10.2.2.2/32 , , ,, 10.5.5.5/32", "-groups", "1,, 2 , ,,,3,4,5"}
require.NoError(t, signCert(args, ob, eb, testpw))
assert.Equal(t, "Enter passphrase: ", ob.String())
assert.Empty(t, eb.String())
// test with the wrong password
ob.Reset()
eb.Reset()
testpw.password = []byte("invalid password")
args = []string{"-version", "1", "-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", crtF.Name(), "-out-key", keyF.Name(), "-duration", "100m", "-subnets", "10.1.1.1/32, , 10.2.2.2/32 , , ,, 10.5.5.5/32", "-groups", "1,, 2 , ,,,3,4,5"}
require.Error(t, signCert(args, ob, eb, testpw))
assert.Equal(t, "Enter passphrase: ", ob.String())
assert.Empty(t, eb.String())
// test with the user not entering a password
ob.Reset()
eb.Reset()
args = []string{"-version", "1", "-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", crtF.Name(), "-out-key", keyF.Name(), "-duration", "100m", "-subnets", "10.1.1.1/32, , 10.2.2.2/32 , , ,, 10.5.5.5/32", "-groups", "1,, 2 , ,,,3,4,5"}
require.Error(t, signCert(args, ob, eb, nopw))
// normally the user hitting enter on the prompt would add newlines between these
assert.Equal(t, "Enter passphrase: Enter passphrase: Enter passphrase: Enter passphrase: Enter passphrase: ", ob.String())
assert.Empty(t, eb.String())
// test an error condition
ob.Reset()
eb.Reset()
args = []string{"-version", "1", "-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", crtF.Name(), "-out-key", keyF.Name(), "-duration", "100m", "-subnets", "10.1.1.1/32, , 10.2.2.2/32 , , ,, 10.5.5.5/32", "-groups", "1,, 2 , ,,,3,4,5"}
require.Error(t, signCert(args, ob, eb, errpw))
assert.Equal(t, "Enter passphrase: ", ob.String())
assert.Empty(t, eb.String())
}


@@ -1,14 +1,15 @@
package main
import (
"errors"
"flag"
"fmt"
"github.com/slackhq/nebula/cert"
"io"
"io/ioutil"
"os"
"strings"
"time"
"github.com/slackhq/nebula/cert"
)
type verifyFlags struct {
@@ -39,16 +40,16 @@ func verify(args []string, out io.Writer, errOut io.Writer) error {
return err
}
rawCACert, err := ioutil.ReadFile(*vf.caPath)
rawCACert, err := os.ReadFile(*vf.caPath)
if err != nil {
return fmt.Errorf("error while reading ca: %s", err)
return fmt.Errorf("error while reading ca: %w", err)
}
caPool := cert.NewCAPool()
for {
rawCACert, err = caPool.AddCACertificate(rawCACert)
rawCACert, err = caPool.AddCAFromPEM(rawCACert)
if err != nil {
return fmt.Errorf("error while adding ca cert to pool: %s", err)
return fmt.Errorf("error while adding ca cert to pool: %w", err)
}
if rawCACert == nil || len(rawCACert) == 0 || strings.TrimSpace(string(rawCACert)) == "" {
@@ -56,22 +57,32 @@ func verify(args []string, out io.Writer, errOut io.Writer) error {
}
}
rawCert, err := ioutil.ReadFile(*vf.certPath)
rawCert, err := os.ReadFile(*vf.certPath)
if err != nil {
return fmt.Errorf("unable to read crt; %s", err)
return fmt.Errorf("unable to read crt: %w", err)
}
var errs []error
for {
if len(rawCert) == 0 {
break
}
c, extra, err := cert.UnmarshalCertificateFromPEM(rawCert)
if err != nil {
return fmt.Errorf("error while parsing crt: %w", err)
}
rawCert = extra
_, err = caPool.VerifyCertificate(time.Now(), c)
if err != nil {
switch {
case errors.Is(err, cert.ErrCaNotFound):
errs = append(errs, fmt.Errorf("error while verifying certificate v%d %s with issuer %s: %w", c.Version(), c.Name(), c.Issuer(), err))
default:
errs = append(errs, fmt.Errorf("error while verifying certificate %+v: %w", c, err))
}
}
}
c, _, err := cert.UnmarshalNebulaCertificateFromPEM(rawCert)
if err != nil {
return fmt.Errorf("error while parsing crt: %s", err)
}
good, err := c.Verify(time.Now(), caPool)
if !good {
return err
}
return nil
return errors.Join(errs...)
}
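One detail worth noting in the new verify loop: errors.Join returns nil when no non-nil errors were collected, so the happy path where every certificate in the bundle verifies still yields a nil error. A standalone illustration of that behavior (not part of the diff):

package main

import (
	"errors"
	"fmt"
)

func main() {
	var errs []error
	fmt.Println(errors.Join(errs...) == nil) // true: nothing was appended, so verify returns nil

	errs = append(errs, errors.New("cert one failed"), errors.New("cert two failed"))
	fmt.Println(errors.Join(errs...)) // both messages, newline separated
}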
func verifySummary() string {
@@ -80,7 +91,7 @@ func verifySummary() string {
func verifyHelp(out io.Writer) {
vf := newVerifyFlags()
out.Write([]byte("Usage of " + os.Args[0] + " " + verifySummary() + "\n"))
_, _ = out.Write([]byte("Usage of " + os.Args[0] + " " + verifySummary() + "\n"))
vf.set.SetOutput(out)
vf.set.PrintDefaults()
}


@@ -3,13 +3,14 @@ package main
import (
"bytes"
"crypto/rand"
"github.com/slackhq/nebula/cert"
"github.com/stretchr/testify/assert"
"golang.org/x/crypto/ed25519"
"io/ioutil"
"os"
"testing"
"time"
"github.com/slackhq/nebula/cert"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"golang.org/x/crypto/ed25519"
)
func Test_verifySummary(t *testing.T) {
@@ -37,105 +38,87 @@ func Test_verify(t *testing.T) {
// required args
assertHelpError(t, verify([]string{"-ca", "derp"}, ob, eb), "-crt is required")
assert.Equal(t, "", ob.String())
assert.Equal(t, "", eb.String())
assert.Empty(t, ob.String())
assert.Empty(t, eb.String())
assertHelpError(t, verify([]string{"-crt", "derp"}, ob, eb), "-ca is required")
assert.Equal(t, "", ob.String())
assert.Equal(t, "", eb.String())
assert.Empty(t, ob.String())
assert.Empty(t, eb.String())
// no ca at path
ob.Reset()
eb.Reset()
err := verify([]string{"-ca", "does_not_exist", "-crt", "does_not_exist"}, ob, eb)
assert.Equal(t, "", ob.String())
assert.Equal(t, "", eb.String())
assert.EqualError(t, err, "error while reading ca: open does_not_exist: "+NoSuchFileError)
assert.Empty(t, ob.String())
assert.Empty(t, eb.String())
require.EqualError(t, err, "error while reading ca: open does_not_exist: "+NoSuchFileError)
// invalid ca at path
ob.Reset()
eb.Reset()
caFile, err := ioutil.TempFile("", "verify-ca")
assert.Nil(t, err)
caFile, err := os.CreateTemp("", "verify-ca")
require.NoError(t, err)
defer os.Remove(caFile.Name())
caFile.WriteString("-----BEGIN NOPE-----")
err = verify([]string{"-ca", caFile.Name(), "-crt", "does_not_exist"}, ob, eb)
assert.Equal(t, "", ob.String())
assert.Equal(t, "", eb.String())
assert.EqualError(t, err, "error while adding ca cert to pool: input did not contain a valid PEM encoded block")
assert.Empty(t, ob.String())
assert.Empty(t, eb.String())
require.EqualError(t, err, "error while adding ca cert to pool: input did not contain a valid PEM encoded block")
// make a ca for later
caPub, caPriv, _ := ed25519.GenerateKey(rand.Reader)
ca := cert.NebulaCertificate{
Details: cert.NebulaCertificateDetails{
Name: "test-ca",
NotBefore: time.Now().Add(time.Hour * -1),
NotAfter: time.Now().Add(time.Hour),
PublicKey: caPub,
IsCA: true,
},
}
ca.Sign(caPriv)
b, _ := ca.MarshalToPEM()
ca, _ := NewTestCaCert("test-ca", caPub, caPriv, time.Now().Add(time.Hour*-1), time.Now().Add(time.Hour*2), nil, nil, nil)
b, _ := ca.MarshalPEM()
caFile.Truncate(0)
caFile.Seek(0, 0)
caFile.Write(b)
// no crt at path
err = verify([]string{"-ca", caFile.Name(), "-crt", "does_not_exist"}, ob, eb)
assert.Equal(t, "", ob.String())
assert.Equal(t, "", eb.String())
assert.EqualError(t, err, "unable to read crt; open does_not_exist: "+NoSuchFileError)
assert.Empty(t, ob.String())
assert.Empty(t, eb.String())
require.EqualError(t, err, "unable to read crt: open does_not_exist: "+NoSuchFileError)
// invalid crt at path
ob.Reset()
eb.Reset()
certFile, err := ioutil.TempFile("", "verify-cert")
assert.Nil(t, err)
certFile, err := os.CreateTemp("", "verify-cert")
require.NoError(t, err)
defer os.Remove(certFile.Name())
certFile.WriteString("-----BEGIN NOPE-----")
err = verify([]string{"-ca", caFile.Name(), "-crt", certFile.Name()}, ob, eb)
assert.Equal(t, "", ob.String())
assert.Equal(t, "", eb.String())
assert.EqualError(t, err, "error while parsing crt: input did not contain a valid PEM encoded block")
assert.Empty(t, ob.String())
assert.Empty(t, eb.String())
require.EqualError(t, err, "error while parsing crt: input did not contain a valid PEM encoded block")
// unverifiable cert at path
_, badPriv, _ := ed25519.GenerateKey(rand.Reader)
certPub, _ := x25519Keypair()
signer, _ := ca.Sha256Sum()
crt := cert.NebulaCertificate{
Details: cert.NebulaCertificateDetails{
Name: "test-cert",
NotBefore: time.Now().Add(time.Hour * -1),
NotAfter: time.Now().Add(time.Hour),
PublicKey: certPub,
IsCA: false,
Issuer: signer,
},
crt, _ := NewTestCert(ca, caPriv, "test-cert", time.Now().Add(time.Hour*-1), time.Now().Add(time.Hour), nil, nil, nil)
// Slightly evil hack to modify the certificate after it was sealed to generate an invalid signature
pub := crt.PublicKey()
for i, _ := range pub {
pub[i] = 0
}
crt.Sign(badPriv)
b, _ = crt.MarshalToPEM()
b, _ = crt.MarshalPEM()
certFile.Truncate(0)
certFile.Seek(0, 0)
certFile.Write(b)
err = verify([]string{"-ca", caFile.Name(), "-crt", certFile.Name()}, ob, eb)
assert.Equal(t, "", ob.String())
assert.Equal(t, "", eb.String())
assert.EqualError(t, err, "certificate signature did not match")
assert.Empty(t, ob.String())
assert.Empty(t, eb.String())
require.ErrorIs(t, err, cert.ErrSignatureMismatch)
// verified cert at path
crt.Sign(caPriv)
b, _ = crt.MarshalToPEM()
crt, _ = NewTestCert(ca, caPriv, "test-cert", time.Now().Add(time.Hour*-1), time.Now().Add(time.Hour), nil, nil, nil)
b, _ = crt.MarshalPEM()
certFile.Truncate(0)
certFile.Seek(0, 0)
certFile.Write(b)
err = verify([]string{"-ca", caFile.Name(), "-crt", certFile.Name()}, ob, eb)
assert.Equal(t, "", ob.String())
assert.Equal(t, "", eb.String())
assert.Nil(t, err)
assert.Empty(t, ob.String())
assert.Empty(t, eb.String())
require.NoError(t, err)
}


@@ -0,0 +1,10 @@
//go:build !windows
// +build !windows
package main
import "github.com/sirupsen/logrus"
func HookLogger(l *logrus.Logger) {
// Do nothing, let the logs flow to stdout/stderr
}


@@ -0,0 +1,54 @@
package main
import (
"fmt"
"io/ioutil"
"os"
"github.com/kardianos/service"
"github.com/sirupsen/logrus"
)
// HookLogger routes the logrus logs through the service logger so that they end up in the Windows Event Viewer
// logrus output will be discarded
func HookLogger(l *logrus.Logger) {
l.AddHook(newLogHook(logger))
l.SetOutput(ioutil.Discard)
}
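// logHook adapts the kardianos/service logger to the logrus.Hook interface so every logrus
// entry is forwarded to the service log (the Windows event log when running as a service).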
type logHook struct {
sl service.Logger
}
func newLogHook(sl service.Logger) *logHook {
return &logHook{sl: sl}
}
func (h *logHook) Fire(entry *logrus.Entry) error {
line, err := entry.String()
if err != nil {
fmt.Fprintf(os.Stderr, "Unable to read entry, %v", err)
return err
}
switch entry.Level {
case logrus.PanicLevel:
return h.sl.Error(line)
case logrus.FatalLevel:
return h.sl.Error(line)
case logrus.ErrorLevel:
return h.sl.Error(line)
case logrus.WarnLevel:
return h.sl.Warning(line)
case logrus.InfoLevel:
return h.sl.Info(line)
case logrus.DebugLevel:
return h.sl.Info(line)
default:
return nil
}
}
func (h *logHook) Levels() []logrus.Level {
return logrus.AllLevels
}


@@ -5,12 +5,15 @@ import (
"fmt"
"os"
"github.com/sirupsen/logrus"
"github.com/slackhq/nebula"
"github.com/slackhq/nebula/config"
"github.com/slackhq/nebula/util"
)
// A version string that can be set with
//
// -ldflags "-X main.Build=SOMEVERSION"
// -ldflags "-X main.Build=SOMEVERSION"
//
// at compile-time.
var Build string
@@ -45,5 +48,26 @@ func main() {
os.Exit(1)
}
nebula.Main(*configPath, *configTest, Build)
l := logrus.New()
l.Out = os.Stdout
c := config.NewC(l)
err := c.Load(*configPath)
if err != nil {
fmt.Printf("failed to load config: %s", err)
os.Exit(1)
}
ctrl, err := nebula.Main(c, *configTest, Build, l, nil)
if err != nil {
util.LogWithContextIfNeeded("Failed to start", err, l)
os.Exit(1)
}
if !*configTest {
ctrl.Start()
ctrl.ShutdownBlock()
}
os.Exit(0)
}


@@ -1,50 +1,72 @@
package main
import (
"fmt"
"log"
"os"
"path/filepath"
"github.com/kardianos/service"
"github.com/sirupsen/logrus"
"github.com/slackhq/nebula"
"github.com/slackhq/nebula/config"
)
var logger service.Logger
type program struct {
exit chan struct{}
configPath *string
configTest *bool
build string
control *nebula.Control
}
func (p *program) Start(s service.Service) error {
logger.Info("Nebula service starting.")
p.exit = make(chan struct{})
// Start should not block.
go p.run()
return nil
}
logger.Info("Nebula service starting.")
func (p *program) run() error {
nebula.Main(*p.configPath, *p.configTest, Build)
l := logrus.New()
HookLogger(l)
c := config.NewC(l)
err := c.Load(*p.configPath)
if err != nil {
return fmt.Errorf("failed to load config: %s", err)
}
p.control, err = nebula.Main(c, *p.configTest, Build, l, nil)
if err != nil {
return err
}
p.control.Start()
return nil
}
func (p *program) Stop(s service.Service) error {
logger.Info("Nebula service stopping.")
close(p.exit)
p.control.Stop()
return nil
}
func doService(configPath *string, configTest *bool, build string, serviceFlag *string) {
func fileExists(filename string) bool {
_, err := os.Stat(filename)
if os.IsNotExist(err) {
return false
}
return true
}
func doService(configPath *string, configTest *bool, build string, serviceFlag *string) {
if *configPath == "" {
ex, err := os.Executable()
if err != nil {
panic(err)
}
*configPath = filepath.Dir(ex) + "/config.yaml"
if !fileExists(*configPath) {
*configPath = filepath.Dir(ex) + "/config.yml"
}
}
svcConfig := &service.Config{
@@ -60,6 +82,10 @@ func doService(configPath *string, configTest *bool, build string, serviceFlag *
build: build,
}
// Here are what the different loggers are doing:
// - `log` is the standard go log utility, meant to be used while the process is still attached to stdout/stderr
// - `logger` is the service log utility that may be attached to a special place depending on OS (Windows will have it attached to the event log)
// - above, in `Run` we create a `logrus.Logger` which is what nebula expects to use
s, err := service.New(prg, svcConfig)
if err != nil {
log.Fatal(err)
@@ -75,6 +101,7 @@ func doService(configPath *string, configTest *bool, build string, serviceFlag *
for {
err := <-errs
if err != nil {
// Route any errors from the system logger to stdout as a best effort to notice issues there
log.Print(err)
}
}
@@ -84,6 +111,7 @@ func doService(configPath *string, configTest *bool, build string, serviceFlag *
case "run":
err = s.Run()
if err != nil {
// Route any errors to the system logger
logger.Error(err)
}
default:


@@ -5,12 +5,15 @@ import (
"fmt"
"os"
"github.com/sirupsen/logrus"
"github.com/slackhq/nebula"
"github.com/slackhq/nebula/config"
"github.com/slackhq/nebula/util"
)
// A version string that can be set with
//
// -ldflags "-X main.Build=SOMEVERSION"
// -ldflags "-X main.Build=SOMEVERSION"
//
// at compile-time.
var Build string
@@ -39,5 +42,27 @@ func main() {
os.Exit(1)
}
nebula.Main(*configPath, *configTest, Build)
l := logrus.New()
l.Out = os.Stdout
c := config.NewC(l)
err := c.Load(*configPath)
if err != nil {
fmt.Printf("failed to load config: %s", err)
os.Exit(1)
}
ctrl, err := nebula.Main(c, *configTest, Build, l, nil)
if err != nil {
util.LogWithContextIfNeeded("Failed to start", err, l)
os.Exit(1)
}
if !*configTest {
ctrl.Start()
notifyReady(l)
ctrl.ShutdownBlock()
}
os.Exit(0)
}


@@ -0,0 +1,42 @@
package main
import (
"net"
"os"
"time"
"github.com/sirupsen/logrus"
)
// SdNotifyReady tells systemd the service is ready and dependent services can now be started
// https://www.freedesktop.org/software/systemd/man/sd_notify.html
// https://www.freedesktop.org/software/systemd/man/systemd.service.html
const SdNotifyReady = "READY=1"
func notifyReady(l *logrus.Logger) {
sockName := os.Getenv("NOTIFY_SOCKET")
if sockName == "" {
l.Debugln("NOTIFY_SOCKET systemd env var not set, not sending ready signal")
return
}
conn, err := net.DialTimeout("unixgram", sockName, time.Second)
if err != nil {
l.WithError(err).Error("failed to connect to systemd notification socket")
return
}
defer conn.Close()
err = conn.SetWriteDeadline(time.Now().Add(time.Second))
if err != nil {
l.WithError(err).Error("failed to set the write deadline for the systemd notification socket")
return
}
if _, err = conn.Write([]byte(SdNotifyReady)); err != nil {
l.WithError(err).Error("failed to signal the systemd notification socket")
return
}
l.Debugln("notified systemd the service is ready")
}
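For illustration only, a minimal test sketch of the notify flow above, assuming it sits next to this file in package main: stand in for systemd by listening on a temporary unixgram socket, point NOTIFY_SOCKET at it, and confirm the READY=1 datagram arrives. The test name and socket path are assumptions, not part of this change.

package main

import (
	"net"
	"path/filepath"
	"testing"

	"github.com/sirupsen/logrus"
)

func TestNotifyReady_sketch(t *testing.T) {
	// Pretend to be systemd: listen on a datagram socket in a temp dir.
	sock := filepath.Join(t.TempDir(), "notify.sock")
	conn, err := net.ListenUnixgram("unixgram", &net.UnixAddr{Name: sock, Net: "unixgram"})
	if err != nil {
		t.Fatal(err)
	}
	defer conn.Close()

	t.Setenv("NOTIFY_SOCKET", sock)
	notifyReady(logrus.New())

	// notifyReady should have written the READY=1 datagram.
	buf := make([]byte, 64)
	n, _, err := conn.ReadFromUnix(buf)
	if err != nil {
		t.Fatal(err)
	}
	if string(buf[:n]) != SdNotifyReady {
		t.Fatalf("expected %q, got %q", SdNotifyReady, string(buf[:n]))
	}
}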


@@ -0,0 +1,10 @@
//go:build !linux
// +build !linux
package main
import "github.com/sirupsen/logrus"
func notifyReady(_ *logrus.Logger) {
// No init service to notify
}

484
config.go

@@ -1,484 +0,0 @@
package nebula
import (
"fmt"
"github.com/imdario/mergo"
"github.com/sirupsen/logrus"
"gopkg.in/yaml.v2"
"io/ioutil"
"net"
"os"
"os/signal"
"path/filepath"
"regexp"
"sort"
"strconv"
"strings"
"syscall"
"time"
)
type Config struct {
path string
files []string
Settings map[interface{}]interface{}
oldSettings map[interface{}]interface{}
callbacks []func(*Config)
}
func NewConfig() *Config {
return &Config{
Settings: make(map[interface{}]interface{}),
}
}
// Load will find all yaml files within path and load them in lexical order
func (c *Config) Load(path string) error {
c.path = path
c.files = make([]string, 0)
err := c.resolve(path, true)
if err != nil {
return err
}
if len(c.files) == 0 {
return fmt.Errorf("no config files found at %s", path)
}
sort.Strings(c.files)
err = c.parse()
if err != nil {
return err
}
return nil
}
// RegisterReloadCallback stores a function to be called when a config reload is triggered. The functions registered
// here should decide if they need to make a change to the current process before making the change. HasChanged can be
// used to help decide if a change is necessary.
// These functions should return quickly or spawn their own go routine if they will take a while
func (c *Config) RegisterReloadCallback(f func(*Config)) {
c.callbacks = append(c.callbacks, f)
}
// HasChanged checks if the underlying structure of the provided key has changed after a config reload. The value of
// k in both the old and new settings will be serialized, the result of the string comparison is returned.
// If k is an empty string the entire config is tested.
// It's important to note that this is very rudimentary and susceptible to configuration ordering issues indicating
// there is change when there actually wasn't any.
func (c *Config) HasChanged(k string) bool {
if c.oldSettings == nil {
return false
}
var (
nv interface{}
ov interface{}
)
if k == "" {
nv = c.Settings
ov = c.oldSettings
k = "all settings"
} else {
nv = c.get(k, c.Settings)
ov = c.get(k, c.oldSettings)
}
newVals, err := yaml.Marshal(nv)
if err != nil {
l.WithField("config_path", k).WithError(err).Error("Error while marshaling new config")
}
oldVals, err := yaml.Marshal(ov)
if err != nil {
l.WithField("config_path", k).WithError(err).Error("Error while marshaling old config")
}
return string(newVals) != string(oldVals)
}
// CatchHUP will listen for the HUP signal in a go routine and reload all configs found in the
// original path provided to Load. The old settings are shallow copied for change detection after the reload.
func (c *Config) CatchHUP() {
ch := make(chan os.Signal, 1)
signal.Notify(ch, syscall.SIGHUP)
go func() {
for range ch {
l.Info("Caught HUP, reloading config")
c.ReloadConfig()
}
}()
}
func (c *Config) ReloadConfig() {
c.oldSettings = make(map[interface{}]interface{})
for k, v := range c.Settings {
c.oldSettings[k] = v
}
err := c.Load(c.path)
if err != nil {
l.WithField("config_path", c.path).WithError(err).Error("Error occurred while reloading config")
return
}
for _, v := range c.callbacks {
v(c)
}
}
// GetString will get the string for k or return the default d if not found or invalid
func (c *Config) GetString(k, d string) string {
r := c.Get(k)
if r == nil {
return d
}
return fmt.Sprintf("%v", r)
}
// GetStringSlice will get the slice of strings for k or return the default d if not found or invalid
func (c *Config) GetStringSlice(k string, d []string) []string {
r := c.Get(k)
if r == nil {
return d
}
rv, ok := r.([]interface{})
if !ok {
return d
}
v := make([]string, len(rv))
for i := 0; i < len(v); i++ {
v[i] = fmt.Sprintf("%v", rv[i])
}
return v
}
// GetMap will get the map for k or return the default d if not found or invalid
func (c *Config) GetMap(k string, d map[interface{}]interface{}) map[interface{}]interface{} {
r := c.Get(k)
if r == nil {
return d
}
v, ok := r.(map[interface{}]interface{})
if !ok {
return d
}
return v
}
// GetInt will get the int for k or return the default d if not found or invalid
func (c *Config) GetInt(k string, d int) int {
r := c.GetString(k, strconv.Itoa(d))
v, err := strconv.Atoi(r)
if err != nil {
return d
}
return v
}
// GetBool will get the bool for k or return the default d if not found or invalid
func (c *Config) GetBool(k string, d bool) bool {
r := strings.ToLower(c.GetString(k, fmt.Sprintf("%v", d)))
v, err := strconv.ParseBool(r)
if err != nil {
switch r {
case "y", "yes":
return true
case "n", "no":
return false
}
return d
}
return v
}
// GetDuration will get the duration for k or return the default d if not found or invalid
func (c *Config) GetDuration(k string, d time.Duration) time.Duration {
r := c.GetString(k, "")
v, err := time.ParseDuration(r)
if err != nil {
return d
}
return v
}
func (c *Config) GetAllowList(k string, allowInterfaces bool) (*AllowList, error) {
r := c.Get(k)
if r == nil {
return nil, nil
}
rawMap, ok := r.(map[interface{}]interface{})
if !ok {
return nil, fmt.Errorf("config `%s` has invalid type: %T", k, r)
}
tree := NewCIDRTree()
var nameRules []AllowListNameRule
firstValue := true
allValuesMatch := true
defaultSet := false
var allValues bool
for rawKey, rawValue := range rawMap {
rawCIDR, ok := rawKey.(string)
if !ok {
return nil, fmt.Errorf("config `%s` has invalid key (type %T): %v", k, rawKey, rawKey)
}
// Special rule for interface names
if rawCIDR == "interfaces" {
if !allowInterfaces {
return nil, fmt.Errorf("config `%s` does not support `interfaces`", k)
}
var err error
nameRules, err = c.getAllowListInterfaces(k, rawValue)
if err != nil {
return nil, err
}
continue
}
value, ok := rawValue.(bool)
if !ok {
return nil, fmt.Errorf("config `%s` has invalid value (type %T): %v", k, rawValue, rawValue)
}
_, cidr, err := net.ParseCIDR(rawCIDR)
if err != nil {
return nil, fmt.Errorf("config `%s` has invalid CIDR: %s", k, rawCIDR)
}
// TODO: should we error on duplicate CIDRs in the config?
tree.AddCIDR(cidr, value)
if firstValue {
allValues = value
firstValue = false
} else {
if value != allValues {
allValuesMatch = false
}
}
// Check if this is 0.0.0.0/0
bits, size := cidr.Mask.Size()
if bits == 0 && size == 32 {
defaultSet = true
}
}
if !defaultSet {
if allValuesMatch {
_, zeroCIDR, _ := net.ParseCIDR("0.0.0.0/0")
tree.AddCIDR(zeroCIDR, !allValues)
} else {
return nil, fmt.Errorf("config `%s` contains both true and false rules, but no default set for 0.0.0.0/0", k)
}
}
return &AllowList{cidrTree: tree, nameRules: nameRules}, nil
}
func (c *Config) getAllowListInterfaces(k string, v interface{}) ([]AllowListNameRule, error) {
var nameRules []AllowListNameRule
rawRules, ok := v.(map[interface{}]interface{})
if !ok {
return nil, fmt.Errorf("config `%s.interfaces` is invalid (type %T): %v", k, v, v)
}
firstEntry := true
var allValues bool
for rawName, rawAllow := range rawRules {
name, ok := rawName.(string)
if !ok {
return nil, fmt.Errorf("config `%s.interfaces` has invalid key (type %T): %v", k, rawName, rawName)
}
allow, ok := rawAllow.(bool)
if !ok {
return nil, fmt.Errorf("config `%s.interfaces` has invalid value (type %T): %v", k, rawAllow, rawAllow)
}
nameRE, err := regexp.Compile("^" + name + "$")
if err != nil {
return nil, fmt.Errorf("config `%s.interfaces` has invalid key: %s: %v", k, name, err)
}
nameRules = append(nameRules, AllowListNameRule{
Name: nameRE,
Allow: allow,
})
if firstEntry {
allValues = allow
firstEntry = false
} else {
if allow != allValues {
return nil, fmt.Errorf("config `%s.interfaces` values must all be the same true/false value", k)
}
}
}
return nameRules, nil
}
func (c *Config) Get(k string) interface{} {
return c.get(k, c.Settings)
}
func (c *Config) IsSet(k string) bool {
return c.get(k, c.Settings) != nil
}
func (c *Config) get(k string, v interface{}) interface{} {
parts := strings.Split(k, ".")
for _, p := range parts {
m, ok := v.(map[interface{}]interface{})
if !ok {
return nil
}
v, ok = m[p]
if !ok {
return nil
}
}
return v
}
// direct signifies if this is the config path directly specified by the user,
// versus a file/dir found by recursing into that path
func (c *Config) resolve(path string, direct bool) error {
i, err := os.Stat(path)
if err != nil {
return nil
}
if !i.IsDir() {
c.addFile(path, direct)
return nil
}
paths, err := readDirNames(path)
if err != nil {
return fmt.Errorf("problem while reading directory %s: %s", path, err)
}
for _, p := range paths {
err := c.resolve(filepath.Join(path, p), false)
if err != nil {
return err
}
}
return nil
}
func (c *Config) addFile(path string, direct bool) error {
ext := filepath.Ext(path)
if !direct && ext != ".yaml" && ext != ".yml" {
return nil
}
ap, err := filepath.Abs(path)
if err != nil {
return err
}
c.files = append(c.files, ap)
return nil
}
func (c *Config) parse() error {
var m map[interface{}]interface{}
for _, path := range c.files {
b, err := ioutil.ReadFile(path)
if err != nil {
return err
}
var nm map[interface{}]interface{}
err = yaml.Unmarshal(b, &nm)
if err != nil {
return err
}
// We need to use WithAppendSlice so that firewall rules in separate
// files are appended together
err = mergo.Merge(&nm, m, mergo.WithAppendSlice)
m = nm
if err != nil {
return err
}
}
c.Settings = m
return nil
}
func readDirNames(path string) ([]string, error) {
f, err := os.Open(path)
if err != nil {
return nil, err
}
paths, err := f.Readdirnames(-1)
f.Close()
if err != nil {
return nil, err
}
sort.Strings(paths)
return paths, nil
}
func configLogger(c *Config) error {
// set up our logging level
logLevel, err := logrus.ParseLevel(strings.ToLower(c.GetString("logging.level", "info")))
if err != nil {
return fmt.Errorf("%s; possible levels: %s", err, logrus.AllLevels)
}
l.SetLevel(logLevel)
timestampFormat := c.GetString("logging.timestamp_format", "")
fullTimestamp := (timestampFormat != "")
if timestampFormat == "" {
timestampFormat = time.RFC3339
}
logFormat := strings.ToLower(c.GetString("logging.format", "text"))
switch logFormat {
case "text":
l.Formatter = &logrus.TextFormatter{
TimestampFormat: timestampFormat,
FullTimestamp: fullTimestamp,
}
case "json":
l.Formatter = &logrus.JSONFormatter{
TimestampFormat: timestampFormat,
}
default:
return fmt.Errorf("unknown log format `%s`. possible formats: %s", logFormat, []string{"text", "json"})
}
return nil
}

418
config/config.go Normal file

@@ -0,0 +1,418 @@
package config
import (
"context"
"errors"
"fmt"
"math"
"os"
"os/signal"
"path/filepath"
"sort"
"strconv"
"strings"
"sync"
"syscall"
"time"
"dario.cat/mergo"
"github.com/sirupsen/logrus"
"gopkg.in/yaml.v3"
)
type C struct {
path string
files []string
Settings map[string]any
oldSettings map[string]any
callbacks []func(*C)
l *logrus.Logger
reloadLock sync.Mutex
}
func NewC(l *logrus.Logger) *C {
return &C{
Settings: make(map[string]any),
l: l,
}
}
// Load will find all yaml files within path and load them in lexical order
func (c *C) Load(path string) error {
c.path = path
c.files = make([]string, 0)
err := c.resolve(path, true)
if err != nil {
return err
}
if len(c.files) == 0 {
return fmt.Errorf("no config files found at %s", path)
}
sort.Strings(c.files)
err = c.parse()
if err != nil {
return err
}
return nil
}
func (c *C) LoadString(raw string) error {
if raw == "" {
return errors.New("Empty configuration")
}
return c.parseRaw([]byte(raw))
}
// RegisterReloadCallback stores a function to be called when a config reload is triggered. The functions registered
// here should decide if they need to make a change to the current process before making the change. HasChanged can be
// used to help decide if a change is necessary.
// These functions should return quickly or spawn their own go routine if they will take a while
func (c *C) RegisterReloadCallback(f func(*C)) {
c.callbacks = append(c.callbacks, f)
}
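As a usage sketch (the component and key names here are illustrative, not part of this change), a caller registers a callback and pairs it with HasChanged so it only re-applies settings when its own subtree of the config actually moved:

package main

import (
	"github.com/sirupsen/logrus"
	"github.com/slackhq/nebula/config"
)

// watchListenConfig is a hypothetical reload-aware component.
func watchListenConfig(l *logrus.Logger, c *config.C) {
	c.RegisterReloadCallback(func(c *config.C) {
		// Only react when the relevant subtree changed during this reload.
		if !c.HasChanged("listen") {
			return
		}
		l.WithField("port", c.GetInt("listen.port", 4242)).
			Info("listen config changed; a re-bind would happen here")
	})
}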
// InitialLoad returns true if this is the first load of the config, and ReloadConfig has not been called yet.
func (c *C) InitialLoad() bool {
return c.oldSettings == nil
}
// HasChanged checks if the underlying structure of the provided key has changed after a config reload. The value of
// k in both the old and new settings will be serialized, the result of the string comparison is returned.
// If k is an empty string the entire config is tested.
// It's important to note that this is very rudimentary and susceptible to configuration ordering issues indicating
// there is change when there actually wasn't any.
func (c *C) HasChanged(k string) bool {
if c.oldSettings == nil {
return false
}
var (
nv any
ov any
)
if k == "" {
nv = c.Settings
ov = c.oldSettings
k = "all settings"
} else {
nv = c.get(k, c.Settings)
ov = c.get(k, c.oldSettings)
}
newVals, err := yaml.Marshal(nv)
if err != nil {
c.l.WithField("config_path", k).WithError(err).Error("Error while marshaling new config")
}
oldVals, err := yaml.Marshal(ov)
if err != nil {
c.l.WithField("config_path", k).WithError(err).Error("Error while marshaling old config")
}
return string(newVals) != string(oldVals)
}
// CatchHUP will listen for the HUP signal in a go routine and reload all configs found in the
// original path provided to Load. The old settings are shallow copied for change detection after the reload.
func (c *C) CatchHUP(ctx context.Context) {
if c.path == "" {
return
}
ch := make(chan os.Signal, 1)
signal.Notify(ch, syscall.SIGHUP)
go func() {
for {
select {
case <-ctx.Done():
signal.Stop(ch)
close(ch)
return
case <-ch:
c.l.Info("Caught HUP, reloading config")
c.ReloadConfig()
}
}
}()
}
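A minimal wiring sketch (assumed function and path names, not taken from the diff): load the config once, then hand CatchHUP a context so the SIGHUP watcher reloads from the original path until the context is cancelled.

package main

import (
	"context"

	"github.com/sirupsen/logrus"
	"github.com/slackhq/nebula/config"
)

func run(path string) error {
	l := logrus.New()
	c := config.NewC(l)
	if err := c.Load(path); err != nil {
		return err
	}

	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	// Reload every yaml file under path whenever the process receives SIGHUP,
	// until ctx is cancelled.
	c.CatchHUP(ctx)

	// ... start the rest of the service here ...
	return nil
}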
func (c *C) ReloadConfig() {
c.reloadLock.Lock()
defer c.reloadLock.Unlock()
c.oldSettings = make(map[string]any)
for k, v := range c.Settings {
c.oldSettings[k] = v
}
err := c.Load(c.path)
if err != nil {
c.l.WithField("config_path", c.path).WithError(err).Error("Error occurred while reloading config")
return
}
for _, v := range c.callbacks {
v(c)
}
}
func (c *C) ReloadConfigString(raw string) error {
c.reloadLock.Lock()
defer c.reloadLock.Unlock()
c.oldSettings = make(map[string]any)
for k, v := range c.Settings {
c.oldSettings[k] = v
}
err := c.LoadString(raw)
if err != nil {
return err
}
for _, v := range c.callbacks {
v(c)
}
return nil
}
// GetString will get the string for k or return the default d if not found or invalid
func (c *C) GetString(k, d string) string {
r := c.Get(k)
if r == nil {
return d
}
return fmt.Sprintf("%v", r)
}
// GetStringSlice will get the slice of strings for k or return the default d if not found or invalid
func (c *C) GetStringSlice(k string, d []string) []string {
r := c.Get(k)
if r == nil {
return d
}
rv, ok := r.([]any)
if !ok {
return d
}
v := make([]string, len(rv))
for i := 0; i < len(v); i++ {
v[i] = fmt.Sprintf("%v", rv[i])
}
return v
}
// GetMap will get the map for k or return the default d if not found or invalid
func (c *C) GetMap(k string, d map[string]any) map[string]any {
r := c.Get(k)
if r == nil {
return d
}
v, ok := r.(map[string]any)
if !ok {
return d
}
return v
}
// GetInt will get the int for k or return the default d if not found or invalid
func (c *C) GetInt(k string, d int) int {
r := c.GetString(k, strconv.Itoa(d))
v, err := strconv.Atoi(r)
if err != nil {
return d
}
return v
}
// GetUint32 will get the uint32 for k or return the default d if not found or invalid
func (c *C) GetUint32(k string, d uint32) uint32 {
r := c.GetInt(k, int(d))
if r < 0 || uint64(r) > uint64(math.MaxUint32) {
return d
}
return uint32(r)
}
// GetBool will get the bool for k or return the default d if not found or invalid
func (c *C) GetBool(k string, d bool) bool {
r := strings.ToLower(c.GetString(k, fmt.Sprintf("%v", d)))
v, err := strconv.ParseBool(r)
if err != nil {
switch r {
case "y", "yes":
return true
case "n", "no":
return false
}
return d
}
return v
}
func AsBool(v any) (value bool, ok bool) {
switch x := v.(type) {
case bool:
return x, true
case string:
switch x {
case "y", "yes":
return true, true
case "n", "no":
return false, true
}
}
return false, false
}
// GetDuration will get the duration for k or return the default d if not found or invalid
func (c *C) GetDuration(k string, d time.Duration) time.Duration {
r := c.GetString(k, "")
v, err := time.ParseDuration(r)
if err != nil {
return d
}
return v
}
func (c *C) Get(k string) any {
return c.get(k, c.Settings)
}
func (c *C) IsSet(k string) bool {
return c.get(k, c.Settings) != nil
}
func (c *C) get(k string, v any) any {
parts := strings.Split(k, ".")
for _, p := range parts {
m, ok := v.(map[string]any)
if !ok {
return nil
}
v, ok = m[p]
if !ok {
return nil
}
}
return v
}
// direct signifies if this is the config path directly specified by the user,
// versus a file/dir found by recursing into that path
func (c *C) resolve(path string, direct bool) error {
i, err := os.Stat(path)
if err != nil {
return nil
}
if !i.IsDir() {
c.addFile(path, direct)
return nil
}
paths, err := readDirNames(path)
if err != nil {
return fmt.Errorf("problem while reading directory %s: %s", path, err)
}
for _, p := range paths {
err := c.resolve(filepath.Join(path, p), false)
if err != nil {
return err
}
}
return nil
}
func (c *C) addFile(path string, direct bool) error {
ext := filepath.Ext(path)
if !direct && ext != ".yaml" && ext != ".yml" {
return nil
}
ap, err := filepath.Abs(path)
if err != nil {
return err
}
c.files = append(c.files, ap)
return nil
}
func (c *C) parseRaw(b []byte) error {
var m map[string]any
err := yaml.Unmarshal(b, &m)
if err != nil {
return err
}
c.Settings = m
return nil
}
func (c *C) parse() error {
var m map[string]any
for _, path := range c.files {
b, err := os.ReadFile(path)
if err != nil {
return err
}
var nm map[string]any
err = yaml.Unmarshal(b, &nm)
if err != nil {
return err
}
// We need to use WithAppendSlice so that firewall rules in separate
// files are appended together
err = mergo.Merge(&nm, m, mergo.WithAppendSlice)
m = nm
if err != nil {
return err
}
}
c.Settings = m
return nil
}
func readDirNames(path string) ([]string, error) {
f, err := os.Open(path)
if err != nil {
return nil, err
}
paths, err := f.Readdirnames(-1)
f.Close()
if err != nil {
return nil, err
}
sort.Strings(paths)
return paths, nil
}

222
config/config_test.go Normal file

@@ -0,0 +1,222 @@
package config
import (
"os"
"path/filepath"
"testing"
"time"
"dario.cat/mergo"
"github.com/slackhq/nebula/test"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"gopkg.in/yaml.v3"
)
func TestConfig_Load(t *testing.T) {
l := test.NewLogger()
dir, err := os.MkdirTemp("", "config-test")
// invalid yaml
c := NewC(l)
os.WriteFile(filepath.Join(dir, "01.yaml"), []byte(" invalid yaml"), 0644)
require.EqualError(t, c.Load(dir), "yaml: unmarshal errors:\n line 1: cannot unmarshal !!str `invalid...` into map[string]interface {}")
// simple multi config merge
c = NewC(l)
os.RemoveAll(dir)
os.Mkdir(dir, 0755)
require.NoError(t, err)
os.WriteFile(filepath.Join(dir, "01.yaml"), []byte("outer:\n inner: hi"), 0644)
os.WriteFile(filepath.Join(dir, "02.yml"), []byte("outer:\n inner: override\nnew: hi"), 0644)
require.NoError(t, c.Load(dir))
expected := map[string]any{
"outer": map[string]any{
"inner": "override",
},
"new": "hi",
}
assert.Equal(t, expected, c.Settings)
}
func TestConfig_Get(t *testing.T) {
l := test.NewLogger()
// test simple type
c := NewC(l)
c.Settings["firewall"] = map[string]any{"outbound": "hi"}
assert.Equal(t, "hi", c.Get("firewall.outbound"))
// test complex type
inner := []map[string]any{{"port": "1", "code": "2"}}
c.Settings["firewall"] = map[string]any{"outbound": inner}
assert.EqualValues(t, inner, c.Get("firewall.outbound"))
// test missing
assert.Nil(t, c.Get("firewall.nope"))
}
func TestConfig_GetStringSlice(t *testing.T) {
l := test.NewLogger()
c := NewC(l)
c.Settings["slice"] = []any{"one", "two"}
assert.Equal(t, []string{"one", "two"}, c.GetStringSlice("slice", []string{}))
}
func TestConfig_GetBool(t *testing.T) {
l := test.NewLogger()
c := NewC(l)
c.Settings["bool"] = true
assert.True(t, c.GetBool("bool", false))
c.Settings["bool"] = "true"
assert.True(t, c.GetBool("bool", false))
c.Settings["bool"] = false
assert.False(t, c.GetBool("bool", true))
c.Settings["bool"] = "false"
assert.False(t, c.GetBool("bool", true))
c.Settings["bool"] = "Y"
assert.True(t, c.GetBool("bool", false))
c.Settings["bool"] = "yEs"
assert.True(t, c.GetBool("bool", false))
c.Settings["bool"] = "N"
assert.False(t, c.GetBool("bool", true))
c.Settings["bool"] = "nO"
assert.False(t, c.GetBool("bool", true))
}
func TestConfig_HasChanged(t *testing.T) {
l := test.NewLogger()
// No reload has occurred, return false
c := NewC(l)
c.Settings["test"] = "hi"
assert.False(t, c.HasChanged(""))
// Test key change
c = NewC(l)
c.Settings["test"] = "hi"
c.oldSettings = map[string]any{"test": "no"}
assert.True(t, c.HasChanged("test"))
assert.True(t, c.HasChanged(""))
// No key change
c = NewC(l)
c.Settings["test"] = "hi"
c.oldSettings = map[string]any{"test": "hi"}
assert.False(t, c.HasChanged("test"))
assert.False(t, c.HasChanged(""))
}
func TestConfig_ReloadConfig(t *testing.T) {
l := test.NewLogger()
done := make(chan bool, 1)
dir, err := os.MkdirTemp("", "config-test")
require.NoError(t, err)
os.WriteFile(filepath.Join(dir, "01.yaml"), []byte("outer:\n inner: hi"), 0644)
c := NewC(l)
require.NoError(t, c.Load(dir))
assert.False(t, c.HasChanged("outer.inner"))
assert.False(t, c.HasChanged("outer"))
assert.False(t, c.HasChanged(""))
os.WriteFile(filepath.Join(dir, "01.yaml"), []byte("outer:\n inner: ho"), 0644)
c.RegisterReloadCallback(func(c *C) {
done <- true
})
c.ReloadConfig()
assert.True(t, c.HasChanged("outer.inner"))
assert.True(t, c.HasChanged("outer"))
assert.True(t, c.HasChanged(""))
// Make sure we call the callbacks
select {
case <-done:
case <-time.After(1 * time.Second):
panic("timeout")
}
}
// Ensure mergo merges are done the way we expect.
// This is needed to test for potential regressions, like:
// - https://github.com/imdario/mergo/issues/187
func TestConfig_MergoMerge(t *testing.T) {
configs := [][]byte{
[]byte(`
listen:
port: 1234
`),
[]byte(`
firewall:
inbound:
- port: 443
proto: tcp
groups:
- server
- port: 443
proto: tcp
groups:
- webapp
`),
[]byte(`
listen:
host: 0.0.0.0
port: 4242
firewall:
outbound:
- port: any
proto: any
host: any
inbound:
- port: any
proto: icmp
host: any
`),
}
var m map[string]any
// merge the same way config.parse() merges
for _, b := range configs {
var nm map[string]any
err := yaml.Unmarshal(b, &nm)
require.NoError(t, err)
// We need to use WithAppendSlice so that firewall rules in separate
// files are appended together
err = mergo.Merge(&nm, m, mergo.WithAppendSlice)
m = nm
require.NoError(t, err)
}
t.Logf("Merged Config: %#v", m)
mYaml, err := yaml.Marshal(m)
require.NoError(t, err)
t.Logf("Merged Config as YAML:\n%s", mYaml)
// If a bug is present, some items might be replaced instead of merged like we expect
expected := map[string]any{
"firewall": map[string]any{
"inbound": []any{
map[string]any{"host": "any", "port": "any", "proto": "icmp"},
map[string]any{"groups": []any{"server"}, "port": 443, "proto": "tcp"},
map[string]any{"groups": []any{"webapp"}, "port": 443, "proto": "tcp"}},
"outbound": []any{
map[string]any{"host": "any", "port": "any", "proto": "any"}}},
"listen": map[string]any{
"host": "0.0.0.0",
"port": 4242,
},
}
assert.Equal(t, expected, m)
}


@@ -1,211 +0,0 @@
package nebula
import (
"github.com/stretchr/testify/assert"
"io/ioutil"
"os"
"path/filepath"
"testing"
"time"
)
func TestConfig_Load(t *testing.T) {
dir, err := ioutil.TempDir("", "config-test")
// invalid yaml
c := NewConfig()
ioutil.WriteFile(filepath.Join(dir, "01.yaml"), []byte(" invalid yaml"), 0644)
assert.EqualError(t, c.Load(dir), "yaml: unmarshal errors:\n line 1: cannot unmarshal !!str `invalid...` into map[interface {}]interface {}")
// simple multi config merge
c = NewConfig()
os.RemoveAll(dir)
os.Mkdir(dir, 0755)
assert.Nil(t, err)
ioutil.WriteFile(filepath.Join(dir, "01.yaml"), []byte("outer:\n inner: hi"), 0644)
ioutil.WriteFile(filepath.Join(dir, "02.yml"), []byte("outer:\n inner: override\nnew: hi"), 0644)
assert.Nil(t, c.Load(dir))
expected := map[interface{}]interface{}{
"outer": map[interface{}]interface{}{
"inner": "override",
},
"new": "hi",
}
assert.Equal(t, expected, c.Settings)
//TODO: test symlinked file
//TODO: test symlinked directory
}
func TestConfig_Get(t *testing.T) {
// test simple type
c := NewConfig()
c.Settings["firewall"] = map[interface{}]interface{}{"outbound": "hi"}
assert.Equal(t, "hi", c.Get("firewall.outbound"))
// test complex type
inner := []map[interface{}]interface{}{{"port": "1", "code": "2"}}
c.Settings["firewall"] = map[interface{}]interface{}{"outbound": inner}
assert.EqualValues(t, inner, c.Get("firewall.outbound"))
// test missing
assert.Nil(t, c.Get("firewall.nope"))
}
func TestConfig_GetStringSlice(t *testing.T) {
c := NewConfig()
c.Settings["slice"] = []interface{}{"one", "two"}
assert.Equal(t, []string{"one", "two"}, c.GetStringSlice("slice", []string{}))
}
func TestConfig_GetBool(t *testing.T) {
c := NewConfig()
c.Settings["bool"] = true
assert.Equal(t, true, c.GetBool("bool", false))
c.Settings["bool"] = "true"
assert.Equal(t, true, c.GetBool("bool", false))
c.Settings["bool"] = false
assert.Equal(t, false, c.GetBool("bool", true))
c.Settings["bool"] = "false"
assert.Equal(t, false, c.GetBool("bool", true))
c.Settings["bool"] = "Y"
assert.Equal(t, true, c.GetBool("bool", false))
c.Settings["bool"] = "yEs"
assert.Equal(t, true, c.GetBool("bool", false))
c.Settings["bool"] = "N"
assert.Equal(t, false, c.GetBool("bool", true))
c.Settings["bool"] = "nO"
assert.Equal(t, false, c.GetBool("bool", true))
}
func TestConfig_GetAllowList(t *testing.T) {
c := NewConfig()
c.Settings["allowlist"] = map[interface{}]interface{}{
"192.168.0.0": true,
}
r, err := c.GetAllowList("allowlist", false)
assert.EqualError(t, err, "config `allowlist` has invalid CIDR: 192.168.0.0")
assert.Nil(t, r)
c.Settings["allowlist"] = map[interface{}]interface{}{
"192.168.0.0/16": "abc",
}
r, err = c.GetAllowList("allowlist", false)
assert.EqualError(t, err, "config `allowlist` has invalid value (type string): abc")
c.Settings["allowlist"] = map[interface{}]interface{}{
"192.168.0.0/16": true,
"10.0.0.0/8": false,
}
r, err = c.GetAllowList("allowlist", false)
assert.EqualError(t, err, "config `allowlist` contains both true and false rules, but no default set for 0.0.0.0/0")
c.Settings["allowlist"] = map[interface{}]interface{}{
"0.0.0.0/0": true,
"10.0.0.0/8": false,
"10.42.42.0/24": true,
}
r, err = c.GetAllowList("allowlist", false)
if assert.NoError(t, err) {
assert.NotNil(t, r)
}
// Test interface names
c.Settings["allowlist"] = map[interface{}]interface{}{
"interfaces": map[interface{}]interface{}{
`docker.*`: false,
},
}
r, err = c.GetAllowList("allowlist", false)
assert.EqualError(t, err, "config `allowlist` does not support `interfaces`")
c.Settings["allowlist"] = map[interface{}]interface{}{
"interfaces": map[interface{}]interface{}{
`docker.*`: "foo",
},
}
r, err = c.GetAllowList("allowlist", true)
assert.EqualError(t, err, "config `allowlist.interfaces` has invalid value (type string): foo")
c.Settings["allowlist"] = map[interface{}]interface{}{
"interfaces": map[interface{}]interface{}{
`docker.*`: false,
`eth.*`: true,
},
}
r, err = c.GetAllowList("allowlist", true)
assert.EqualError(t, err, "config `allowlist.interfaces` values must all be the same true/false value")
c.Settings["allowlist"] = map[interface{}]interface{}{
"interfaces": map[interface{}]interface{}{
`docker.*`: false,
},
}
r, err = c.GetAllowList("allowlist", true)
if assert.NoError(t, err) {
assert.NotNil(t, r)
}
}
func TestConfig_HasChanged(t *testing.T) {
// No reload has occurred, return false
c := NewConfig()
c.Settings["test"] = "hi"
assert.False(t, c.HasChanged(""))
// Test key change
c = NewConfig()
c.Settings["test"] = "hi"
c.oldSettings = map[interface{}]interface{}{"test": "no"}
assert.True(t, c.HasChanged("test"))
assert.True(t, c.HasChanged(""))
// No key change
c = NewConfig()
c.Settings["test"] = "hi"
c.oldSettings = map[interface{}]interface{}{"test": "hi"}
assert.False(t, c.HasChanged("test"))
assert.False(t, c.HasChanged(""))
}
func TestConfig_ReloadConfig(t *testing.T) {
done := make(chan bool, 1)
dir, err := ioutil.TempDir("", "config-test")
assert.Nil(t, err)
ioutil.WriteFile(filepath.Join(dir, "01.yaml"), []byte("outer:\n inner: hi"), 0644)
c := NewConfig()
assert.Nil(t, c.Load(dir))
assert.False(t, c.HasChanged("outer.inner"))
assert.False(t, c.HasChanged("outer"))
assert.False(t, c.HasChanged(""))
ioutil.WriteFile(filepath.Join(dir, "01.yaml"), []byte("outer:\n inner: ho"), 0644)
c.RegisterReloadCallback(func(c *Config) {
done <- true
})
c.ReloadConfig()
assert.True(t, c.HasChanged("outer.inner"))
assert.True(t, c.HasChanged("outer"))
assert.True(t, c.HasChanged(""))
// Make sure we call the callbacks
select {
case <-done:
case <-time.After(1 * time.Second):
panic("timeout")
}
}


@@ -1,254 +1,544 @@
package nebula
import (
"bytes"
"context"
"encoding/binary"
"fmt"
"net/netip"
"sync"
"sync/atomic"
"time"
"github.com/rcrowley/go-metrics"
"github.com/sirupsen/logrus"
"github.com/slackhq/nebula/cert"
"github.com/slackhq/nebula/config"
"github.com/slackhq/nebula/header"
)
// TODO: incount and outcount are intended as a shortcut to locking the mutexes for every single packet
// and something like every 10 packets we could lock, send 10, then unlock for a moment
type trafficDecision int
const (
doNothing trafficDecision = 0
deleteTunnel trafficDecision = 1 // delete the hostinfo on our side, do not notify the remote
closeTunnel trafficDecision = 2 // delete the hostinfo and notify the remote
swapPrimary trafficDecision = 3
migrateRelays trafficDecision = 4
tryRehandshake trafficDecision = 5
sendTestPacket trafficDecision = 6
)
type connectionManager struct {
// relayUsed holds which relay localIndexs are in use
relayUsed map[uint32]struct{}
relayUsedLock *sync.RWMutex
hostMap *HostMap
in map[uint32]struct{}
inLock *sync.RWMutex
inCount int
out map[uint32]struct{}
outLock *sync.RWMutex
outCount int
TrafficTimer *SystemTimerWheel
trafficTimer *LockingTimerWheel[uint32]
intf *Interface
punchy *Punchy
pendingDeletion map[uint32]int
pendingDeletionLock *sync.RWMutex
pendingDeletionTimer *SystemTimerWheel
// Configuration settings
checkInterval time.Duration
pendingDeletionInterval time.Duration
inactivityTimeout atomic.Int64
dropInactive atomic.Bool
checkInterval int
pendingDeletionInterval int
metricsTxPunchy metrics.Counter
// I wanted to call one matLock
l *logrus.Logger
}
func newConnectionManager(intf *Interface, checkInterval, pendingDeletionInterval int) *connectionManager {
nc := &connectionManager{
hostMap: intf.hostMap,
in: make(map[uint32]struct{}),
inLock: &sync.RWMutex{},
inCount: 0,
out: make(map[uint32]struct{}),
outLock: &sync.RWMutex{},
outCount: 0,
TrafficTimer: NewSystemTimerWheel(time.Millisecond*500, time.Second*60),
intf: intf,
pendingDeletion: make(map[uint32]int),
pendingDeletionLock: &sync.RWMutex{},
pendingDeletionTimer: NewSystemTimerWheel(time.Millisecond*500, time.Second*60),
checkInterval: checkInterval,
pendingDeletionInterval: pendingDeletionInterval,
func newConnectionManagerFromConfig(l *logrus.Logger, c *config.C, hm *HostMap, p *Punchy) *connectionManager {
cm := &connectionManager{
hostMap: hm,
l: l,
punchy: p,
relayUsed: make(map[uint32]struct{}),
relayUsedLock: &sync.RWMutex{},
metricsTxPunchy: metrics.GetOrRegisterCounter("messages.tx.punchy", nil),
}
nc.Start()
return nc
cm.reload(c, true)
c.RegisterReloadCallback(func(c *config.C) {
cm.reload(c, false)
})
return cm
}
func (n *connectionManager) In(ip uint32) {
n.inLock.RLock()
// If this already exists, return
if _, ok := n.in[ip]; ok {
n.inLock.RUnlock()
return
func (cm *connectionManager) reload(c *config.C, initial bool) {
if initial {
cm.checkInterval = time.Duration(c.GetInt("timers.connection_alive_interval", 5)) * time.Second
cm.pendingDeletionInterval = time.Duration(c.GetInt("timers.pending_deletion_interval", 10)) * time.Second
// We want at least a minimum resolution of 500ms per tick so that we can hit these intervals
// pretty close to their configured duration.
// The inactivity duration is checked each time a hostinfo ticks through so we don't need the wheel to contain it.
minDuration := min(time.Millisecond*500, cm.checkInterval, cm.pendingDeletionInterval)
maxDuration := max(cm.checkInterval, cm.pendingDeletionInterval)
cm.trafficTimer = NewLockingTimerWheel[uint32](minDuration, maxDuration)
}
n.inLock.RUnlock()
n.inLock.Lock()
n.in[ip] = struct{}{}
n.inLock.Unlock()
}
func (n *connectionManager) Out(ip uint32) {
n.outLock.RLock()
// If this already exists, return
if _, ok := n.out[ip]; ok {
n.outLock.RUnlock()
return
}
n.outLock.RUnlock()
n.outLock.Lock()
// double check since we dropped the lock temporarily
if _, ok := n.out[ip]; ok {
n.outLock.Unlock()
return
}
n.out[ip] = struct{}{}
n.AddTrafficWatch(ip, n.checkInterval)
n.outLock.Unlock()
}
func (n *connectionManager) CheckIn(vpnIP uint32) bool {
n.inLock.RLock()
if _, ok := n.in[vpnIP]; ok {
n.inLock.RUnlock()
return true
}
n.inLock.RUnlock()
return false
}
func (n *connectionManager) ClearIP(ip uint32) {
n.inLock.Lock()
n.outLock.Lock()
delete(n.in, ip)
delete(n.out, ip)
n.inLock.Unlock()
n.outLock.Unlock()
}
func (n *connectionManager) ClearPendingDeletion(ip uint32) {
n.pendingDeletionLock.Lock()
delete(n.pendingDeletion, ip)
n.pendingDeletionLock.Unlock()
}
func (n *connectionManager) AddPendingDeletion(ip uint32) {
n.pendingDeletionLock.Lock()
if _, ok := n.pendingDeletion[ip]; ok {
n.pendingDeletion[ip] += 1
} else {
n.pendingDeletion[ip] = 0
}
n.pendingDeletionTimer.Add(ip, time.Second*time.Duration(n.pendingDeletionInterval))
n.pendingDeletionLock.Unlock()
}
func (n *connectionManager) checkPendingDeletion(ip uint32) bool {
n.pendingDeletionLock.RLock()
if _, ok := n.pendingDeletion[ip]; ok {
n.pendingDeletionLock.RUnlock()
return true
}
n.pendingDeletionLock.RUnlock()
return false
}
func (n *connectionManager) AddTrafficWatch(vpnIP uint32, seconds int) {
n.TrafficTimer.Add(vpnIP, time.Second*time.Duration(seconds))
}
func (n *connectionManager) Start() {
go n.Run()
}
func (n *connectionManager) Run() {
clockSource := time.Tick(500 * time.Millisecond)
for now := range clockSource {
n.HandleMonitorTick(now)
n.HandleDeletionTick(now)
}
}
func (n *connectionManager) HandleMonitorTick(now time.Time) {
n.TrafficTimer.advance(now)
for {
ep := n.TrafficTimer.Purge()
if ep == nil {
break
if initial || c.HasChanged("tunnels.inactivity_timeout") {
old := cm.getInactivityTimeout()
cm.inactivityTimeout.Store((int64)(c.GetDuration("tunnels.inactivity_timeout", 10*time.Minute)))
if !initial {
cm.l.WithField("oldDuration", old).
WithField("newDuration", cm.getInactivityTimeout()).
Info("Inactivity timeout has changed")
}
}
vpnIP := ep.(uint32)
if initial || c.HasChanged("tunnels.drop_inactive") {
old := cm.dropInactive.Load()
cm.dropInactive.Store(c.GetBool("tunnels.drop_inactive", false))
if !initial {
cm.l.WithField("oldBool", old).
WithField("newBool", cm.dropInactive.Load()).
Info("Drop inactive setting has changed")
}
}
}
// Check for traffic coming back in from this host.
traf := n.CheckIn(vpnIP)
func (cm *connectionManager) getInactivityTimeout() time.Duration {
return (time.Duration)(cm.inactivityTimeout.Load())
}
// If we saw incoming packets from this ip, just return
if traf {
if l.Level >= logrus.DebugLevel {
l.WithField("vpnIp", IntIp(vpnIP)).
WithField("tunnelCheck", m{"state": "alive", "method": "passive"}).
Debug("Tunnel status")
func (cm *connectionManager) In(h *HostInfo) {
h.in.Store(true)
}
func (cm *connectionManager) Out(h *HostInfo) {
h.out.Store(true)
}
func (cm *connectionManager) RelayUsed(localIndex uint32) {
cm.relayUsedLock.RLock()
// If this already exists, return
if _, ok := cm.relayUsed[localIndex]; ok {
cm.relayUsedLock.RUnlock()
return
}
cm.relayUsedLock.RUnlock()
cm.relayUsedLock.Lock()
cm.relayUsed[localIndex] = struct{}{}
cm.relayUsedLock.Unlock()
}
// getAndResetTrafficCheck returns if there was any inbound or outbound traffic within the last tick and
// resets the state for this local index
func (cm *connectionManager) getAndResetTrafficCheck(h *HostInfo, now time.Time) (bool, bool) {
in := h.in.Swap(false)
out := h.out.Swap(false)
if in || out {
h.lastUsed = now
}
return in, out
}
// AddTrafficWatch must be called for every new HostInfo.
// We will continue to monitor the HostInfo until the tunnel is dropped.
func (cm *connectionManager) AddTrafficWatch(h *HostInfo) {
if h.out.Swap(true) == false {
cm.trafficTimer.Add(h.localIndexId, cm.checkInterval)
}
}
func (cm *connectionManager) Start(ctx context.Context) {
clockSource := time.NewTicker(cm.trafficTimer.t.tickDuration)
defer clockSource.Stop()
p := []byte("")
nb := make([]byte, 12, 12)
out := make([]byte, mtu)
for {
select {
case <-ctx.Done():
return
case now := <-clockSource.C:
cm.trafficTimer.Advance(now)
for {
localIndex, has := cm.trafficTimer.Purge()
if !has {
break
}
cm.doTrafficCheck(localIndex, p, nb, out, now)
}
n.ClearIP(vpnIP)
n.ClearPendingDeletion(vpnIP)
continue
}
// If we didn't we may need to probe or destroy the conn
hostinfo, err := n.hostMap.QueryVpnIP(vpnIP)
if err != nil {
l.Debugf("Not found in hostmap: %s", IntIp(vpnIP))
n.ClearIP(vpnIP)
n.ClearPendingDeletion(vpnIP)
continue
}
hostinfo.logger().
WithField("tunnelCheck", m{"state": "testing", "method": "active"}).
Debug("Tunnel status")
if hostinfo != nil && hostinfo.ConnectionState != nil {
// Send a test packet to trigger an authenticated tunnel test, this should suss out any lingering tunnel issues
n.intf.SendMessageToVpnIp(test, testRequest, vpnIP, []byte(""), make([]byte, 12, 12), make([]byte, mtu))
} else {
hostinfo.logger().Debugf("Hostinfo sadness: %s", IntIp(vpnIP))
}
n.AddPendingDeletion(vpnIP)
}
}
func (n *connectionManager) HandleDeletionTick(now time.Time) {
n.pendingDeletionTimer.advance(now)
for {
ep := n.pendingDeletionTimer.Purge()
if ep == nil {
break
func (cm *connectionManager) doTrafficCheck(localIndex uint32, p, nb, out []byte, now time.Time) {
decision, hostinfo, primary := cm.makeTrafficDecision(localIndex, now)
switch decision {
case deleteTunnel:
if cm.hostMap.DeleteHostInfo(hostinfo) {
// Only clearing the lighthouse cache if this is the last hostinfo for this vpn ip in the hostmap
cm.intf.lightHouse.DeleteVpnAddrs(hostinfo.vpnAddrs)
}
vpnIP := ep.(uint32)
case closeTunnel:
cm.intf.sendCloseTunnel(hostinfo)
cm.intf.closeTunnel(hostinfo)
// If we saw incoming packets from this ip, just return
traf := n.CheckIn(vpnIP)
if traf {
l.WithField("vpnIp", IntIp(vpnIP)).
WithField("tunnelCheck", m{"state": "alive", "method": "active"}).
case swapPrimary:
cm.swapPrimary(hostinfo, primary)
case migrateRelays:
cm.migrateRelayUsed(hostinfo, primary)
case tryRehandshake:
cm.tryRehandshake(hostinfo)
case sendTestPacket:
cm.intf.SendMessageToHostInfo(header.Test, header.TestRequest, hostinfo, p, nb, out)
}
cm.resetRelayTrafficCheck(hostinfo)
}
func (cm *connectionManager) resetRelayTrafficCheck(hostinfo *HostInfo) {
if hostinfo != nil {
cm.relayUsedLock.Lock()
defer cm.relayUsedLock.Unlock()
// No need to migrate any relays, delete usage info now.
for _, idx := range hostinfo.relayState.CopyRelayForIdxs() {
delete(cm.relayUsed, idx)
}
}
}
func (cm *connectionManager) migrateRelayUsed(oldhostinfo, newhostinfo *HostInfo) {
relayFor := oldhostinfo.relayState.CopyAllRelayFor()
for _, r := range relayFor {
existing, ok := newhostinfo.relayState.QueryRelayForByIp(r.PeerAddr)
var index uint32
var relayFrom netip.Addr
var relayTo netip.Addr
switch {
case ok:
switch existing.State {
case Established, PeerRequested, Disestablished:
// This relay already exists in newhostinfo; do nothing.
continue
case Requested:
// The relay exists in a Requested state; re-send the request
index = existing.LocalIndex
switch r.Type {
case TerminalType:
relayFrom = cm.intf.myVpnAddrs[0]
relayTo = existing.PeerAddr
case ForwardingType:
relayFrom = existing.PeerAddr
relayTo = newhostinfo.vpnAddrs[0]
default:
// should never happen
panic(fmt.Sprintf("Migrating unknown relay type: %v", r.Type))
}
}
case !ok:
cm.relayUsedLock.RLock()
if _, relayUsed := cm.relayUsed[r.LocalIndex]; !relayUsed {
// The relay hasn't been used; don't migrate it.
cm.relayUsedLock.RUnlock()
continue
}
cm.relayUsedLock.RUnlock()
// The relay doesn't exist at all; create some relay state and send the request.
var err error
index, err = AddRelay(cm.l, newhostinfo, cm.hostMap, r.PeerAddr, nil, r.Type, Requested)
if err != nil {
cm.l.WithError(err).Error("failed to migrate relay to new hostinfo")
continue
}
switch r.Type {
case TerminalType:
relayFrom = cm.intf.myVpnAddrs[0]
relayTo = r.PeerAddr
case ForwardingType:
relayFrom = r.PeerAddr
relayTo = newhostinfo.vpnAddrs[0]
default:
// should never happen
panic(fmt.Sprintf("Migrating unknown relay type: %v", r.Type))
}
}
// Send a CreateRelayRequest to the peer.
req := NebulaControl{
Type: NebulaControl_CreateRelayRequest,
InitiatorRelayIndex: index,
}
switch newhostinfo.GetCert().Certificate.Version() {
case cert.Version1:
if !relayFrom.Is4() {
cm.l.Error("can not migrate v1 relay with a v6 network because the relay is not running a current nebula version")
continue
}
if !relayTo.Is4() {
cm.l.Error("can not migrate v1 relay with a v6 remote network because the relay is not running a current nebula version")
continue
}
b := relayFrom.As4()
req.OldRelayFromAddr = binary.BigEndian.Uint32(b[:])
b = relayTo.As4()
req.OldRelayToAddr = binary.BigEndian.Uint32(b[:])
case cert.Version2:
req.RelayFromAddr = netAddrToProtoAddr(relayFrom)
req.RelayToAddr = netAddrToProtoAddr(relayTo)
default:
newhostinfo.logger(cm.l).Error("Unknown certificate version found while attempting to migrate relay")
continue
}
msg, err := req.Marshal()
if err != nil {
cm.l.WithError(err).Error("failed to marshal Control message to migrate relay")
} else {
cm.intf.SendMessageToHostInfo(header.Control, 0, newhostinfo, msg, make([]byte, 12), make([]byte, mtu))
cm.l.WithFields(logrus.Fields{
"relayFrom": req.RelayFromAddr,
"relayTo": req.RelayToAddr,
"initiatorRelayIndex": req.InitiatorRelayIndex,
"responderRelayIndex": req.ResponderRelayIndex,
"vpnAddrs": newhostinfo.vpnAddrs}).
Info("send CreateRelayRequest")
}
}
}
func (cm *connectionManager) makeTrafficDecision(localIndex uint32, now time.Time) (trafficDecision, *HostInfo, *HostInfo) {
// Read lock the main hostmap to order decisions based on tunnels being the primary tunnel
cm.hostMap.RLock()
defer cm.hostMap.RUnlock()
hostinfo := cm.hostMap.Indexes[localIndex]
if hostinfo == nil {
cm.l.WithField("localIndex", localIndex).Debugln("Not found in hostmap")
return doNothing, nil, nil
}
if cm.isInvalidCertificate(now, hostinfo) {
return closeTunnel, hostinfo, nil
}
primary := cm.hostMap.Hosts[hostinfo.vpnAddrs[0]]
mainHostInfo := true
if primary != nil && primary != hostinfo {
mainHostInfo = false
}
// Check for traffic on this hostinfo
inTraffic, outTraffic := cm.getAndResetTrafficCheck(hostinfo, now)
// A hostinfo is determined to be alive if there is incoming traffic
if inTraffic {
decision := doNothing
if cm.l.Level >= logrus.DebugLevel {
hostinfo.logger(cm.l).
WithField("tunnelCheck", m{"state": "alive", "method": "passive"}).
Debug("Tunnel status")
n.ClearIP(vpnIP)
n.ClearPendingDeletion(vpnIP)
continue
}
hostinfo.pendingDeletion.Store(false)
hostinfo, err := n.hostMap.QueryVpnIP(vpnIP)
if err != nil {
n.ClearIP(vpnIP)
n.ClearPendingDeletion(vpnIP)
l.Debugf("Not found in hostmap: %s", IntIp(vpnIP))
continue
}
if mainHostInfo {
decision = tryRehandshake
// If it comes around on deletion wheel and hasn't resolved itself, delete
if n.checkPendingDeletion(vpnIP) {
cn := ""
if hostinfo.ConnectionState != nil && hostinfo.ConnectionState.peerCert != nil {
cn = hostinfo.ConnectionState.peerCert.Details.Name
}
hostinfo.logger().
WithField("tunnelCheck", m{"state": "dead", "method": "active"}).
WithField("certName", cn).
Info("Tunnel status")
n.ClearIP(vpnIP)
n.ClearPendingDeletion(vpnIP)
// TODO: This is only here to let tests work. Should do proper mocking
if n.intf.lightHouse != nil {
n.intf.lightHouse.DeleteVpnIP(vpnIP)
}
n.hostMap.DeleteVpnIP(vpnIP)
n.hostMap.DeleteIndex(hostinfo.localIndexId)
} else {
n.ClearIP(vpnIP)
n.ClearPendingDeletion(vpnIP)
if cm.shouldSwapPrimary(hostinfo, primary) {
decision = swapPrimary
} else {
// migrate the relays to the primary, if in use.
decision = migrateRelays
}
}
cm.trafficTimer.Add(hostinfo.localIndexId, cm.checkInterval)
if !outTraffic {
// Send a punch packet to keep the NAT state alive
cm.sendPunch(hostinfo)
}
return decision, hostinfo, primary
}
if hostinfo.pendingDeletion.Load() {
// We have already sent a test packet and nothing was returned, this hostinfo is dead
hostinfo.logger(cm.l).
WithField("tunnelCheck", m{"state": "dead", "method": "active"}).
Info("Tunnel status")
return deleteTunnel, hostinfo, nil
}
decision := doNothing
if hostinfo != nil && hostinfo.ConnectionState != nil && mainHostInfo {
if !outTraffic {
inactiveFor, isInactive := cm.isInactive(hostinfo, now)
if isInactive {
// Tunnel is inactive, tear it down
hostinfo.logger(cm.l).
WithField("inactiveDuration", inactiveFor).
WithField("primary", mainHostInfo).
Info("Dropping tunnel due to inactivity")
return closeTunnel, hostinfo, primary
}
// If we aren't sending or receiving traffic then it's an unused tunnel and we don't need to test it.
// Just maintain NAT state if configured to do so.
cm.sendPunch(hostinfo)
cm.trafficTimer.Add(hostinfo.localIndexId, cm.checkInterval)
return doNothing, nil, nil
}
if cm.punchy.GetTargetEverything() {
// This is similar to the old punchy behavior with a slight optimization.
// We aren't receiving traffic but we are sending it, punch on all known
// ips in case we need to re-prime NAT state
cm.sendPunch(hostinfo)
}
if cm.l.Level >= logrus.DebugLevel {
hostinfo.logger(cm.l).
WithField("tunnelCheck", m{"state": "testing", "method": "active"}).
Debug("Tunnel status")
}
// Send a test packet to trigger an authenticated tunnel test, this should suss out any lingering tunnel issues
decision = sendTestPacket
} else {
if cm.l.Level >= logrus.DebugLevel {
hostinfo.logger(cm.l).Debugf("Hostinfo sadness")
}
}
hostinfo.pendingDeletion.Store(true)
cm.trafficTimer.Add(hostinfo.localIndexId, cm.pendingDeletionInterval)
return decision, hostinfo, nil
}
func (cm *connectionManager) isInactive(hostinfo *HostInfo, now time.Time) (time.Duration, bool) {
if cm.dropInactive.Load() == false {
// We aren't configured to drop inactive tunnels
return 0, false
}
inactiveDuration := now.Sub(hostinfo.lastUsed)
if inactiveDuration < cm.getInactivityTimeout() {
// It's not considered inactive
return inactiveDuration, false
}
// The tunnel is inactive
return inactiveDuration, true
}
func (cm *connectionManager) shouldSwapPrimary(current, primary *HostInfo) bool {
// The primary tunnel is the most recent handshake to complete locally and should work entirely fine.
// If we are here then we have multiple tunnels for a host pair and neither side believes the same tunnel is primary.
// Let's sort this out.
// Only one side should swap because if both swap then we may never resolve to a single tunnel.
// vpn addr is static across all tunnels for this host pair so let's
// use that to determine if we should consider swapping.
if current.vpnAddrs[0].Compare(cm.intf.myVpnAddrs[0]) < 0 {
// Their primary vpn addr is less than mine. Do not swap.
return false
}
crt := cm.intf.pki.getCertState().getCertificate(current.ConnectionState.myCert.Version())
// If this tunnel is using the latest certificate then we should swap it to primary for a bit and see if things
// settle down.
return bytes.Equal(current.ConnectionState.myCert.Signature(), crt.Signature())
}
func (cm *connectionManager) swapPrimary(current, primary *HostInfo) {
cm.hostMap.Lock()
// Make sure the primary is still the same after the write lock. This avoids a race with a rehandshake.
if cm.hostMap.Hosts[current.vpnAddrs[0]] == primary {
cm.hostMap.unlockedMakePrimary(current)
}
cm.hostMap.Unlock()
}
// isInvalidCertificate will check if we should destroy a tunnel if pki.disconnect_invalid is true and
// the certificate is no longer valid. Block listed certificates will skip the pki.disconnect_invalid
// check and return true.
func (cm *connectionManager) isInvalidCertificate(now time.Time, hostinfo *HostInfo) bool {
remoteCert := hostinfo.GetCert()
if remoteCert == nil {
return false
}
caPool := cm.intf.pki.GetCAPool()
err := caPool.VerifyCachedCertificate(now, remoteCert)
if err == nil {
return false
}
if !cm.intf.disconnectInvalid.Load() && err != cert.ErrBlockListed {
// Block listed certificates should always be disconnected
return false
}
hostinfo.logger(cm.l).WithError(err).
WithField("fingerprint", remoteCert.Fingerprint).
Info("Remote certificate is no longer valid, tearing down the tunnel")
return true
}
func (cm *connectionManager) sendPunch(hostinfo *HostInfo) {
if !cm.punchy.GetPunch() {
// Punching is disabled
return
}
if cm.intf.lightHouse.IsAnyLighthouseAddr(hostinfo.vpnAddrs) {
// Do not punch to lighthouses, we assume our lighthouse update interval is good enough.
// In the event the update interval is not sufficient to maintain NAT state then a publicly available lighthouse
// would lose the ability to notify us and punchy.respond would become unreliable.
return
}
if cm.punchy.GetTargetEverything() {
hostinfo.remotes.ForEach(cm.hostMap.GetPreferredRanges(), func(addr netip.AddrPort, preferred bool) {
cm.metricsTxPunchy.Inc(1)
cm.intf.outside.WriteTo([]byte{1}, addr)
})
} else if hostinfo.remote.IsValid() {
cm.metricsTxPunchy.Inc(1)
cm.intf.outside.WriteTo([]byte{1}, hostinfo.remote)
}
}
func (cm *connectionManager) tryRehandshake(hostinfo *HostInfo) {
cs := cm.intf.pki.getCertState()
curCrt := hostinfo.ConnectionState.myCert
myCrt := cs.getCertificate(curCrt.Version())
if curCrt.Version() >= cs.initiatingVersion && bytes.Equal(curCrt.Signature(), myCrt.Signature()) == true {
// The current tunnel is using the latest certificate and version, no need to rehandshake.
return
}
cm.l.WithField("vpnAddrs", hostinfo.vpnAddrs).
WithField("reason", "local certificate is not current").
Info("Re-handshaking with remote")
cm.intf.handshakeManager.StartHandshake(hostinfo.vpnAddrs[0], nil)
}
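For reference, the four config keys consumed by `reload` and `isInactive` above can be supplied through `config.C` in the same style the tests below use. This is only a minimal sketch; the interval and timeout values are illustrative, and `l` and `hostMap` are assumed to exist as they do in the tests:

```go
conf := config.NewC(l)
conf.Settings["timers"] = map[string]any{
	"connection_alive_interval": 5,  // seconds, read once at startup
	"pending_deletion_interval": 10, // seconds, read once at startup
}
conf.Settings["tunnels"] = map[string]any{
	"drop_inactive":      true,  // enables the isInactive check
	"inactivity_timeout": "30m", // read via GetDuration; defaults to 10m when unset
}
punchy := NewPunchyFromConfig(l, conf)
cm := newConnectionManagerFromConfig(l, conf, hostMap, punchy)
_ = cm // the tests below additionally set cm.intf before driving traffic checks
```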


@@ -1,142 +1,503 @@
package nebula
import (
"net"
"crypto/ed25519"
"crypto/rand"
"net/netip"
"testing"
"time"
"github.com/flynn/noise"
"github.com/slackhq/nebula/cert"
"github.com/slackhq/nebula/config"
"github.com/slackhq/nebula/test"
"github.com/slackhq/nebula/udp"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
var vpnIP uint32
func newTestLighthouse() *LightHouse {
lh := &LightHouse{
l: test.NewLogger(),
addrMap: map[netip.Addr]*RemoteList{},
queryChan: make(chan netip.Addr, 10),
}
lighthouses := map[netip.Addr]struct{}{}
staticList := map[netip.Addr]struct{}{}
lh.lighthouses.Store(&lighthouses)
lh.staticList.Store(&staticList)
return lh
}
func Test_NewConnectionManagerTest(t *testing.T) {
l := test.NewLogger()
//_, tuncidr, _ := net.ParseCIDR("1.1.1.1/24")
_, vpncidr, _ := net.ParseCIDR("172.1.1.1/24")
_, localrange, _ := net.ParseCIDR("10.1.1.1/24")
vpnIP = ip2int(net.ParseIP("172.1.1.2"))
preferredRanges := []*net.IPNet{localrange}
localrange := netip.MustParsePrefix("10.1.1.1/24")
vpnIp := netip.MustParseAddr("172.1.1.2")
preferredRanges := []netip.Prefix{localrange}
// Very incomplete mock objects
hostMap := NewHostMap("test", vpncidr, preferredRanges)
hostMap := newHostMap(l)
hostMap.preferredRanges.Store(&preferredRanges)
cs := &CertState{
rawCertificate: []byte{},
privateKey: []byte{},
certificate: &cert.NebulaCertificate{},
rawCertificateNoKey: []byte{},
initiatingVersion: cert.Version1,
privateKey: []byte{},
v1Cert: &dummyCert{version: cert.Version1},
v1HandshakeBytes: []byte{},
}
lh := NewLightHouse(false, 0, []uint32{}, 1000, 0, &udpConn{}, false, 1)
lh := newTestLighthouse()
ifce := &Interface{
hostMap: hostMap,
inside: &Tun{},
outside: &udpConn{},
certState: cs,
inside: &test.NoopTun{},
outside: &udp.NoopConn{},
firewall: &Firewall{},
lightHouse: lh,
handshakeManager: NewHandshakeManager(vpncidr, preferredRanges, hostMap, lh, &udpConn{}, defaultHandshakeConfig),
pki: &PKI{},
handshakeManager: NewHandshakeManager(l, hostMap, lh, &udp.NoopConn{}, defaultHandshakeConfig),
l: l,
}
now := time.Now()
ifce.pki.cs.Store(cs)
// Create manager
nc := newConnectionManager(ifce, 5, 10)
nc.HandleMonitorTick(now)
conf := config.NewC(l)
punchy := NewPunchyFromConfig(l, conf)
nc := newConnectionManagerFromConfig(l, conf, hostMap, punchy)
nc.intf = ifce
p := []byte("")
nb := make([]byte, 12, 12)
out := make([]byte, mtu)
// Add an ip we have established a connection w/ to hostmap
hostinfo := nc.hostMap.AddVpnIP(vpnIP)
hostinfo.ConnectionState = &ConnectionState{
certState: cs,
H: &noise.HandshakeState{},
messageCounter: new(uint64),
hostinfo := &HostInfo{
vpnAddrs: []netip.Addr{vpnIp},
localIndexId: 1099,
remoteIndexId: 9901,
}
hostinfo.ConnectionState = &ConnectionState{
myCert: &dummyCert{version: cert.Version1},
H: &noise.HandshakeState{},
}
nc.hostMap.unlockedAddHostInfo(hostinfo, ifce)
// We saw traffic out to vpnIP
nc.Out(vpnIP)
assert.NotContains(t, nc.pendingDeletion, vpnIP)
assert.Contains(t, nc.hostMap.Hosts, vpnIP)
// Move ahead 5s. Nothing should happen
next_tick := now.Add(5 * time.Second)
nc.HandleMonitorTick(next_tick)
nc.HandleDeletionTick(next_tick)
// Move ahead 6s. We haven't heard back
next_tick = now.Add(6 * time.Second)
nc.HandleMonitorTick(next_tick)
nc.HandleDeletionTick(next_tick)
// This host should now be up for deletion
assert.Contains(t, nc.pendingDeletion, vpnIP)
assert.Contains(t, nc.hostMap.Hosts, vpnIP)
// Move ahead some more
next_tick = now.Add(45 * time.Second)
nc.HandleMonitorTick(next_tick)
nc.HandleDeletionTick(next_tick)
// The host should be evicted
assert.NotContains(t, nc.pendingDeletion, vpnIP)
assert.NotContains(t, nc.hostMap.Hosts, vpnIP)
// We saw traffic out to vpnIp
nc.Out(hostinfo)
nc.In(hostinfo)
assert.False(t, hostinfo.pendingDeletion.Load())
assert.Contains(t, nc.hostMap.Hosts, hostinfo.vpnAddrs[0])
assert.Contains(t, nc.hostMap.Indexes, hostinfo.localIndexId)
assert.True(t, hostinfo.out.Load())
assert.True(t, hostinfo.in.Load())
// Do a traffic check tick, should not be pending deletion but should not have any in/out packets recorded
nc.doTrafficCheck(hostinfo.localIndexId, p, nb, out, time.Now())
assert.False(t, hostinfo.pendingDeletion.Load())
assert.False(t, hostinfo.out.Load())
assert.False(t, hostinfo.in.Load())
// Do another traffic check tick, this host should be pending deletion now
nc.Out(hostinfo)
assert.True(t, hostinfo.out.Load())
nc.doTrafficCheck(hostinfo.localIndexId, p, nb, out, time.Now())
assert.True(t, hostinfo.pendingDeletion.Load())
assert.False(t, hostinfo.out.Load())
assert.False(t, hostinfo.in.Load())
assert.Contains(t, nc.hostMap.Indexes, hostinfo.localIndexId)
assert.Contains(t, nc.hostMap.Hosts, hostinfo.vpnAddrs[0])
// Do a final traffic check tick, the host should now be removed
nc.doTrafficCheck(hostinfo.localIndexId, p, nb, out, time.Now())
assert.NotContains(t, nc.hostMap.Hosts, hostinfo.vpnAddrs)
assert.NotContains(t, nc.hostMap.Indexes, hostinfo.localIndexId)
}
func Test_NewConnectionManagerTest2(t *testing.T) {
l := test.NewLogger()
//_, tuncidr, _ := net.ParseCIDR("1.1.1.1/24")
_, vpncidr, _ := net.ParseCIDR("172.1.1.1/24")
_, localrange, _ := net.ParseCIDR("10.1.1.1/24")
preferredRanges := []*net.IPNet{localrange}
localrange := netip.MustParsePrefix("10.1.1.1/24")
vpnIp := netip.MustParseAddr("172.1.1.2")
preferredRanges := []netip.Prefix{localrange}
// Very incomplete mock objects
hostMap := NewHostMap("test", vpncidr, preferredRanges)
hostMap := newHostMap(l)
hostMap.preferredRanges.Store(&preferredRanges)
cs := &CertState{
rawCertificate: []byte{},
privateKey: []byte{},
certificate: &cert.NebulaCertificate{},
rawCertificateNoKey: []byte{},
initiatingVersion: cert.Version1,
privateKey: []byte{},
v1Cert: &dummyCert{version: cert.Version1},
v1HandshakeBytes: []byte{},
}
lh := NewLightHouse(false, 0, []uint32{}, 1000, 0, &udpConn{}, false, 1)
lh := newTestLighthouse()
ifce := &Interface{
hostMap: hostMap,
inside: &Tun{},
outside: &udpConn{},
certState: cs,
inside: &test.NoopTun{},
outside: &udp.NoopConn{},
firewall: &Firewall{},
lightHouse: lh,
handshakeManager: NewHandshakeManager(vpncidr, preferredRanges, hostMap, lh, &udpConn{}, defaultHandshakeConfig),
pki: &PKI{},
handshakeManager: NewHandshakeManager(l, hostMap, lh, &udp.NoopConn{}, defaultHandshakeConfig),
l: l,
}
now := time.Now()
ifce.pki.cs.Store(cs)
// Create manager
nc := newConnectionManager(ifce, 5, 10)
nc.HandleMonitorTick(now)
conf := config.NewC(l)
punchy := NewPunchyFromConfig(l, conf)
nc := newConnectionManagerFromConfig(l, conf, hostMap, punchy)
nc.intf = ifce
p := []byte("")
nb := make([]byte, 12, 12)
out := make([]byte, mtu)
// Add an ip we have established a connection w/ to hostmap
hostinfo := nc.hostMap.AddVpnIP(vpnIP)
hostinfo := &HostInfo{
vpnAddrs: []netip.Addr{vpnIp},
localIndexId: 1099,
remoteIndexId: 9901,
}
hostinfo.ConnectionState = &ConnectionState{
certState: cs,
H: &noise.HandshakeState{},
messageCounter: new(uint64),
myCert: &dummyCert{version: cert.Version1},
H: &noise.HandshakeState{},
}
nc.hostMap.unlockedAddHostInfo(hostinfo, ifce)
// We saw traffic out to vpnIp
nc.Out(hostinfo)
nc.In(hostinfo)
assert.True(t, hostinfo.in.Load())
assert.True(t, hostinfo.out.Load())
assert.False(t, hostinfo.pendingDeletion.Load())
assert.Contains(t, nc.hostMap.Hosts, hostinfo.vpnAddrs[0])
assert.Contains(t, nc.hostMap.Indexes, hostinfo.localIndexId)
// Do a traffic check tick, should not be pending deletion but should not have any in/out packets recorded
nc.doTrafficCheck(hostinfo.localIndexId, p, nb, out, time.Now())
assert.False(t, hostinfo.pendingDeletion.Load())
assert.False(t, hostinfo.out.Load())
assert.False(t, hostinfo.in.Load())
// Do another traffic check tick, this host should be pending deletion now
nc.Out(hostinfo)
nc.doTrafficCheck(hostinfo.localIndexId, p, nb, out, time.Now())
assert.True(t, hostinfo.pendingDeletion.Load())
assert.False(t, hostinfo.out.Load())
assert.False(t, hostinfo.in.Load())
assert.Contains(t, nc.hostMap.Indexes, hostinfo.localIndexId)
assert.Contains(t, nc.hostMap.Hosts, hostinfo.vpnAddrs[0])
// We saw traffic, should no longer be pending deletion
nc.In(hostinfo)
nc.doTrafficCheck(hostinfo.localIndexId, p, nb, out, time.Now())
assert.False(t, hostinfo.pendingDeletion.Load())
assert.False(t, hostinfo.out.Load())
assert.False(t, hostinfo.in.Load())
assert.Contains(t, nc.hostMap.Indexes, hostinfo.localIndexId)
assert.Contains(t, nc.hostMap.Hosts, hostinfo.vpnAddrs[0])
}
func Test_NewConnectionManager_DisconnectInactive(t *testing.T) {
l := test.NewLogger()
localrange := netip.MustParsePrefix("10.1.1.1/24")
vpnAddrs := []netip.Addr{netip.MustParseAddr("172.1.1.2")}
preferredRanges := []netip.Prefix{localrange}
// Very incomplete mock objects
hostMap := newHostMap(l)
hostMap.preferredRanges.Store(&preferredRanges)
cs := &CertState{
initiatingVersion: cert.Version1,
privateKey: []byte{},
v1Cert: &dummyCert{version: cert.Version1},
v1HandshakeBytes: []byte{},
}
// We saw traffic out to vpnIP
nc.Out(vpnIP)
assert.NotContains(t, nc.pendingDeletion, vpnIP)
assert.Contains(t, nc.hostMap.Hosts, vpnIP)
// Move ahead 5s. Nothing should happen
next_tick := now.Add(5 * time.Second)
nc.HandleMonitorTick(next_tick)
nc.HandleDeletionTick(next_tick)
// Move ahead 6s. We haven't heard back
next_tick = now.Add(6 * time.Second)
nc.HandleMonitorTick(next_tick)
nc.HandleDeletionTick(next_tick)
// This host should now be up for deletion
assert.Contains(t, nc.pendingDeletion, vpnIP)
assert.Contains(t, nc.hostMap.Hosts, vpnIP)
// We heard back this time
nc.In(vpnIP)
// Move ahead some more
next_tick = now.Add(45 * time.Second)
nc.HandleMonitorTick(next_tick)
nc.HandleDeletionTick(next_tick)
// The host should be evicted
assert.NotContains(t, nc.pendingDeletion, vpnIP)
assert.Contains(t, nc.hostMap.Hosts, vpnIP)
lh := newTestLighthouse()
ifce := &Interface{
hostMap: hostMap,
inside: &test.NoopTun{},
outside: &udp.NoopConn{},
firewall: &Firewall{},
lightHouse: lh,
pki: &PKI{},
handshakeManager: NewHandshakeManager(l, hostMap, lh, &udp.NoopConn{}, defaultHandshakeConfig),
l: l,
}
ifce.pki.cs.Store(cs)
// Create manager
conf := config.NewC(l)
conf.Settings["tunnels"] = map[string]any{
"drop_inactive": true,
}
punchy := NewPunchyFromConfig(l, conf)
nc := newConnectionManagerFromConfig(l, conf, hostMap, punchy)
assert.True(t, nc.dropInactive.Load())
nc.intf = ifce
// Add an ip we have established a connection w/ to hostmap
hostinfo := &HostInfo{
vpnAddrs: vpnAddrs,
localIndexId: 1099,
remoteIndexId: 9901,
}
hostinfo.ConnectionState = &ConnectionState{
myCert: &dummyCert{version: cert.Version1},
H: &noise.HandshakeState{},
}
nc.hostMap.unlockedAddHostInfo(hostinfo, ifce)
// Do a traffic check tick, in and out should be cleared but should not be pending deletion
nc.Out(hostinfo)
nc.In(hostinfo)
assert.True(t, hostinfo.out.Load())
assert.True(t, hostinfo.in.Load())
now := time.Now()
decision, _, _ := nc.makeTrafficDecision(hostinfo.localIndexId, now)
assert.Equal(t, tryRehandshake, decision)
assert.Equal(t, now, hostinfo.lastUsed)
assert.False(t, hostinfo.pendingDeletion.Load())
assert.False(t, hostinfo.out.Load())
assert.False(t, hostinfo.in.Load())
decision, _, _ = nc.makeTrafficDecision(hostinfo.localIndexId, now.Add(time.Second*5))
assert.Equal(t, doNothing, decision)
assert.Equal(t, now, hostinfo.lastUsed)
assert.False(t, hostinfo.pendingDeletion.Load())
assert.False(t, hostinfo.out.Load())
assert.False(t, hostinfo.in.Load())
// Do another traffic check tick, should still not be pending deletion
decision, _, _ = nc.makeTrafficDecision(hostinfo.localIndexId, now.Add(time.Second*10))
assert.Equal(t, doNothing, decision)
assert.Equal(t, now, hostinfo.lastUsed)
assert.False(t, hostinfo.pendingDeletion.Load())
assert.False(t, hostinfo.out.Load())
assert.False(t, hostinfo.in.Load())
assert.Contains(t, nc.hostMap.Indexes, hostinfo.localIndexId)
assert.Contains(t, nc.hostMap.Hosts, hostinfo.vpnAddrs[0])
// Finally advance beyond the inactivity timeout
decision, _, _ = nc.makeTrafficDecision(hostinfo.localIndexId, now.Add(time.Minute*10))
assert.Equal(t, closeTunnel, decision)
assert.Equal(t, now, hostinfo.lastUsed)
assert.False(t, hostinfo.pendingDeletion.Load())
assert.False(t, hostinfo.out.Load())
assert.False(t, hostinfo.in.Load())
assert.Contains(t, nc.hostMap.Indexes, hostinfo.localIndexId)
assert.Contains(t, nc.hostMap.Hosts, hostinfo.vpnAddrs[0])
}
// Check if we can disconnect the peer.
// Validate if the peer's certificate is invalid (expired, etc.)
// Disconnect only if disconnectInvalid: true is set.
func Test_NewConnectionManagerTest_DisconnectInvalid(t *testing.T) {
now := time.Now()
l := test.NewLogger()
vpncidr := netip.MustParsePrefix("172.1.1.1/24")
localrange := netip.MustParsePrefix("10.1.1.1/24")
vpnIp := netip.MustParseAddr("172.1.1.2")
preferredRanges := []netip.Prefix{localrange}
hostMap := newHostMap(l)
hostMap.preferredRanges.Store(&preferredRanges)
// Generate keys for CA and peer's cert.
pubCA, privCA, _ := ed25519.GenerateKey(rand.Reader)
tbs := &cert.TBSCertificate{
Version: 1,
Name: "ca",
IsCA: true,
NotBefore: now,
NotAfter: now.Add(1 * time.Hour),
PublicKey: pubCA,
}
caCert, err := tbs.Sign(nil, cert.Curve_CURVE25519, privCA)
require.NoError(t, err)
ncp := cert.NewCAPool()
require.NoError(t, ncp.AddCA(caCert))
pubCrt, _, _ := ed25519.GenerateKey(rand.Reader)
tbs = &cert.TBSCertificate{
Version: 1,
Name: "host",
Networks: []netip.Prefix{vpncidr},
NotBefore: now,
NotAfter: now.Add(60 * time.Second),
PublicKey: pubCrt,
}
peerCert, err := tbs.Sign(caCert, cert.Curve_CURVE25519, privCA)
require.NoError(t, err)
cachedPeerCert, err := ncp.VerifyCertificate(now.Add(time.Second), peerCert)
cs := &CertState{
privateKey: []byte{},
v1Cert: &dummyCert{},
v1HandshakeBytes: []byte{},
}
lh := newTestLighthouse()
ifce := &Interface{
hostMap: hostMap,
inside: &test.NoopTun{},
outside: &udp.NoopConn{},
firewall: &Firewall{},
lightHouse: lh,
handshakeManager: NewHandshakeManager(l, hostMap, lh, &udp.NoopConn{}, defaultHandshakeConfig),
l: l,
pki: &PKI{},
}
ifce.pki.cs.Store(cs)
ifce.pki.caPool.Store(ncp)
ifce.disconnectInvalid.Store(true)
// Create manager
conf := config.NewC(l)
punchy := NewPunchyFromConfig(l, conf)
nc := newConnectionManagerFromConfig(l, conf, hostMap, punchy)
nc.intf = ifce
ifce.connectionManager = nc
hostinfo := &HostInfo{
vpnAddrs: []netip.Addr{vpnIp},
ConnectionState: &ConnectionState{
myCert: &dummyCert{},
peerCert: cachedPeerCert,
H: &noise.HandshakeState{},
},
}
nc.hostMap.unlockedAddHostInfo(hostinfo, ifce)
// Move ahead 45s.
// Check whether to disconnect due to an invalid certificate.
// Should be alive.
nextTick := now.Add(45 * time.Second)
invalid := nc.isInvalidCertificate(nextTick, hostinfo)
assert.False(t, invalid)
// Move ahead 61s.
// Check whether to disconnect due to an invalid certificate.
// Should be disconnected.
nextTick = now.Add(61 * time.Second)
invalid = nc.isInvalidCertificate(nextTick, hostinfo)
assert.True(t, invalid)
}
type dummyCert struct {
version cert.Version
curve cert.Curve
groups []string
isCa bool
issuer string
name string
networks []netip.Prefix
notAfter time.Time
notBefore time.Time
publicKey []byte
signature []byte
unsafeNetworks []netip.Prefix
}
func (d *dummyCert) Version() cert.Version {
return d.version
}
func (d *dummyCert) Curve() cert.Curve {
return d.curve
}
func (d *dummyCert) Groups() []string {
return d.groups
}
func (d *dummyCert) IsCA() bool {
return d.isCa
}
func (d *dummyCert) Issuer() string {
return d.issuer
}
func (d *dummyCert) Name() string {
return d.name
}
func (d *dummyCert) Networks() []netip.Prefix {
return d.networks
}
func (d *dummyCert) NotAfter() time.Time {
return d.notAfter
}
func (d *dummyCert) NotBefore() time.Time {
return d.notBefore
}
func (d *dummyCert) PublicKey() []byte {
return d.publicKey
}
func (d *dummyCert) Signature() []byte {
return d.signature
}
func (d *dummyCert) UnsafeNetworks() []netip.Prefix {
return d.unsafeNetworks
}
func (d *dummyCert) MarshalForHandshakes() ([]byte, error) {
return nil, nil
}
func (d *dummyCert) Sign(curve cert.Curve, key []byte) error {
return nil
}
func (d *dummyCert) CheckSignature(key []byte) bool {
return true
}
func (d *dummyCert) Expired(t time.Time) bool {
return false
}
func (d *dummyCert) CheckRootConstraints(signer cert.Certificate) error {
return nil
}
func (d *dummyCert) VerifyPrivateKey(curve cert.Curve, key []byte) error {
return nil
}
func (d *dummyCert) String() string {
return ""
}
func (d *dummyCert) Marshal() ([]byte, error) {
return nil, nil
}
func (d *dummyCert) MarshalPEM() ([]byte, error) {
return nil, nil
}
func (d *dummyCert) Fingerprint() (string, error) {
return "", nil
}
func (d *dummyCert) MarshalJSON() ([]byte, error) {
return nil, nil
}
func (d *dummyCert) Copy() cert.Certificate {
return d
}


@@ -3,10 +3,14 @@ package nebula
import (
"crypto/rand"
"encoding/json"
"fmt"
"sync"
"sync/atomic"
"github.com/flynn/noise"
"github.com/sirupsen/logrus"
"github.com/slackhq/nebula/cert"
"github.com/slackhq/nebula/noiseutil"
)
const ReplayWindow = 1024
@@ -15,61 +19,78 @@ type ConnectionState struct {
eKey *NebulaCipherState
dKey *NebulaCipherState
H *noise.HandshakeState
certState *CertState
peerCert *cert.NebulaCertificate
myCert cert.Certificate
peerCert *cert.CachedCertificate
initiator bool
messageCounter *uint64
messageCounter atomic.Uint64
window *Bits
queueLock sync.Mutex
writeLock sync.Mutex
ready bool
}
func (f *Interface) newConnectionState(initiator bool, pattern noise.HandshakePattern, psk []byte, pskStage int) *ConnectionState {
cs := noise.NewCipherSuite(noise.DH25519, noise.CipherAESGCM, noise.HashSHA256)
if f.cipher == "chachapoly" {
cs = noise.NewCipherSuite(noise.DH25519, noise.CipherChaChaPoly, noise.HashSHA256)
func NewConnectionState(l *logrus.Logger, cs *CertState, crt cert.Certificate, initiator bool, pattern noise.HandshakePattern) (*ConnectionState, error) {
var dhFunc noise.DHFunc
switch crt.Curve() {
case cert.Curve_CURVE25519:
dhFunc = noise.DH25519
case cert.Curve_P256:
if cs.pkcs11Backed {
dhFunc = noiseutil.DHP256PKCS11
} else {
dhFunc = noiseutil.DHP256
}
default:
return nil, fmt.Errorf("invalid curve: %s", crt.Curve())
}
curCertState := f.certState
static := noise.DHKey{Private: curCertState.privateKey, Public: curCertState.publicKey}
var ncs noise.CipherSuite
if cs.cipher == "chachapoly" {
ncs = noise.NewCipherSuite(dhFunc, noise.CipherChaChaPoly, noise.HashSHA256)
} else {
ncs = noise.NewCipherSuite(dhFunc, noiseutil.CipherAESGCM, noise.HashSHA256)
}
static := noise.DHKey{Private: cs.privateKey, Public: crt.PublicKey()}
b := NewBits(ReplayWindow)
// Clear out bit 0, we never transmit it and we don't want it showing as packet loss
b.Update(0)
// Clear out bit 0, we never transmit it, and we don't want it showing as packet loss
b.Update(l, 0)
hs, err := noise.NewHandshakeState(noise.Config{
CipherSuite: cs,
Random: rand.Reader,
Pattern: pattern,
Initiator: initiator,
StaticKeypair: static,
PresharedKey: psk,
PresharedKeyPlacement: pskStage,
CipherSuite: ncs,
Random: rand.Reader,
Pattern: pattern,
Initiator: initiator,
StaticKeypair: static,
//NOTE: These should come from CertState (pki.go) when we finally implement it
PresharedKey: []byte{},
PresharedKeyPlacement: 0,
})
if err != nil {
return nil
return nil, fmt.Errorf("NewConnectionState: %s", err)
}
// The queue and ready params prevent a counter race that would happen when
// sending stored packets and simultaneously accepting new traffic.
ci := &ConnectionState{
H: hs,
initiator: initiator,
window: b,
ready: false,
certState: curCertState,
messageCounter: new(uint64),
H: hs,
initiator: initiator,
window: b,
myCert: crt,
}
// always start the counter from 2, as packet 1 and packet 2 are handshake packets.
ci.messageCounter.Add(2)
return ci
return ci, nil
}
func (cs *ConnectionState) MarshalJSON() ([]byte, error) {
return json.Marshal(m{
"certificate": cs.peerCert,
"initiator": cs.initiator,
"message_counter": cs.messageCounter,
"ready": cs.ready,
"message_counter": cs.messageCounter.Load(),
})
}
func (cs *ConnectionState) Curve() cert.Curve {
return cs.myCert.Curve()
}
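Since `NewConnectionState` now returns an error instead of a bare nil on failure, call sites have to check it. A minimal sketch of a caller, assuming a `CertState` named `cs`, a logger `l`, and the IX handshake pattern; the real call site lives in the handshake code, which is not part of this hunk:

```go
crt := cs.getCertificate(cert.Version1) // pick the certificate matching the negotiated version
ci, err := NewConnectionState(l, cs, crt, true, noise.HandshakeIX)
if err != nil {
	l.WithError(err).Error("failed to build connection state")
	return
}
// messageCounter starts at 2 because packets 1 and 2 are handshake packets.
_ = ci
```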

control.go (new file, 319 lines)

@@ -0,0 +1,319 @@
package nebula
import (
"context"
"net/netip"
"os"
"os/signal"
"syscall"
"github.com/sirupsen/logrus"
"github.com/slackhq/nebula/cert"
"github.com/slackhq/nebula/header"
"github.com/slackhq/nebula/overlay"
)
// Every interaction here needs to take extra care to copy memory and not return or use arguments "as is" when touching
// core. This means copying IP objects, slices, de-referencing pointers and taking the actual value, etc
type controlEach func(h *HostInfo)
type controlHostLister interface {
QueryVpnAddr(vpnAddr netip.Addr) *HostInfo
ForEachIndex(each controlEach)
ForEachVpnAddr(each controlEach)
GetPreferredRanges() []netip.Prefix
}
type Control struct {
f *Interface
l *logrus.Logger
ctx context.Context
cancel context.CancelFunc
sshStart func()
statsStart func()
dnsStart func()
lighthouseStart func()
connectionManagerStart func(context.Context)
}
type ControlHostInfo struct {
VpnAddrs []netip.Addr `json:"vpnAddrs"`
LocalIndex uint32 `json:"localIndex"`
RemoteIndex uint32 `json:"remoteIndex"`
RemoteAddrs []netip.AddrPort `json:"remoteAddrs"`
Cert cert.Certificate `json:"cert"`
MessageCounter uint64 `json:"messageCounter"`
CurrentRemote netip.AddrPort `json:"currentRemote"`
CurrentRelaysToMe []netip.Addr `json:"currentRelaysToMe"`
CurrentRelaysThroughMe []netip.Addr `json:"currentRelaysThroughMe"`
}
// Start actually runs nebula; this is a nonblocking call. To block, use Control.ShutdownBlock()
func (c *Control) Start() {
// Activate the interface
c.f.activate()
// Call all the delayed funcs that waited patiently for the interface to be created.
if c.sshStart != nil {
go c.sshStart()
}
if c.statsStart != nil {
go c.statsStart()
}
if c.dnsStart != nil {
go c.dnsStart()
}
if c.connectionManagerStart != nil {
go c.connectionManagerStart(c.ctx)
}
if c.lighthouseStart != nil {
c.lighthouseStart()
}
// Start reading packets.
c.f.run()
}
func (c *Control) Context() context.Context {
return c.ctx
}
// Stop signals nebula to shutdown and close all tunnels, returns after the shutdown is complete
func (c *Control) Stop() {
// Stop the handshakeManager (and other services), to prevent new tunnels from
// being created while we're shutting them all down.
c.cancel()
c.CloseAllTunnels(false)
if err := c.f.Close(); err != nil {
c.l.WithError(err).Error("Close interface failed")
}
c.l.Info("Goodbye")
}
// ShutdownBlock will listen for and block on term and interrupt signals, calling Control.Stop() once signalled
func (c *Control) ShutdownBlock() {
sigChan := make(chan os.Signal, 1)
signal.Notify(sigChan, syscall.SIGTERM)
signal.Notify(sigChan, syscall.SIGINT)
rawSig := <-sigChan
sig := rawSig.String()
c.l.WithField("signal", sig).Info("Caught signal, shutting down")
c.Stop()
}
// RebindUDPServer asks the UDP listener to rebind its listener. Mainly used on mobile clients when interfaces change
func (c *Control) RebindUDPServer() {
_ = c.f.outside.Rebind()
// Trigger a lighthouse update, useful for mobile clients that should have an update interval of 0
c.f.lightHouse.SendUpdate()
// Let the main interface know that we rebound so that underlying tunnels know to trigger punches from their remotes
c.f.rebindCount++
}
// ListHostmapHosts returns details about the actual or pending (handshaking) hostmap by vpn ip
func (c *Control) ListHostmapHosts(pendingMap bool) []ControlHostInfo {
if pendingMap {
return listHostMapHosts(c.f.handshakeManager)
} else {
return listHostMapHosts(c.f.hostMap)
}
}
// ListHostmapIndexes returns details about the actual or pending (handshaking) hostmap by local index id
func (c *Control) ListHostmapIndexes(pendingMap bool) []ControlHostInfo {
if pendingMap {
return listHostMapIndexes(c.f.handshakeManager)
} else {
return listHostMapIndexes(c.f.hostMap)
}
}
// GetCertByVpnIp returns the authenticated certificate of the given vpn IP, or nil if not found
func (c *Control) GetCertByVpnIp(vpnIp netip.Addr) cert.Certificate {
if c.f.myVpnAddrsTable.Contains(vpnIp) {
// Only returning the default certificate since it's impossible
// for any other host but ourselves to have more than 1
return c.f.pki.getCertState().GetDefaultCertificate().Copy()
}
hi := c.f.hostMap.QueryVpnAddr(vpnIp)
if hi == nil {
return nil
}
return hi.GetCert().Certificate.Copy()
}
// CreateTunnel creates a new tunnel to the given vpn ip.
func (c *Control) CreateTunnel(vpnIp netip.Addr) {
c.f.handshakeManager.StartHandshake(vpnIp, nil)
}
// PrintTunnel returns the tunnel details for the given vpn ip, or nil if no tunnel exists.
func (c *Control) PrintTunnel(vpnIp netip.Addr) *ControlHostInfo {
hi := c.f.hostMap.QueryVpnAddr(vpnIp)
if hi == nil {
return nil
}
chi := copyHostInfo(hi, c.f.hostMap.GetPreferredRanges())
return &chi
}
// QueryLighthouse queries the lighthouse.
func (c *Control) QueryLighthouse(vpnIp netip.Addr) *CacheMap {
hi := c.f.lightHouse.Query(vpnIp)
if hi == nil {
return nil
}
return hi.CopyCache()
}
// GetHostInfoByVpnAddr returns a single tunnels hostInfo, or nil if not found
// Caller should take care to Unmap() any 4in6 addresses prior to calling.
func (c *Control) GetHostInfoByVpnAddr(vpnAddr netip.Addr, pending bool) *ControlHostInfo {
var hl controlHostLister
if pending {
hl = c.f.handshakeManager
} else {
hl = c.f.hostMap
}
h := hl.QueryVpnAddr(vpnAddr)
if h == nil {
return nil
}
ch := copyHostInfo(h, c.f.hostMap.GetPreferredRanges())
return &ch
}
// SetRemoteForTunnel forces a tunnel to use a specific remote
// Caller should take care to Unmap() any 4in6 addresses prior to calling.
func (c *Control) SetRemoteForTunnel(vpnIp netip.Addr, addr netip.AddrPort) *ControlHostInfo {
hostInfo := c.f.hostMap.QueryVpnAddr(vpnIp)
if hostInfo == nil {
return nil
}
hostInfo.SetRemote(addr)
ch := copyHostInfo(hostInfo, c.f.hostMap.GetPreferredRanges())
return &ch
}
// CloseTunnel closes a fully established tunnel. If localOnly is false it will notify the remote end as well.
// Caller should take care to Unmap() any 4in6 addresses prior to calling.
func (c *Control) CloseTunnel(vpnIp netip.Addr, localOnly bool) bool {
hostInfo := c.f.hostMap.QueryVpnAddr(vpnIp)
if hostInfo == nil {
return false
}
if !localOnly {
c.f.send(
header.CloseTunnel,
0,
hostInfo.ConnectionState,
hostInfo,
[]byte{},
make([]byte, 12, 12),
make([]byte, mtu),
)
}
c.f.closeTunnel(hostInfo)
return true
}
// CloseAllTunnels is just like CloseTunnel except it goes through and shuts them all down; optionally you can avoid shutting down lighthouse tunnels.
// The int returned is a count of tunnels closed.
func (c *Control) CloseAllTunnels(excludeLighthouses bool) (closed int) {
shutdown := func(h *HostInfo) {
if excludeLighthouses && c.f.lightHouse.IsAnyLighthouseAddr(h.vpnAddrs) {
return
}
c.f.send(header.CloseTunnel, 0, h.ConnectionState, h, []byte{}, make([]byte, 12, 12), make([]byte, mtu))
c.f.closeTunnel(h)
c.l.WithField("vpnAddrs", h.vpnAddrs).WithField("udpAddr", h.remote).
Debug("Sending close tunnel message")
closed++
}
// Learn which hosts are being used as relays, so we can shut them down last.
relayingHosts := map[netip.Addr]*HostInfo{}
// Grab the hostMap lock to access the Relays map
c.f.hostMap.Lock()
for _, relayingHost := range c.f.hostMap.Relays {
relayingHosts[relayingHost.vpnAddrs[0]] = relayingHost
}
c.f.hostMap.Unlock()
hostInfos := []*HostInfo{}
// Grab the hostMap lock to access the Hosts map
c.f.hostMap.Lock()
for _, relayHost := range c.f.hostMap.Indexes {
if _, ok := relayingHosts[relayHost.vpnAddrs[0]]; !ok {
hostInfos = append(hostInfos, relayHost)
}
}
c.f.hostMap.Unlock()
for _, h := range hostInfos {
shutdown(h)
}
for _, h := range relayingHosts {
shutdown(h)
}
return
}
func (c *Control) Device() overlay.Device {
return c.f.inside
}
func copyHostInfo(h *HostInfo, preferredRanges []netip.Prefix) ControlHostInfo {
chi := ControlHostInfo{
VpnAddrs: make([]netip.Addr, len(h.vpnAddrs)),
LocalIndex: h.localIndexId,
RemoteIndex: h.remoteIndexId,
RemoteAddrs: h.remotes.CopyAddrs(preferredRanges),
CurrentRelaysToMe: h.relayState.CopyRelayIps(),
CurrentRelaysThroughMe: h.relayState.CopyRelayForIps(),
CurrentRemote: h.remote,
}
for i, a := range h.vpnAddrs {
chi.VpnAddrs[i] = a
}
if h.ConnectionState != nil {
chi.MessageCounter = h.ConnectionState.messageCounter.Load()
}
if c := h.GetCert(); c != nil {
chi.Cert = c.Certificate.Copy()
}
return chi
}
func listHostMapHosts(hl controlHostLister) []ControlHostInfo {
hosts := make([]ControlHostInfo, 0)
pr := hl.GetPreferredRanges()
hl.ForEachVpnAddr(func(hostinfo *HostInfo) {
hosts = append(hosts, copyHostInfo(hostinfo, pr))
})
return hosts
}
func listHostMapIndexes(hl controlHostLister) []ControlHostInfo {
hosts := make([]ControlHostInfo, 0)
pr := hl.GetPreferredRanges()
hl.ForEachIndex(func(hostinfo *HostInfo) {
hosts = append(hosts, copyHostInfo(hostinfo, pr))
})
return hosts
}
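As the comments on `Start` and `ShutdownBlock` describe, `Start` is non-blocking while `ShutdownBlock` waits for SIGINT/SIGTERM and then calls `Stop`. A minimal embedding sketch; how the `*Control` itself is constructed is deliberately elided, since the package entry point is not part of this diff:

```go
// ctrl is a *nebula.Control obtained from the package's entry point (not shown here).
ctrl.Start()         // non-blocking: activates the interface and spawns the services
ctrl.ShutdownBlock() // blocks until SIGINT/SIGTERM, then calls Stop()
```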

control_test.go (new file, 121 lines)

@@ -0,0 +1,121 @@
package nebula
import (
"net"
"net/netip"
"reflect"
"testing"
"github.com/sirupsen/logrus"
"github.com/slackhq/nebula/cert"
"github.com/slackhq/nebula/test"
"github.com/stretchr/testify/assert"
)
func TestControl_GetHostInfoByVpnIp(t *testing.T) {
//TODO: CERT-V2 with multiple certificate versions we have a problem with this test
// Some certs versions have different characteristics and each version implements their own Copy() func
// which means this is not a good place to test for exposing memory
l := test.NewLogger()
// Special care must be taken to re-use all objects provided to the hostmap and certificate in the expectedInfo object
// To properly ensure we are not exposing core memory to the caller
hm := newHostMap(l)
hm.preferredRanges.Store(&[]netip.Prefix{})
remote1 := netip.MustParseAddrPort("0.0.0.100:4444")
remote2 := netip.MustParseAddrPort("[1:2:3:4:5:6:7:8]:4444")
ipNet := net.IPNet{
IP: remote1.Addr().AsSlice(),
Mask: net.IPMask{255, 255, 255, 0},
}
ipNet2 := net.IPNet{
IP: remote2.Addr().AsSlice(),
Mask: net.IPMask{255, 255, 255, 0},
}
remotes := NewRemoteList([]netip.Addr{netip.IPv4Unspecified()}, nil)
remotes.unlockedPrependV4(netip.IPv4Unspecified(), netAddrToProtoV4AddrPort(remote1.Addr(), remote1.Port()))
remotes.unlockedPrependV6(netip.IPv4Unspecified(), netAddrToProtoV6AddrPort(remote2.Addr(), remote2.Port()))
vpnIp, ok := netip.AddrFromSlice(ipNet.IP)
assert.True(t, ok)
crt := &dummyCert{}
hm.unlockedAddHostInfo(&HostInfo{
remote: remote1,
remotes: remotes,
ConnectionState: &ConnectionState{
peerCert: &cert.CachedCertificate{Certificate: crt},
},
remoteIndexId: 200,
localIndexId: 201,
vpnAddrs: []netip.Addr{vpnIp},
relayState: RelayState{
relays: nil,
relayForByAddr: map[netip.Addr]*Relay{},
relayForByIdx: map[uint32]*Relay{},
},
}, &Interface{})
vpnIp2, ok := netip.AddrFromSlice(ipNet2.IP)
assert.True(t, ok)
hm.unlockedAddHostInfo(&HostInfo{
remote: remote1,
remotes: remotes,
ConnectionState: &ConnectionState{
peerCert: nil,
},
remoteIndexId: 200,
localIndexId: 201,
vpnAddrs: []netip.Addr{vpnIp2},
relayState: RelayState{
relays: nil,
relayForByAddr: map[netip.Addr]*Relay{},
relayForByIdx: map[uint32]*Relay{},
},
}, &Interface{})
c := Control{
f: &Interface{
hostMap: hm,
},
l: logrus.New(),
}
thi := c.GetHostInfoByVpnAddr(vpnIp, false)
expectedInfo := ControlHostInfo{
VpnAddrs: []netip.Addr{vpnIp},
LocalIndex: 201,
RemoteIndex: 200,
RemoteAddrs: []netip.AddrPort{remote2, remote1},
Cert: crt.Copy(),
MessageCounter: 0,
CurrentRemote: remote1,
CurrentRelaysToMe: []netip.Addr{},
CurrentRelaysThroughMe: []netip.Addr{},
}
// Make sure we don't have any unexpected fields
assertFields(t, []string{"VpnAddrs", "LocalIndex", "RemoteIndex", "RemoteAddrs", "Cert", "MessageCounter", "CurrentRemote", "CurrentRelaysToMe", "CurrentRelaysThroughMe"}, thi)
assert.Equal(t, &expectedInfo, thi)
test.AssertDeepCopyEqual(t, &expectedInfo, thi)
// Make sure we don't panic if the host info doesn't have a cert yet
assert.NotPanics(t, func() {
thi = c.GetHostInfoByVpnAddr(vpnIp2, false)
})
}
func assertFields(t *testing.T, expected []string, actualStruct any) {
val := reflect.ValueOf(actualStruct).Elem()
fields := make([]string, val.NumField())
for i := 0; i < val.NumField(); i++ {
fields[i] = val.Type().Field(i).Name
}
assert.Equal(t, expected, fields)
}

control_tester.go (new file, 183 lines)

@@ -0,0 +1,183 @@
//go:build e2e_testing
// +build e2e_testing
package nebula
import (
"net/netip"
"github.com/google/gopacket"
"github.com/google/gopacket/layers"
"github.com/slackhq/nebula/header"
"github.com/slackhq/nebula/overlay"
"github.com/slackhq/nebula/udp"
)
// WaitForType will pipe all messages from this control device into the pipeTo control device
// returning after a message matching the criteria has been piped
func (c *Control) WaitForType(msgType header.MessageType, subType header.MessageSubType, pipeTo *Control) {
h := &header.H{}
for {
p := c.f.outside.(*udp.TesterConn).Get(true)
if err := h.Parse(p.Data); err != nil {
panic(err)
}
pipeTo.InjectUDPPacket(p)
if h.Type == msgType && h.Subtype == subType {
return
}
}
}
// WaitForTypeByIndex is similar to WaitForType except it adds an index check
// Useful if you have many nodes communicating and want to wait to find a specific nodes packet
func (c *Control) WaitForTypeByIndex(toIndex uint32, msgType header.MessageType, subType header.MessageSubType, pipeTo *Control) {
h := &header.H{}
for {
p := c.f.outside.(*udp.TesterConn).Get(true)
if err := h.Parse(p.Data); err != nil {
panic(err)
}
pipeTo.InjectUDPPacket(p)
if h.RemoteIndex == toIndex && h.Type == msgType && h.Subtype == subType {
return
}
}
}
// InjectLightHouseAddr will push toAddr into the local lighthouse cache for the vpnIp
// This is necessary if you did not configure static hosts or are not running a lighthouse
func (c *Control) InjectLightHouseAddr(vpnIp netip.Addr, toAddr netip.AddrPort) {
c.f.lightHouse.Lock()
remoteList := c.f.lightHouse.unlockedGetRemoteList([]netip.Addr{vpnIp})
remoteList.Lock()
defer remoteList.Unlock()
c.f.lightHouse.Unlock()
if toAddr.Addr().Is4() {
remoteList.unlockedPrependV4(vpnIp, netAddrToProtoV4AddrPort(toAddr.Addr(), toAddr.Port()))
} else {
remoteList.unlockedPrependV6(vpnIp, netAddrToProtoV6AddrPort(toAddr.Addr(), toAddr.Port()))
}
}
// InjectRelays will push relayVpnIps into the local lighthouse cache for the vpnIp
// This is necessary to inform an initiator of possible relays for communicating with a responder
func (c *Control) InjectRelays(vpnIp netip.Addr, relayVpnIps []netip.Addr) {
c.f.lightHouse.Lock()
remoteList := c.f.lightHouse.unlockedGetRemoteList([]netip.Addr{vpnIp})
remoteList.Lock()
defer remoteList.Unlock()
c.f.lightHouse.Unlock()
remoteList.unlockedSetRelay(vpnIp, relayVpnIps)
}
// GetFromTun will pull a packet off the tun side of nebula
func (c *Control) GetFromTun(block bool) []byte {
return c.f.inside.(*overlay.TestTun).Get(block)
}
// GetFromUDP will pull a udp packet off the udp side of nebula
func (c *Control) GetFromUDP(block bool) *udp.Packet {
return c.f.outside.(*udp.TesterConn).Get(block)
}
func (c *Control) GetUDPTxChan() <-chan *udp.Packet {
return c.f.outside.(*udp.TesterConn).TxPackets
}
func (c *Control) GetTunTxChan() <-chan []byte {
return c.f.inside.(*overlay.TestTun).TxPackets
}
// InjectUDPPacket will inject a packet into the udp side of nebula
func (c *Control) InjectUDPPacket(p *udp.Packet) {
c.f.outside.(*udp.TesterConn).Send(p)
}
// InjectTunUDPPacket puts a udp packet on the tun interface. Using UDP here because it's a simpler protocol
func (c *Control) InjectTunUDPPacket(toAddr netip.Addr, toPort uint16, fromAddr netip.Addr, fromPort uint16, data []byte) {
serialize := make([]gopacket.SerializableLayer, 0)
var netLayer gopacket.NetworkLayer
if toAddr.Is6() {
if !fromAddr.Is6() {
panic("Cant send ipv6 to ipv4")
}
ip := &layers.IPv6{
Version: 6,
NextHeader: layers.IPProtocolUDP,
SrcIP: fromAddr.Unmap().AsSlice(),
DstIP: toAddr.Unmap().AsSlice(),
}
serialize = append(serialize, ip)
netLayer = ip
} else {
if !fromAddr.Is4() {
panic("Cant send ipv4 to ipv6")
}
ip := &layers.IPv4{
Version: 4,
TTL: 64,
Protocol: layers.IPProtocolUDP,
SrcIP: fromAddr.Unmap().AsSlice(),
DstIP: toAddr.Unmap().AsSlice(),
}
serialize = append(serialize, ip)
netLayer = ip
}
udp := layers.UDP{
SrcPort: layers.UDPPort(fromPort),
DstPort: layers.UDPPort(toPort),
}
err := udp.SetNetworkLayerForChecksum(netLayer)
if err != nil {
panic(err)
}
buffer := gopacket.NewSerializeBuffer()
opt := gopacket.SerializeOptions{
ComputeChecksums: true,
FixLengths: true,
}
serialize = append(serialize, &udp, gopacket.Payload(data))
err = gopacket.SerializeLayers(buffer, opt, serialize...)
if err != nil {
panic(err)
}
c.f.inside.(*overlay.TestTun).Send(buffer.Bytes())
}
func (c *Control) GetVpnAddrs() []netip.Addr {
return c.f.myVpnAddrs
}
func (c *Control) GetUDPAddr() netip.AddrPort {
return c.f.outside.(*udp.TesterConn).Addr
}
func (c *Control) KillPendingTunnel(vpnIp netip.Addr) bool {
hostinfo := c.f.handshakeManager.QueryVpnAddr(vpnIp)
if hostinfo == nil {
return false
}
c.f.handshakeManager.DeleteHostInfo(hostinfo)
return true
}
func (c *Control) GetHostmap() *HostMap {
return c.f.hostMap
}
func (c *Control) GetCertState() *CertState {
return c.f.pki.getCertState()
}
func (c *Control) ReHandshake(vpnIp netip.Addr) {
c.f.handshakeManager.StartHandshake(vpnIp, nil)
}
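These helpers only compile with the `e2e_testing` build tag and exist to let the end-to-end tests drive in-memory nodes against each other. A rough sketch of the intended flow, assuming two `*Control` values `a` and `b` created by the e2e harness:

```go
// Tell a where b can be reached, as a lighthouse normally would.
a.InjectLightHouseAddr(b.GetVpnAddrs()[0], b.GetUDPAddr())

// Put a UDP packet on a's tun side, addressed to b's vpn address...
a.InjectTunUDPPacket(b.GetVpnAddrs()[0], 80, a.GetVpnAddrs()[0], 9090, []byte("hello"))

// ...then shuttle whatever appears on a's UDP side over to b.
p := a.GetFromUDP(true)
b.InjectUDPPacket(p)
```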


@@ -1,15 +0,0 @@
[Unit]
Description=nebula
Wants=basic.target
After=basic.target network.target
[Service]
SyslogIdentifier=nebula
StandardOutput=syslog
StandardError=syslog
ExecReload=/bin/kill -HUP $MAINPID
ExecStart=/usr/bin/nebula -config /etc/nebula/config.yml
Restart=always
[Install]
WantedBy=multi-user.target

dist/windows/wintun/LICENSE.txt (vendored, new file, 84 lines)

@@ -0,0 +1,84 @@
Prebuilt Binaries License
-------------------------
1. DEFINITIONS. "Software" means the precise contents of the "wintun.dll"
files that are included in the .zip file that contains this document as
downloaded from wintun.net/builds.
2. LICENSE GRANT. WireGuard LLC grants to you a non-exclusive and
non-transferable right to use Software for lawful purposes under certain
obligations and limited rights as set forth in this agreement.
3. RESTRICTIONS. Software is owned and copyrighted by WireGuard LLC. It is
licensed, not sold. Title to Software and all associated intellectual
property rights are retained by WireGuard. You must not:
a. reverse engineer, decompile, disassemble, extract from, or otherwise
modify the Software;
b. modify or create derivative work based upon Software in whole or in
parts, except insofar as only the API interfaces of the "wintun.h" file
distributed alongside the Software (the "Permitted API") are used;
c. remove any proprietary notices, labels, or copyrights from the Software;
d. resell, redistribute, lease, rent, transfer, sublicense, or otherwise
transfer rights of the Software without the prior written consent of
WireGuard LLC, except insofar as the Software is distributed alongside
other software that uses the Software only via the Permitted API;
e. use the name of WireGuard LLC, the WireGuard project, the Wintun
project, or the names of its contributors to endorse or promote products
derived from the Software without specific prior written consent.
4. LIMITED WARRANTY. THE SOFTWARE IS PROVIDED "AS IS" AND WITHOUT WARRANTY OF
ANY KIND. WIREGUARD LLC HEREBY EXCLUDES AND DISCLAIMS ALL IMPLIED OR
STATUTORY WARRANTIES, INCLUDING ANY WARRANTIES OF MERCHANTABILITY, FITNESS
FOR A PARTICULAR PURPOSE, QUALITY, NON-INFRINGEMENT, TITLE, RESULTS,
EFFORTS, OR QUIET ENJOYMENT. THERE IS NO WARRANTY THAT THE PRODUCT WILL BE
ERROR-FREE OR WILL FUNCTION WITHOUT INTERRUPTION. YOU ASSUME THE ENTIRE
RISK FOR THE RESULTS OBTAINED USING THE PRODUCT. TO THE EXTENT THAT
WIREGUARD LLC MAY NOT DISCLAIM ANY WARRANTY AS A MATTER OF APPLICABLE LAW,
THE SCOPE AND DURATION OF SUCH WARRANTY WILL BE THE MINIMUM PERMITTED UNDER
SUCH LAW. ALL EXPRESS OR IMPLIED CONDITIONS, REPRESENTATIONS AND
WARRANTIES, INCLUDING ANY IMPLIED WARRANTY OF MERCHANTABILITY, FITNESS FOR
A PARTICULAR PURPOSE OR NON-INFRINGEMENT ARE DISCLAIMED, EXCEPT TO THE
EXTENT THAT THESE DISCLAIMERS ARE HELD TO BE LEGALLY INVALID.
5. LIMITATION OF LIABILITY. To the extent not prohibited by law, in no event
WireGuard LLC or any third-party-developer will be liable for any lost
revenue, profit or data or for special, indirect, consequential, incidental
or punitive damages, however caused regardless of the theory of liability,
arising out of or related to the use of or inability to use Software, even
if WireGuard LLC has been advised of the possibility of such damages.
Solely you are responsible for determining the appropriateness of using
Software and accept full responsibility for all risks associated with its
exercise of rights under this agreement, including but not limited to the
risks and costs of program errors, compliance with applicable laws, damage
to or loss of data, programs or equipment, and unavailability or
interruption of operations. The foregoing limitations will apply even if
the above stated warranty fails of its essential purpose. You acknowledge,
that it is in the nature of software that software is complex and not
completely free of errors. In no event shall WireGuard LLC or any
third-party-developer be liable to you under any theory for any damages
suffered by you or any user of Software or for any special, incidental,
indirect, consequential or similar damages (including without limitation
damages for loss of business profits, business interruption, loss of
business information or any other pecuniary loss) arising out of the use or
inability to use Software, even if WireGuard LLC has been advised of the
possibility of such damages and regardless of the legal or quitable theory
(contract, tort, or otherwise) upon which the claim is based.
6. TERMINATION. This agreement is effective until terminated. You may
terminate this agreement at any time. This agreement will terminate
immediately without notice from WireGuard LLC if you fail to comply with
the terms and conditions of this agreement. Upon termination, you must
delete Software and all copies of Software and cease all forms of
distribution of Software.
7. SEVERABILITY. If any provision of this agreement is held to be
unenforceable, this agreement will remain in effect with the provision
omitted, unless omission would frustrate the intent of the parties, in
which case this agreement will immediately terminate.
8. RESERVATION OF RIGHTS. All rights not expressly granted in this agreement
are reserved by WireGuard LLC. For example, WireGuard LLC reserves the
right at any time to cease development of Software, to alter distribution
details, features, specifications, capabilities, functions, licensing
terms, release dates, APIs, ABIs, general availability, or other
characteristics of the Software.

339
dist/windows/wintun/README.md vendored Normal file

@@ -0,0 +1,339 @@
# [Wintun Network Adapter](https://www.wintun.net/)
### TUN Device Driver for Windows
This is a layer 3 TUN driver for Windows 7, 8, 8.1, and 10. Originally created for [WireGuard](https://www.wireguard.com/), it is intended to be useful to a wide variety of projects that require layer 3 tunneling devices with implementations primarily in userspace.
## Installation
Wintun is deployed as a platform-specific `wintun.dll` file. Install the `wintun.dll` file side-by-side with your application. Download the dll from [wintun.net](https://www.wintun.net/), along with the header file for your application, as described below.
## Usage
Include the [`wintun.h` file](https://git.zx2c4.com/wintun/tree/api/wintun.h) in your project by copying it there, then load `wintun.dll` at runtime with [`LoadLibraryEx()`](https://docs.microsoft.com/en-us/windows/win32/api/libloaderapi/nf-libloaderapi-loadlibraryexa) and resolve each function with [`GetProcAddress()`](https://docs.microsoft.com/en-us/windows/win32/api/libloaderapi/nf-libloaderapi-getprocaddress), using the typedefs provided in the header file. The [`InitializeWintun` function in the example.c code](https://git.zx2c4.com/wintun/tree/example/example.c) wraps all of this in a single function that you can simply copy and paste.
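For reference, here is a minimal sketch of that dynamic-loading step, assuming the `WINTUN_CREATE_ADAPTER_FUNC` typedef provided by `wintun.h` and resolving only a single export (the real example.c resolves every function the same way):
```C
#include <windows.h>
#include "wintun.h"

/* Pointer to the WintunCreateAdapter export, using the function typedef from wintun.h. */
static WINTUN_CREATE_ADAPTER_FUNC *WintunCreateAdapter;

/* Load wintun.dll from the application directory (or System32) and resolve one export. */
static HMODULE InitializeWintun(void)
{
    HMODULE Wintun = LoadLibraryExW(L"wintun.dll", NULL,
        LOAD_LIBRARY_SEARCH_APPLICATION_DIR | LOAD_LIBRARY_SEARCH_SYSTEM32);
    if (!Wintun)
        return NULL;
    WintunCreateAdapter =
        (WINTUN_CREATE_ADAPTER_FUNC *)GetProcAddress(Wintun, "WintunCreateAdapter");
    if (!WintunCreateAdapter)
    {
        FreeLibrary(Wintun);
        return NULL;
    }
    return Wintun;
}
```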
With the library set up, Wintun can then be used by first creating an adapter, configuring it, and then setting its status to "up". Adapters have names (e.g. "OfficeNet") and types (e.g. "Wintun").
```C
WINTUN_ADAPTER_HANDLE Adapter1 = WintunCreateAdapter(L"OfficeNet", L"Wintun", &SomeFixedGUID1);
WINTUN_ADAPTER_HANDLE Adapter2 = WintunCreateAdapter(L"HomeNet", L"Wintun", &SomeFixedGUID2);
WINTUN_ADAPTER_HANDLE Adapter3 = WintunCreateAdapter(L"Data Center", L"Wintun", &SomeFixedGUID3);
```
After creating an adapter, we can use it by starting a session:
```C
WINTUN_SESSION_HANDLE Session = WintunStartSession(Adapter2, 0x400000);
```
Then, the `WintunAllocateSendPacket` and `WintunSendPacket` functions can be used for sending packets ([used by `SendPackets` in the example.c code](https://git.zx2c4.com/wintun/tree/example/example.c)):
```C
BYTE *OutgoingPacket = WintunAllocateSendPacket(Session, PacketDataSize);
if (OutgoingPacket)
{
memcpy(OutgoingPacket, PacketData, PacketDataSize);
WintunSendPacket(Session, OutgoingPacket);
}
else if (GetLastError() != ERROR_BUFFER_OVERFLOW) // Silently drop the packet if the ring is full; log any other failure
Log(L"Packet write failed");
```
And the `WintunReceivePacket` and `WintunReleaseReceivePacket` functions can be used for receiving packets ([used by `ReceivePackets` in the example.c code](https://git.zx2c4.com/wintun/tree/example/example.c)):
```C
for (;;)
{
DWORD IncomingPacketSize;
BYTE *IncomingPacket = WintunReceivePacket(Session, &IncomingPacketSize);
if (IncomingPacket)
{
DoSomethingWithPacket(IncomingPacket, IncomingPacketSize);
WintunReleaseReceivePacket(Session, IncomingPacket);
}
else if (GetLastError() == ERROR_NO_MORE_ITEMS)
WaitForSingleObject(WintunGetReadWaitEvent(Session), INFINITE);
else
{
Log(L"Packet read failed");
break;
}
}
```
Some high performance use cases may want to spin on `WintunReceivePacket` for a number of cycles before falling back to waiting on the read-wait event.
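A rough sketch of that spin-then-wait pattern, built from the functions documented in the reference below; the spin budget is an arbitrary illustrative value, not a tuned recommendation:
```C
enum { SPIN_LIMIT = 1000 }; /* illustrative spin budget */

for (;;)
{
    DWORD IncomingPacketSize;
    BYTE *IncomingPacket = NULL;
    for (int Spin = 0; Spin < SPIN_LIMIT; ++Spin)
    {
        IncomingPacket = WintunReceivePacket(Session, &IncomingPacketSize);
        /* Keep spinning only while the ring is merely empty. */
        if (IncomingPacket || GetLastError() != ERROR_NO_MORE_ITEMS)
            break;
    }
    if (IncomingPacket)
    {
        DoSomethingWithPacket(IncomingPacket, IncomingPacketSize);
        WintunReleaseReceivePacket(Session, IncomingPacket);
    }
    else if (GetLastError() == ERROR_NO_MORE_ITEMS)
        WaitForSingleObject(WintunGetReadWaitEvent(Session), INFINITE);
    else
    {
        Log(L"Packet read failed");
        break;
    }
}
```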
You are **highly encouraged** to read the [**example.c short example**](https://git.zx2c4.com/wintun/tree/example/example.c) to see how to put together a simple userspace network tunnel.
The various functions and definitions are [documented in the reference below](#Reference).
## Reference
### Macro Definitions
#### WINTUN\_MAX\_POOL
`#define WINTUN_MAX_POOL 256`
Maximum pool name length, including the zero terminator.
#### WINTUN\_MIN\_RING\_CAPACITY
`#define WINTUN_MIN_RING_CAPACITY 0x20000 /* 128kiB */`
Minimum ring capacity.
#### WINTUN\_MAX\_RING\_CAPACITY
`#define WINTUN_MAX_RING_CAPACITY 0x4000000 /* 64MiB */`
Maximum ring capacity.
#### WINTUN\_MAX\_IP\_PACKET\_SIZE
`#define WINTUN_MAX_IP_PACKET_SIZE 0xFFFF`
Maximum IP packet size.
### Typedefs
#### WINTUN\_ADAPTER\_HANDLE
`typedef void* WINTUN_ADAPTER_HANDLE`
A handle representing a Wintun adapter.
#### WINTUN\_ENUM\_CALLBACK
`typedef BOOL(* WINTUN_ENUM_CALLBACK) (WINTUN_ADAPTER_HANDLE Adapter, LPARAM Param)`
Called by WintunEnumAdapters for each adapter in the pool.
**Parameters**
- *Adapter*: Adapter handle, which will be freed when this function returns.
- *Param*: An application-defined value passed to the WintunEnumAdapters.
**Returns**
Non-zero to continue iterating adapters; zero to stop.
#### WINTUN\_LOGGER\_CALLBACK
`typedef void(* WINTUN_LOGGER_CALLBACK) (WINTUN_LOGGER_LEVEL Level, DWORD64 Timestamp, const WCHAR *Message)`
Called by the internal logger to report diagnostic messages.
**Parameters**
- *Level*: Message level.
- *Timestamp*: Message timestamp in 100ns intervals since 1601-01-01 UTC.
- *Message*: Message text.
#### WINTUN\_SESSION\_HANDLE
`typedef void* WINTUN_SESSION_HANDLE`
A handle representing a Wintun session.
### Enumeration Types
#### WINTUN\_LOGGER\_LEVEL
`enum WINTUN_LOGGER_LEVEL`
Determines the level of logging, passed to WINTUN\_LOGGER\_CALLBACK.
- *WINTUN\_LOG\_INFO*: Informational
- *WINTUN\_LOG\_WARN*: Warning
- *WINTUN\_LOG\_ERR*: Error
### Functions
#### WintunCreateAdapter()
`WINTUN_ADAPTER_HANDLE WintunCreateAdapter (const WCHAR * Name, const WCHAR * TunnelType, const GUID * RequestedGUID)`
Creates a new Wintun adapter.
**Parameters**
- *Name*: The requested name of the adapter. Zero-terminated string of up to MAX\_ADAPTER\_NAME-1 characters.
- *TunnelType*: Name of the adapter tunnel type. Zero-terminated string of up to MAX\_ADAPTER\_NAME-1 characters.
- *RequestedGUID*: The GUID of the created network adapter, which then influences NLA generation deterministically. If it is set to NULL, the GUID is chosen by the system at random, and hence a new NLA entry is created for each new adapter. It is called "requested" GUID because the API it uses is completely undocumented, and so there could be minor interesting complications with its usage.
**Returns**
If the function succeeds, the return value is the adapter handle. Must be released with WintunCloseAdapter. If the function fails, the return value is NULL. To get extended error information, call GetLastError.
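For illustration, a minimal sketch of adapter creation with error checking; the name, tunnel type, and GUID value below are placeholders, not values required by the API:
```C
/* Fixing the GUID keeps the adapter's network profile stable across runs (placeholder value). */
GUID ExampleGuid = { 0xdeadbabe, 0xcafe, 0xbeef,
                     { 0x01, 0x23, 0x45, 0x67, 0x89, 0xab, 0xcd, 0xef } };
WINTUN_ADAPTER_HANDLE Adapter = WintunCreateAdapter(L"Demo", L"Example", &ExampleGuid);
if (!Adapter)
{
    Log(L"Failed to create adapter"); /* inspect GetLastError() for the cause */
    /* handle the failure... */
}
```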
#### WintunOpenAdapter()
`WINTUN_ADAPTER_HANDLE WintunOpenAdapter (const WCHAR * Name)`
Opens an existing Wintun adapter.
**Parameters**
- *Name*: The requested name of the adapter. Zero-terminated string of up to MAX\_ADAPTER\_NAME-1 characters.
**Returns**
If the function succeeds, the return value is the adapter handle. Must be released with WintunCloseAdapter. If the function fails, the return value is NULL. To get extended error information, call GetLastError.
#### WintunCloseAdapter()
`void WintunCloseAdapter (WINTUN_ADAPTER_HANDLE Adapter)`
Releases Wintun adapter resources and, if the adapter was created with WintunCreateAdapter, removes the adapter.
**Parameters**
- *Adapter*: Adapter handle obtained with WintunCreateAdapter or WintunOpenAdapter.
#### WintunDeleteDriver()
`BOOL WintunDeleteDriver ()`
Deletes the Wintun driver if there are no more adapters in use.
**Returns**
If the function succeeds, the return value is nonzero. If the function fails, the return value is zero. To get extended error information, call GetLastError.
#### WintunGetAdapterLuid()
`void WintunGetAdapterLuid (WINTUN_ADAPTER_HANDLE Adapter, NET_LUID * Luid)`
Returns the LUID of the adapter.
**Parameters**
- *Adapter*: Adapter handle obtained with WintunOpenAdapter or WintunCreateAdapter
- *Luid*: Pointer to LUID to receive adapter LUID.
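A hedged sketch of the most common use of the LUID, assigning an address to the adapter with the standard iphlpapi helpers (as the example.c code does); the address 10.6.7.7/24 is purely illustrative:
```C
#include <winsock2.h>
#include <ws2ipdef.h>
#include <iphlpapi.h>

MIB_UNICASTIPADDRESS_ROW AddressRow;
InitializeUnicastIpAddressEntry(&AddressRow);
WintunGetAdapterLuid(Adapter, &AddressRow.InterfaceLuid);
AddressRow.Address.Ipv4.sin_family = AF_INET;
AddressRow.Address.Ipv4.sin_addr.S_un.S_addr = htonl((10 << 24) | (6 << 16) | (7 << 8) | 7); /* 10.6.7.7 */
AddressRow.OnLinkPrefixLength = 24; /* /24 netmask */
if (CreateUnicastIpAddressEntry(&AddressRow) != ERROR_SUCCESS)
    Log(L"Failed to assign IP address");
```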
#### WintunGetRunningDriverVersion()
`DWORD WintunGetRunningDriverVersion (void)`
Determines the version of the Wintun driver currently loaded.
**Returns**
If the function succeeds, the return value is the version number. If the function fails, the return value is zero. To get extended error information, call GetLastError. Possible errors include the following: ERROR\_FILE\_NOT\_FOUND (Wintun not loaded).
#### WintunSetLogger()
`void WintunSetLogger (WINTUN_LOGGER_CALLBACK NewLogger)`
Sets logger callback function.
**Parameters**
- *NewLogger*: Pointer to callback function to use as a new global logger. NewLogger may be called from various threads concurrently. Should the logging require serialization, you must handle serialization in NewLogger. Set to NULL to disable.
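A small sketch of a logger matching the WINTUN\_LOGGER\_CALLBACK signature above; the prefix strings and use of `stderr` are illustrative choices, not part of the API:
```C
#include <stdio.h>

static void CALLBACK ConsoleLogger(WINTUN_LOGGER_LEVEL Level, DWORD64 Timestamp, const WCHAR *Message)
{
    const WCHAR *Prefix = Level == WINTUN_LOG_ERR  ? L"ERR " :
                          Level == WINTUN_LOG_WARN ? L"WARN" : L"INFO";
    /* Timestamp is in 100ns intervals since 1601-01-01 UTC; printed raw here. */
    fwprintf(stderr, L"[%ls] %llu %ls\n", Prefix, (unsigned long long)Timestamp, Message);
}

/* Somewhere during initialization (pass NULL later to disable logging again): */
WintunSetLogger(ConsoleLogger);
```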
#### WintunStartSession()
`WINTUN_SESSION_HANDLE WintunStartSession (WINTUN_ADAPTER_HANDLE Adapter, DWORD Capacity)`
Starts Wintun session.
**Parameters**
- *Adapter*: Adapter handle obtained with WintunOpenAdapter or WintunCreateAdapter
- *Capacity*: Ring capacity. Must be between WINTUN\_MIN\_RING\_CAPACITY and WINTUN\_MAX\_RING\_CAPACITY (inclusive) and must be a power of two.
**Returns**
Wintun session handle. Must be released with WintunEndSession. If the function fails, the return value is NULL. To get extended error information, call GetLastError.
#### WintunEndSession()
`void WintunEndSession (WINTUN_SESSION_HANDLE Session)`
Ends Wintun session.
**Parameters**
- *Session*: Wintun session handle obtained with WintunStartSession
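An illustrative sketch of the corresponding teardown sequence, assuming the `Session` and `Adapter` handles from the earlier snippets and the `Wintun` module handle from the loading sketch above:
```C
/* Stop I/O first, then release the adapter, then unload the library. */
WintunEndSession(Session);
WintunCloseAdapter(Adapter);
FreeLibrary(Wintun);
```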
#### WintunGetReadWaitEvent()
`HANDLE WintunGetReadWaitEvent (WINTUN_SESSION_HANDLE Session)`
Gets Wintun session's read-wait event handle.
**Parameters**
- *Session*: Wintun session handle obtained with WintunStartSession
**Returns**
Event handle to wait on for available data when reading. Should WintunReceivePacket return ERROR\_NO\_MORE\_ITEMS (after spinning on it for a while under heavy load), wait for this event to become signaled before retrying WintunReceivePacket. Do not call CloseHandle on this event; it is managed by the session.
#### WintunReceivePacket()
`BYTE* WintunReceivePacket (WINTUN_SESSION_HANDLE Session, DWORD * PacketSize)`
Retrieves one packet. After the packet content is consumed, call WintunReleaseReceivePacket with the Packet returned from this function to release the internal buffer. This function is thread-safe.
**Parameters**
- *Session*: Wintun session handle obtained with WintunStartSession
- *PacketSize*: Pointer to receive packet size.
**Returns**
Pointer to layer 3 IPv4 or IPv6 packet. Client may modify its content at will. If the function fails, the return value is NULL. To get extended error information, call GetLastError. Possible errors include the following: ERROR\_HANDLE\_EOF (Wintun adapter is terminating); ERROR\_NO\_MORE\_ITEMS (Wintun buffer is exhausted); ERROR\_INVALID\_DATA (Wintun buffer is corrupt).
#### WintunReleaseReceivePacket()
`void WintunReleaseReceivePacket (WINTUN_SESSION_HANDLE Session, const BYTE * Packet)`
Releases internal buffer after the received packet has been processed by the client. This function is thread-safe.
**Parameters**
- *Session*: Wintun session handle obtained with WintunStartSession
- *Packet*: Packet obtained with WintunReceivePacket
#### WintunAllocateSendPacket()
`BYTE* WintunAllocateSendPacket (WINTUN_SESSION_HANDLE Session, DWORD PacketSize)`
Allocates memory for a packet to send. After the memory is filled with packet data, call WintunSendPacket to send the packet and release the internal buffer. WintunAllocateSendPacket is thread-safe, and the order of WintunAllocateSendPacket calls defines the packet sending order.
**Parameters**
- *Session*: Wintun session handle obtained with WintunStartSession
- *PacketSize*: Exact packet size. Must be less than or equal to WINTUN\_MAX\_IP\_PACKET\_SIZE.
**Returns**
Pointer to memory in which to prepare the layer 3 IPv4 or IPv6 packet for sending. If the function fails, the return value is NULL. To get extended error information, call GetLastError. Possible errors include the following: ERROR\_HANDLE\_EOF (Wintun adapter is terminating); ERROR\_BUFFER\_OVERFLOW (Wintun buffer is full).
#### WintunSendPacket()
`void WintunSendPacket (WINTUN_SESSION_HANDLE Session, const BYTE * Packet)`
Sends the packet and releases the internal buffer. WintunSendPacket is thread-safe, but the order of WintunAllocateSendPacket calls defines the packet sending order, so a packet is not guaranteed to have been sent by the time WintunSendPacket returns.
**Parameters**
- *Session*: Wintun session handle obtained with WintunStartSession
- *Packet*: Packet obtained with WintunAllocateSendPacket
## Building
**Do not distribute drivers or files named "Wintun", as they will most certainly clash with official deployments. Instead distribute [`wintun.dll` as downloaded from wintun.net](https://www.wintun.net).**
General requirements:
- [Visual Studio 2019](https://visualstudio.microsoft.com/downloads/) with Windows SDK
- [Windows Driver Kit](https://docs.microsoft.com/en-us/windows-hardware/drivers/download-the-wdk)
`wintun.sln` may be opened in Visual Studio for development and building. Be sure to run `bcdedit /set testsigning on` and reboot beforehand to enable unsigned driver loading. The default run sequence (F5) in Visual Studio will build the example project and its dependencies.
## License
The entire contents of [the repository](https://git.zx2c4.com/wintun/), including all documentation and example code, is "Copyright © 2018-2021 WireGuard LLC. All Rights Reserved." Source code is licensed under the [GPLv2](COPYING). Prebuilt binaries from [wintun.net](https://www.wintun.net/) are released under a more permissive license suitable for more forms of software contained inside of the .zip files distributed there.

BIN
dist/windows/wintun/bin/amd64/wintun.dll vendored Normal file

Binary file not shown.

Some files were not shown because too many files have changed in this diff.