Compare commits


270 Commits

Author SHA1 Message Date
Wade Simmons
9c6fb08a6d make boringcrypto: add checklinkname flag for go1.23
Starting with go1.23, we need to set -checklinkname=0 when building for
boringcrypto because we need to use go:linkname to access `newGCMTLS`.

Note that this does break builds when using a go version less than
go1.23.0. We can probably assume that someone using this Makefile and
manually building is using the latest release of Go though.

See:

- https://go.dev/doc/go1.23#linker
2024-08-13 13:52:34 -04:00
Jack Doan
248cf194cd fix integer wraparound in the calculation of handshake timeouts on 32-bit targets (#1185)
Fixes: #1169
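
For illustration, a minimal sketch of the hazard (illustrative names, not Nebula's actual timeout code): doing the multiplication in `int` truncates on 32-bit targets, while keeping the arithmetic in `time.Duration` (an int64) does not.

```go
package main

import (
	"fmt"
	"time"
)

// timeoutWraps does the multiplication in int, which is 32 bits wide on
// 32-bit targets, so 10 * time.Second (1e10 ns) wraps around there.
func timeoutWraps(tries int, interval time.Duration) time.Duration {
	return time.Duration(tries * int(interval))
}

// timeoutSafe keeps the arithmetic in time.Duration (int64 on every target).
func timeoutSafe(tries int, interval time.Duration) time.Duration {
	return time.Duration(tries) * interval
}

func main() {
	fmt.Println(timeoutWraps(10, time.Second), timeoutSafe(10, time.Second))
}
```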
2024-08-13 09:25:18 -04:00
dependabot[bot]
8a6a0f0636 Bump the golang-x-dependencies group with 2 updates (#1190)
Bumps the golang-x-dependencies group with 2 updates: [golang.org/x/sync](https://github.com/golang/sync) and [golang.org/x/sys](https://github.com/golang/sys).


Updates `golang.org/x/sync` from 0.7.0 to 0.8.0
- [Commits](https://github.com/golang/sync/compare/v0.7.0...v0.8.0)

Updates `golang.org/x/sys` from 0.22.0 to 0.23.0
- [Commits](https://github.com/golang/sys/compare/v0.22.0...v0.23.0)

---
updated-dependencies:
- dependency-name: golang.org/x/sync
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/sys
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-08-07 11:58:46 -04:00
Wade Simmons
f5f6c269ac fix rare panic when local index collision happens (#1191)
A local index collision happens when two tunnels attempt to use the same
random int32 index ID. This is rare, and we have code to deal with it,
but we had a panic because we returned the wrong thing in this case.
This change should fix the panic.
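
A hypothetical sketch of the allocation pattern being described (not the actual hostmap code): pick a random index and retry on collision rather than handing back a value the caller can't use.

```go
package main

import (
	"crypto/rand"
	"encoding/binary"
	"errors"
	"fmt"
)

// indexes stands in for the table of local index IDs already in use.
var indexes = map[uint32]struct{}{}

// newLocalIndex retries on collision instead of returning the wrong thing.
func newLocalIndex() (uint32, error) {
	for i := 0; i < 32; i++ {
		var b [4]byte
		if _, err := rand.Read(b[:]); err != nil {
			return 0, err
		}
		idx := binary.BigEndian.Uint32(b[:])
		if _, taken := indexes[idx]; taken || idx == 0 {
			continue // collision (or reserved zero): try another random value
		}
		indexes[idx] = struct{}{}
		return idx, nil
	}
	return 0, errors.New("failed to allocate a unique local index")
}

func main() {
	idx, err := newLocalIndex()
	fmt.Println(idx, err)
}
```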
2024-08-07 11:53:32 -04:00
brad-defined
9a63fa0a07 Make some Nebula state programmatically available via control object (#1188) 2024-08-01 13:40:05 -04:00
Nate Brown
e264a0ff88 Switch most everything to netip in prep for ipv6 in the overlay (#1173) 2024-07-31 10:18:56 -05:00
dependabot[bot]
00458302ca Bump the golang-x-dependencies group with 4 updates (#1174)
Bumps the golang-x-dependencies group with 4 updates: [golang.org/x/crypto](https://github.com/golang/crypto), [golang.org/x/net](https://github.com/golang/net), [golang.org/x/sys](https://github.com/golang/sys) and [golang.org/x/term](https://github.com/golang/term).


Updates `golang.org/x/crypto` from 0.24.0 to 0.25.0
- [Commits](https://github.com/golang/crypto/compare/v0.24.0...v0.25.0)

Updates `golang.org/x/net` from 0.26.0 to 0.27.0
- [Commits](https://github.com/golang/net/compare/v0.26.0...v0.27.0)

Updates `golang.org/x/sys` from 0.21.0 to 0.22.0
- [Commits](https://github.com/golang/sys/compare/v0.21.0...v0.22.0)

Updates `golang.org/x/term` from 0.21.0 to 0.22.0
- [Commits](https://github.com/golang/term/compare/v0.21.0...v0.22.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/net
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/sys
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/term
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-07-29 11:42:33 -04:00
Wade Simmons
e6009b8491 github actions: use macos-latest (#1171)
macos-11 was deprecated and removed:

> The macos-11 label has been deprecated and will no longer be available after 28 June 2024.

We can just use macos-latest instead.
2024-07-02 11:50:51 -04:00
dependabot[bot]
b9aace1e58 Bump github.com/prometheus/client_golang from 1.19.0 to 1.19.1 (#1147)
Bumps [github.com/prometheus/client_golang](https://github.com/prometheus/client_golang) from 1.19.0 to 1.19.1.
- [Release notes](https://github.com/prometheus/client_golang/releases)
- [Changelog](https://github.com/prometheus/client_golang/blob/main/CHANGELOG.md)
- [Commits](https://github.com/prometheus/client_golang/compare/v1.19.0...v1.19.1)

---
updated-dependencies:
- dependency-name: github.com/prometheus/client_golang
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-06-24 14:54:51 -04:00
dependabot[bot]
a76723eaf5 Bump Apple-Actions/import-codesign-certs from 2 to 3 (#1146)
Bumps [Apple-Actions/import-codesign-certs](https://github.com/apple-actions/import-codesign-certs) from 2 to 3.
- [Release notes](https://github.com/apple-actions/import-codesign-certs/releases)
- [Commits](https://github.com/apple-actions/import-codesign-certs/compare/v2...v3)

---
updated-dependencies:
- dependency-name: Apple-Actions/import-codesign-certs
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-06-24 14:54:05 -04:00
Caleb Jasik
8109cf2170 Add punctuation to doc comment (#1164)
* Add punctuation to doc comment

* Fix list formatting inside `EncryptDanger` doc comment
2024-06-24 14:50:17 -04:00
Wade Simmons
97e9834f82 cleanup SK_MEMINFO vars (#1162)
We had to manually define these types before, but the latest release of
`golang.org/x/sys` adds these definitions:

- 6dfb94eaa3

Since we just updated with this PR, we can clean this up now:

- https://github.com/slackhq/nebula/pull/1161
2024-06-24 14:47:14 -04:00
dependabot[bot]
506ba5ab5b Bump github.com/miekg/dns from 1.1.59 to 1.1.61 (#1168)
Bumps [github.com/miekg/dns](https://github.com/miekg/dns) from 1.1.59 to 1.1.61.
- [Changelog](https://github.com/miekg/dns/blob/master/Makefile.release)
- [Commits](https://github.com/miekg/dns/compare/v1.1.59...v1.1.61)

---
updated-dependencies:
- dependency-name: github.com/miekg/dns
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-06-24 14:46:27 -04:00
dependabot[bot]
d372df56ab Bump google.golang.org/protobuf in the protobuf-dependencies group (#1167)
Bumps the protobuf-dependencies group with 1 update: google.golang.org/protobuf.


Updates `google.golang.org/protobuf` from 1.34.1 to 1.34.2

---
updated-dependencies:
- dependency-name: google.golang.org/protobuf
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: protobuf-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-06-24 14:45:52 -04:00
dependabot[bot]
40cfd00e87 Bump the golang-x-dependencies group with 4 updates (#1161)
Bumps the golang-x-dependencies group with 4 updates: [golang.org/x/crypto](https://github.com/golang/crypto), [golang.org/x/net](https://github.com/golang/net), [golang.org/x/sys](https://github.com/golang/sys) and [golang.org/x/term](https://github.com/golang/term).


Updates `golang.org/x/crypto` from 0.23.0 to 0.24.0
- [Commits](https://github.com/golang/crypto/compare/v0.23.0...v0.24.0)

Updates `golang.org/x/net` from 0.25.0 to 0.26.0
- [Commits](https://github.com/golang/net/compare/v0.25.0...v0.26.0)

Updates `golang.org/x/sys` from 0.20.0 to 0.21.0
- [Commits](https://github.com/golang/sys/compare/v0.20.0...v0.21.0)

Updates `golang.org/x/term` from 0.20.0 to 0.21.0
- [Commits](https://github.com/golang/term/compare/v0.20.0...v0.21.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/net
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/sys
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/term
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-06-10 16:08:43 -04:00
Wade Simmons
b14bad586a v1.9.3 (#1160)
Update CHANGELOG for Nebula v1.9.3
2024-06-06 13:17:07 -04:00
Wade Simmons
4c066d8c32 initialize messageCounter to 2 instead of verifying later (#1156)
Clean up the messageCounter checks added in #1154. Instead of checking that
messageCounter is still at 2, just initialize it to 2 and only increment for
non-handshake messages. Handshake packets will always be packets 1 and 2.
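
A rough sketch of the resulting invariant (illustrative types, not Nebula's ConnectionState): the counter starts at 2 because the handshake always consumes counters 1 and 2, and it is only advanced for data-phase messages.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

type connState struct {
	messageCounter atomic.Uint64
}

func newConnState() *connState {
	cs := &connState{}
	cs.messageCounter.Store(2) // handshake packets are always counters 1 and 2
	return cs
}

// nextCounter is only called for non-handshake messages.
func (cs *connState) nextCounter() uint64 {
	return cs.messageCounter.Add(1)
}

func main() {
	cs := newConnState()
	fmt.Println(cs.nextCounter(), cs.nextCounter()) // 3 4
}
```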
2024-06-06 13:03:07 -04:00
Wade Simmons
249ae41fec v1.9.2 (#1155)
Update CHANGELOG for Nebula v1.9.2
2024-06-03 15:50:02 -04:00
Wade Simmons
d9cae9e062 ensure messageCounter is set before handshake is complete (#1154)
Ensure we set messageCounter to 2 before the handshake is marked as
complete.
2024-06-03 15:40:51 -04:00
Wade Simmons
a92056a7db v1.9.1 (#1152)
Update CHANGELOG for Nebula v1.9.1
2024-05-29 14:06:46 -04:00
Wade Simmons
4eb1da0958 remove deadlock in GetOrHandshake (#1151)
We had a rare deadlock in GetOrHandshake because we kept the hostmap
lock while calling StartHandshake. StartHandshake can block
while sending to the lighthouse query worker channel, and that worker
needs to be able to grab the hostmap lock to do its work. Other calls
to StartHandshake don't hold the hostmap lock, so we should be able to
drop it here.

This lock was originally added with: https://github.com/slackhq/nebula/pull/954
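
A simplified sketch of the deadlock shape and the fix, using made-up types (this is not the actual hostmap/lighthouse code): holding a mutex across a channel send whose consumer needs that same mutex can deadlock, so the lock is dropped before the potentially blocking send.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

type node struct {
	mu        sync.Mutex
	known     map[string]bool
	queryChan chan string
}

// worker needs the same lock the sender may still be holding.
func (n *node) worker() {
	for ip := range n.queryChan {
		n.mu.Lock()
		n.known[ip] = true
		n.mu.Unlock()
	}
}

// getOrHandshake drops the lock before the send so the worker can make progress.
func (n *node) getOrHandshake(ip string) {
	n.mu.Lock()
	known := n.known[ip]
	n.mu.Unlock()
	if !known {
		n.queryChan <- ip // safe: no lock held while this blocks
	}
}

func main() {
	n := &node{known: map[string]bool{}, queryChan: make(chan string)}
	go n.worker()
	n.getOrHandshake("10.0.0.1")
	time.Sleep(50 * time.Millisecond)
	fmt.Println("queried without deadlocking")
}
```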
2024-05-29 12:52:52 -04:00
Wade Simmons
50b24c102e v1.9.0 (#1137)
Update CHANGELOG for Nebula v1.9.0

Co-authored-by: John Maguire <john@defined.net>
2024-05-08 10:31:24 -04:00
dependabot[bot]
c0130f8161 Bump the golang-x-dependencies group with 4 updates (#1138)
Bumps the golang-x-dependencies group with 4 updates: [golang.org/x/crypto](https://github.com/golang/crypto), [golang.org/x/net](https://github.com/golang/net), [golang.org/x/sys](https://github.com/golang/sys) and [golang.org/x/term](https://github.com/golang/term).


Updates `golang.org/x/crypto` from 0.22.0 to 0.23.0
- [Commits](https://github.com/golang/crypto/compare/v0.22.0...v0.23.0)

Updates `golang.org/x/net` from 0.24.0 to 0.25.0
- [Commits](https://github.com/golang/net/compare/v0.24.0...v0.25.0)

Updates `golang.org/x/sys` from 0.19.0 to 0.20.0
- [Commits](https://github.com/golang/sys/compare/v0.19.0...v0.20.0)

Updates `golang.org/x/term` from 0.19.0 to 0.20.0
- [Commits](https://github.com/golang/term/compare/v0.19.0...v0.20.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/net
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/sys
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/term
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-05-06 16:17:50 -04:00
dependabot[bot]
f19a28645e Bump google.golang.org/protobuf in the protobuf-dependencies group (#1139)
Bumps the protobuf-dependencies group with 1 update: google.golang.org/protobuf.


Updates `google.golang.org/protobuf` from 1.34.0 to 1.34.1

---
updated-dependencies:
- dependency-name: google.golang.org/protobuf
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: protobuf-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-05-06 16:17:05 -04:00
Jack Doan
fd1906b16f minor text fixes (#1135) 2024-05-03 20:43:40 -05:00
Wade Simmons
d6e4b88bb5 release: use download-action v4 in docker section (#1134)
We missed this upgrade in #1047
2024-05-03 11:35:55 -04:00
dependabot[bot]
18f69af455 Bump actions/download-artifact from 3 to 4 (#1047)
Bumps [actions/download-artifact](https://github.com/actions/download-artifact) from 3 to 4.
- [Release notes](https://github.com/actions/download-artifact/releases)
- [Commits](https://github.com/actions/download-artifact/compare/v3...v4)

---
updated-dependencies:
- dependency-name: actions/download-artifact
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-05-02 11:25:22 -04:00
dependabot[bot]
aa18d7fa4f Bump actions/upload-artifact from 3 to 4 (#1046)
* Bump actions/upload-artifact from 3 to 4

Bumps [actions/upload-artifact](https://github.com/actions/upload-artifact) from 3 to 4.
- [Release notes](https://github.com/actions/upload-artifact/releases)
- [Commits](https://github.com/actions/upload-artifact/compare/v3...v4)

---
updated-dependencies:
- dependency-name: actions/upload-artifact
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

* try to fix upload conflict

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Wade Simmons <wsimmons@slack-corp.com>
2024-05-02 11:24:58 -04:00
John Maguire
b5c3486796 Push Docker images as part of the release workflow (#1037) 2024-05-02 09:37:11 -04:00
dependabot[bot]
f39bfbb7fa Bump google.golang.org/protobuf in the protobuf-dependencies group (#1133)
Bumps the protobuf-dependencies group with 1 update: google.golang.org/protobuf.


Updates `google.golang.org/protobuf` from 1.33.0 to 1.34.0

---
updated-dependencies:
- dependency-name: google.golang.org/protobuf
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: protobuf-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-30 11:45:05 -04:00
Wade Simmons
4f4941e187 Add Vagrant based smoke tests (#1067)
* WIP smoke test freebsd

* fix bitrot

We now test that the firewall blocks inbound on host3 from host2

* WIP ipv6 test

* cleanup

* rename to make clear

* fix filename

* restore

* no sudo docker

* WIP

* WIP

* WIP

* WIP

* extra smoke tests

* WIP

* WIP

* carry over improvements made in smoke.sh

* more tests

* use generic/freebsd14

* cleanup from test

* smoke test openbsd-amd64

* add netbsd-amd64

* try to fix vagrant
2024-04-30 11:02:16 -04:00
fyl
5f17db5dfa Add support for LoongArch64 (#1003) 2024-04-30 09:55:44 -05:00
John Maguire
f31bab5f1a Add support for SSH CAs (#1098)
- Accept certs signed by trusted CAs
- Username must match the cert principal if set
- Any username can be used if cert principal is empty
- Don't allow removed pubkeys/CAs to be used after reload
2024-04-30 10:50:17 -04:00
kindknow
9cd944d320 chore: fix function name in comment (#1111) 2024-04-30 09:43:38 -05:00
John Maguire
f7db0eb5cc Remove Vagrant example (#1129) 2024-04-30 09:40:24 -05:00
dependabot[bot]
7e7d5e00ca Bump github.com/prometheus/client_golang from 1.18.0 to 1.19.0 (#1086)
Bumps [github.com/prometheus/client_golang](https://github.com/prometheus/client_golang) from 1.18.0 to 1.19.0.
- [Release notes](https://github.com/prometheus/client_golang/releases)
- [Changelog](https://github.com/prometheus/client_golang/blob/main/CHANGELOG.md)
- [Commits](https://github.com/prometheus/client_golang/compare/v1.18.0...v1.19.0)

---
updated-dependencies:
- dependency-name: github.com/prometheus/client_golang
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-30 10:30:18 -04:00
Wade Simmons
24f336ec56 switch off deprecated elliptic.Marshal (#1108)
elliptic.Marshal was deprecated; we can replace it with the ECDH methods
even though we aren't using ECDH here. See:

- f03fb147d7

We are still using elliptic.Unmarshal because this issue needs to be
resolved:

- https://github.com/golang/go/issues/63963
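
For illustration, a standalone example (not Nebula's cert code) showing that crypto/ecdh produces the same uncompressed-point encoding elliptic.Marshal used to, even when no key agreement is performed:

```go
package main

import (
	"crypto/ecdh"
	"crypto/rand"
	"fmt"
)

func main() {
	priv, err := ecdh.P256().GenerateKey(rand.Reader)
	if err != nil {
		panic(err)
	}
	pub := priv.PublicKey().Bytes() // 0x04 || X || Y, same bytes elliptic.Marshal produced
	fmt.Printf("public key: %d bytes, prefix %#x\n", len(pub), pub[0])
}
```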
2024-04-30 10:02:49 -04:00
John Maguire
d7f52dec41 Fix errant capitalisation in DNS TXT response (#1127)
Co-authored-by: Oliver Marriott <hello@omarriott.com>
2024-04-30 09:58:56 -04:00
NODA Kai
e54f9dd206 dns_server.go: parseQuery: set NXDOMAIN if there's no Answer to return (#845) 2024-04-30 09:56:57 -04:00
Andrew Kraut
df78158cfa Create service script for open-rc (#711) 2024-04-30 09:53:00 -04:00
Robin Candau
8b55caa15e Remove Arch nebula.service file (#1132) 2024-04-30 07:45:23 -04:00
Jon Rafkind
7ed9f2a688 add ssh command to print device info (#763) 2024-04-29 16:09:34 -05:00
Wade Simmons
3aca576b07 update to go1.22 (#981)
* update to go1.21

Since the first minor version update has already been released, we can
probably feel comfortable updating to go1.21. This version now enforces
that the go version on the system is compatible with the version
specified in go.mod, so we can remove the old logic around checking the
minimum version in the Makefile.

- https://go.dev/doc/go1.21#tools

> To improve forwards compatibility, Go 1.21 now reads the go line in a go.work or go.mod file as a strict minimum requirement: go 1.21.0 means that the workspace or module cannot be used with Go 1.20 or with Go 1.21rc1. This allows projects that depend on fixes made in later versions of Go to ensure that they are not used with earlier versions. It also gives better error reporting for projects that make use of new Go features: when the problem is that a newer Go version is needed, that problem is reported clearly, instead of attempting to build the code and printing errors about unresolved imports or syntax errors.

* update to go1.22

* bump gvisor

* fix merge conflicts

* use latest gvisor `go` branch

Need to use the latest commit on the `go` branch, see:

- https://github.com/google/gvisor?tab=readme-ov-file#using-go-get

* mod tidy

* more fixes

* give smoketest more time

Is this why it is failing?

* also a little more sleep here

---------

Co-authored-by: Jack Doan <me@jackdoan.com>
2024-04-29 16:44:42 -04:00
Nate Brown
a99618e95c Don't log invalid certificates (#1116) 2024-04-29 15:21:00 -05:00
Caleb Jasik
8e94eb974e Add suggested filenames for collected profiles in the ssh commands (#1109) 2024-04-29 15:20:46 -05:00
John Maguire
41e2e1de02 Remove Fedora nebula.service file (#1128) 2024-04-29 15:30:22 -04:00
dependabot[bot]
d95fb4a314 Bump the golang-x-dependencies group with 5 updates (#1110)
Bumps the golang-x-dependencies group with 5 updates:

| Package | From | To |
| --- | --- | --- |
| [golang.org/x/crypto](https://github.com/golang/crypto) | `0.21.0` | `0.22.0` |
| [golang.org/x/net](https://github.com/golang/net) | `0.22.0` | `0.24.0` |
| [golang.org/x/sync](https://github.com/golang/sync) | `0.6.0` | `0.7.0` |
| [golang.org/x/sys](https://github.com/golang/sys) | `0.18.0` | `0.19.0` |
| [golang.org/x/term](https://github.com/golang/term) | `0.18.0` | `0.19.0` |


Updates `golang.org/x/crypto` from 0.21.0 to 0.22.0
- [Commits](https://github.com/golang/crypto/compare/v0.21.0...v0.22.0)

Updates `golang.org/x/net` from 0.22.0 to 0.24.0
- [Commits](https://github.com/golang/net/compare/v0.22.0...v0.24.0)

Updates `golang.org/x/sync` from 0.6.0 to 0.7.0
- [Commits](https://github.com/golang/sync/compare/v0.6.0...v0.7.0)

Updates `golang.org/x/sys` from 0.18.0 to 0.19.0
- [Commits](https://github.com/golang/sys/compare/v0.18.0...v0.19.0)

Updates `golang.org/x/term` from 0.18.0 to 0.19.0
- [Commits](https://github.com/golang/term/compare/v0.18.0...v0.19.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/net
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/sync
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/sys
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/term
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-23 13:50:53 -04:00
dependabot[bot]
cdcea00669 Bump github.com/miekg/dns from 1.1.58 to 1.1.59 (#1126)
Bumps [github.com/miekg/dns](https://github.com/miekg/dns) from 1.1.58 to 1.1.59.
- [Changelog](https://github.com/miekg/dns/blob/master/Makefile.release)
- [Commits](https://github.com/miekg/dns/compare/v1.1.58...v1.1.59)

---
updated-dependencies:
- dependency-name: github.com/miekg/dns
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-23 11:08:08 -04:00
dependabot[bot]
9bd92a7fc2 Bump golang.org/x/net from 0.22.0 to 0.23.0 (#1123)
Bumps [golang.org/x/net](https://github.com/golang/net) from 0.22.0 to 0.23.0.
- [Commits](https://github.com/golang/net/compare/v0.22.0...v0.23.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-23 11:06:15 -04:00
Nate Brown
a5a07cc760 Allow :: in lighthouse.dns.host config (#1115) 2024-04-11 21:44:36 -05:00
Nate Brown
c1711bc9c5 Remove tcp rtt tracking from the firewall (#1114) 2024-04-11 21:44:22 -05:00
Wade Simmons
7efa750aef avoid deadlock in lighthouse queryWorker (#1112)
* avoid deadlock in lighthouse queryWorker

If the lighthouse queryWorker tries to call StartHandshake on
a lighthouse vpnIp, we can deadlock on the handshake_manager lock. This
change drops the handshake_manager lock before we send on the lighthouse
queryChan (which could block), and also avoids sending to the channel if
this is a lighthouse IP itself.

* need to hold lock during cacheCb
2024-04-11 17:00:01 -04:00
Nate Brown
a390125935 Support reloading preferred_ranges (#1043) 2024-04-03 22:14:51 -05:00
Nate Brown
bbb15f8cb1 Unsafe route reload (#1083) 2024-03-28 15:17:28 -05:00
John Maguire
8b68a08723 Fix "any" firewall rules for unsafe_routes (#1099) 2024-03-28 15:17:12 -05:00
dependabot[bot]
f8fb9759e9 Bump the golang-x-dependencies group with 1 update (#1094)
Bumps the golang-x-dependencies group with 1 update: [golang.org/x/net](https://github.com/golang/net).


Updates `golang.org/x/net` from 0.21.0 to 0.22.0
- [Commits](https://github.com/golang/net/compare/v0.21.0...v0.22.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-03-22 12:58:13 -04:00
dependabot[bot]
1f1d660200 Bump google.golang.org/protobuf from 1.32.0 to 1.33.0 (#1092)
Bumps google.golang.org/protobuf from 1.32.0 to 1.33.0.

---
updated-dependencies:
- dependency-name: google.golang.org/protobuf
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-03-18 11:12:13 -04:00
dependabot[bot]
279265058f Bump github.com/stretchr/testify from 1.8.4 to 1.9.0 (#1087)
Bumps [github.com/stretchr/testify](https://github.com/stretchr/testify) from 1.8.4 to 1.9.0.
- [Release notes](https://github.com/stretchr/testify/releases)
- [Commits](https://github.com/stretchr/testify/compare/v1.8.4...v1.9.0)

---
updated-dependencies:
- dependency-name: github.com/stretchr/testify
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-03-18 11:06:18 -04:00
dependabot[bot]
2a778de07e Bump github.com/flynn/noise from 1.0.1 to 1.1.0 (#1072)
Bumps [github.com/flynn/noise](https://github.com/flynn/noise) from 1.0.1 to 1.1.0.
- [Commits](https://github.com/flynn/noise/compare/v1.0.1...v1.1.0)

---
updated-dependencies:
- dependency-name: github.com/flynn/noise
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-03-18 10:47:53 -04:00
dependabot[bot]
2affd371e3 Bump the golang-x-dependencies group with 4 updates (#1085)
Bumps the golang-x-dependencies group with 4 updates: [golang.org/x/crypto](https://github.com/golang/crypto), [golang.org/x/net](https://github.com/golang/net), [golang.org/x/sys](https://github.com/golang/sys) and [golang.org/x/term](https://github.com/golang/term).


Updates `golang.org/x/crypto` from 0.18.0 to 0.21.0
- [Commits](https://github.com/golang/crypto/compare/v0.18.0...v0.21.0)

Updates `golang.org/x/net` from 0.20.0 to 0.21.0
- [Commits](https://github.com/golang/net/compare/v0.20.0...v0.21.0)

Updates `golang.org/x/sys` from 0.16.0 to 0.18.0
- [Commits](https://github.com/golang/sys/compare/v0.16.0...v0.18.0)

Updates `golang.org/x/term` from 0.16.0 to 0.18.0
- [Commits](https://github.com/golang/term/compare/v0.16.0...v0.18.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/net
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/sys
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/term
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-03-18 10:43:17 -04:00
Nate Brown
cc8b3cc961 Add config option for local_cidr control 2024-02-15 11:46:45 -06:00
Nate Brown
f346cf4109 At the end 2024-02-05 10:23:10 -06:00
Nate Brown
8f44f22c37 In the middle 2024-02-05 10:23:10 -06:00
John Maguire
8822f1366c Add link to logs guide in bug report template (#1065) 2024-02-01 12:40:23 -05:00
brad-defined
e3f5a129c1 Return full error context from ContextualError.Error() (#1069) 2024-01-31 15:31:46 -05:00
mrx
0f0534d739 Fix UDP listener on IPv4-only Linux (#787)
On some systems, IPv6 is disabled (for example, the CIS benchmark recommends disabling it when not in use), but currently all UDP connections use AF_INET6 sockets.
When we bind an AF_INET6 socket to an address like ::ffff:1.2.3.4 (IPv4 addresses are parsed by net.ParseIP this way), we can't send or receive IPv6 packets anyway, so this will not break any scenarios.
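
A loose illustration of the address handling involved, using the net package (the real listener code differs): net.ParseIP keeps IPv4 addresses in the 16-byte mapped form, and checking To4() lets us ask for an AF_INET ("udp4") socket when the configured address is really IPv4.

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	ip := net.ParseIP("127.0.0.1") // ParseIP stores IPv4 in a 16-byte (mapped) representation
	network := "udp"
	if ip.To4() != nil {
		network = "udp4" // request an AF_INET socket when the address is really IPv4
	}
	l, err := net.ListenUDP(network, &net.UDPAddr{IP: ip, Port: 0})
	if err != nil {
		panic(err)
	}
	defer l.Close()
	fmt.Println(network, l.LocalAddr())
}
```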

---------

Co-authored-by: Wade Simmons <wsimmons@slack-corp.com>
2024-01-30 15:08:14 -05:00
dependabot[bot]
c5a403b7a8 Bump github.com/vishvananda/netlink (#1034)
Bumps [github.com/vishvananda/netlink](https://github.com/vishvananda/netlink) from 1.1.1-0.20211118161826-650dca95af54 to 1.2.1-beta.2.
- [Release notes](https://github.com/vishvananda/netlink/releases)
- [Commits](https://github.com/vishvananda/netlink/commits/v1.2.1-beta.2)

---
updated-dependencies:
- dependency-name: github.com/vishvananda/netlink
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-23 10:40:29 -05:00
dependabot[bot]
f23d328561 Bump the protobuf-dependencies group with 1 update (#1053)
Bumps the protobuf-dependencies group with 1 update: google.golang.org/protobuf.


Updates `google.golang.org/protobuf` from 1.31.0 to 1.32.0

---
updated-dependencies:
- dependency-name: google.golang.org/protobuf
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: protobuf-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-23 10:39:53 -05:00
dependabot[bot]
a977ee653d Bump github.com/miekg/dns from 1.1.57 to 1.1.58 (#1063)
Bumps [github.com/miekg/dns](https://github.com/miekg/dns) from 1.1.57 to 1.1.58.
- [Changelog](https://github.com/miekg/dns/blob/master/Makefile.release)
- [Commits](https://github.com/miekg/dns/compare/v1.1.57...v1.1.58)

---
updated-dependencies:
- dependency-name: github.com/miekg/dns
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-23 10:37:53 -05:00
Lingfeng Zhang
1f83d1758d Support inlined sshd host key (#1054) 2024-01-22 13:58:44 -05:00
dependabot[bot]
3210198276 Bump github.com/prometheus/client_golang from 1.17.0 to 1.18.0 (#1055)
Bumps [github.com/prometheus/client_golang](https://github.com/prometheus/client_golang) from 1.17.0 to 1.18.0.
- [Release notes](https://github.com/prometheus/client_golang/releases)
- [Changelog](https://github.com/prometheus/client_golang/blob/main/CHANGELOG.md)
- [Commits](https://github.com/prometheus/client_golang/compare/v1.17.0...v1.18.0)

---
updated-dependencies:
- dependency-name: github.com/prometheus/client_golang
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-12 10:26:39 -05:00
dependabot[bot]
0cef634635 Bump github.com/miekg/dns from 1.1.56 to 1.1.57 (#1022)
Bumps [github.com/miekg/dns](https://github.com/miekg/dns) from 1.1.56 to 1.1.57.
- [Changelog](https://github.com/miekg/dns/blob/master/Makefile.release)
- [Commits](https://github.com/miekg/dns/compare/v1.1.56...v1.1.57)

---
updated-dependencies:
- dependency-name: github.com/miekg/dns
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-12 09:57:38 -05:00
dependabot[bot]
637dc18bf8 Bump the golang-x-dependencies group with 3 updates (#1059)
Bumps the golang-x-dependencies group with 3 updates: [golang.org/x/crypto](https://github.com/golang/crypto), [golang.org/x/net](https://github.com/golang/net) and [golang.org/x/sync](https://github.com/golang/sync).


Updates `golang.org/x/crypto` from 0.17.0 to 0.18.0
- [Commits](https://github.com/golang/crypto/compare/v0.17.0...v0.18.0)

Updates `golang.org/x/net` from 0.19.0 to 0.20.0
- [Commits](https://github.com/golang/net/compare/v0.19.0...v0.20.0)

Updates `golang.org/x/sync` from 0.5.0 to 0.6.0
- [Commits](https://github.com/golang/sync/compare/v0.5.0...v0.6.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/net
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/sync
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-12 09:55:41 -05:00
Wade Simmons
ea36949d8a v1.8.2 (#1058)
Update CHANGELOG for Nebula v1.8.2
2024-01-08 15:40:04 -05:00
Wade Simmons
0564d0a2cf when listen.port is zero, fix multiple routines (#1057)
This used to work correctly when the multiple routines work was first
added in #382, but an important part, discovering the listen port before
opening the other listeners on the same socket, was lost in this
PR: #653.

This change should fix the regression and allow multiple routines to
work correctly when listen.port is set to `0`.

Thanks to @rawdigits for tracking down and discovering this regression.
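
A minimal sketch of the port-discovery step described above (generic net package code, not the actual multi-routine listener): open the first socket on port 0, read back the port the kernel picked, and have the remaining routines bind to that concrete port.

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	first, err := net.ListenUDP("udp", &net.UDPAddr{Port: 0})
	if err != nil {
		panic(err)
	}
	defer first.Close()

	port := first.LocalAddr().(*net.UDPAddr).Port
	fmt.Println("kernel chose port", port)
	// Additional routines would now bind to `port` (with SO_REUSEPORT set)
	// instead of 0, so they all share the same discovered port.
}
```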
2024-01-08 13:49:44 -05:00
nezu
b22ba6eb49 Update Arch Linux package link (#1024) 2023-12-27 10:38:24 -06:00
Wade Simmons
3a221812f6 test: build all non-main modules for mobile (#1036)
Ensure that we don't break the build for mobile by doing a `go build`
for all of the non-main modules in the repo. Should hopefully catch
issues like #1035 sooner.
2023-12-21 11:59:21 -05:00
dependabot[bot]
927ff4cc03 Bump github.com/flynn/noise from 1.0.0 to 1.0.1 (#1038)
Bumps [github.com/flynn/noise](https://github.com/flynn/noise) from 1.0.0 to 1.0.1.
- [Commits](https://github.com/flynn/noise/compare/v1.0.0...v1.0.1)

---
updated-dependencies:
- dependency-name: github.com/flynn/noise
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-21 11:57:53 -05:00
Wade Simmons
e5945a60aa v1.8.1 (#1049)
Update CHANGELOG for Nebula v1.8.1
2023-12-19 15:11:25 -05:00
Nate Brown
072edd56b3 Fix re-entrant GetOrHandshake issues (#1044) 2023-12-19 11:58:31 -06:00
dependabot[bot]
beb5f6bddc Bump golang.org/x/crypto from 0.16.0 to 0.17.0 (#1048)
Bumps [golang.org/x/crypto](https://github.com/golang/crypto) from 0.16.0 to 0.17.0.
- [Commits](https://github.com/golang/crypto/compare/v0.16.0...v0.17.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-19 10:57:09 -05:00
dependabot[bot]
8be9792059 Bump actions/setup-go from 4 to 5 (#1039)
Bumps [actions/setup-go](https://github.com/actions/setup-go) from 4 to 5.
- [Release notes](https://github.com/actions/setup-go/releases)
- [Commits](https://github.com/actions/setup-go/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/setup-go
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-13 22:45:09 -06:00
John Maguire
af2fc48378 Fix mobile builds (#1035) 2023-12-06 16:18:21 -05:00
Wade Simmons
1d2f95e718 v1.8.0 (#1017)
Update CHANGELOG for Nebula v1.8.0
2023-12-06 14:38:58 -05:00
Lars Lehtonen
3a8743d511 cmd/nebula-cert: fix clobbered error (#1032)
* cmd/nebula-cert: fix clobbered error

Signed-off-by: Lars Lehtonen <lars.lehtonen@gmail.com>

* apply suggestions from Nate

This makes it much clearer what is happening in the code

---------

Signed-off-by: Lars Lehtonen <lars.lehtonen@gmail.com>
Co-authored-by: Wade Simmons <wsimmons@slack-corp.com>
2023-12-06 13:20:49 -05:00
Dave Russell
0209402942 SIGHUP is only useful when config was loaded from a file (#1030)
Have (*config.C).CatchHUP() return early when there is no file
path available from which to reload.
This will allow wrapping services to manage their own signal
trapping (which is particularly important if they've used
config from a string).
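
A hypothetical sketch of the behaviour (not the actual (*config.C).CatchHUP implementation): skip installing the SIGHUP handler when the config did not come from a file, leaving the signal free for the embedding service.

```go
package main

import (
	"fmt"
	"os"
	"os/signal"
	"syscall"
)

// catchHUP installs a reload-on-SIGHUP handler only when there is a file to
// re-read; with config loaded from a string there is nothing to reload.
func catchHUP(path string, reload func()) {
	if path == "" {
		return // leave SIGHUP for the wrapping service to handle
	}
	ch := make(chan os.Signal, 1)
	signal.Notify(ch, syscall.SIGHUP)
	go func() {
		for range ch {
			reload()
		}
	}()
}

func main() {
	catchHUP("", func() { fmt.Println("reload") })
	fmt.Println("no SIGHUP handler installed; config came from a string")
}
```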
2023-12-06 10:13:38 -05:00
dependabot[bot]
fb55f5b762 Bump the golang-x-dependencies group with 3 updates (#1028)
Bumps the golang-x-dependencies group with 3 updates: [golang.org/x/crypto](https://github.com/golang/crypto), [golang.org/x/net](https://github.com/golang/net) and [golang.org/x/sync](https://github.com/golang/sync).


Updates `golang.org/x/crypto` from 0.14.0 to 0.16.0
- [Commits](https://github.com/golang/crypto/compare/v0.14.0...v0.16.0)

Updates `golang.org/x/net` from 0.17.0 to 0.19.0
- [Commits](https://github.com/golang/net/compare/v0.17.0...v0.19.0)

Updates `golang.org/x/sync` from 0.3.0 to 0.5.0
- [Commits](https://github.com/golang/sync/compare/v0.3.0...v0.5.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/net
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/sync
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-04 11:12:52 -05:00
Ben Ritcey
01cddb8013 Added firewall.rules.hash metric (#1010)
* Added firewall.rules.hash metric

Added a FNV-1 hash of the firewall rules as a Prometheus value.

* Switch FNV hash to int64, include both hashes in log messages

* Use a uint32 for the FNV hash

Let go-metrics cast the uint32 to an int64, so it won't be lossy
when it eventually emits a float64 Prometheus metric.
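
A rough sketch of the metric value computation (illustrative, not the exact Nebula code): FNV-1 over the serialized rules, with the uint32 widened to int64 so a gauge can carry it losslessly before Prometheus turns it into a float64.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// rulesHash returns a stable fingerprint of the firewall rules. fnv.New32 is
// FNV-1; widening the uint32 result to int64 is lossless, unlike squeezing a
// full 64-bit hash into a float64 sample.
func rulesHash(serializedRules string) int64 {
	h := fnv.New32()
	h.Write([]byte(serializedRules))
	return int64(h.Sum32())
}

func main() {
	fmt.Println(rulesHash("inbound: icmp any; outbound: any any"))
}
```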
2023-11-28 11:56:47 -05:00
Tristan Rice
1083279a45 add gvisor based service library (#965)
* add service/ library
2023-11-21 11:50:18 -05:00
Wade Simmons
fe16ea566d firewall reject packets: cleanup error cases (#957) 2023-11-13 12:43:51 -06:00
Nate Brown
3356e03d85 Default pki.disconnect_invalid to true and make it reloadable (#859) 2023-11-13 12:39:38 -06:00
dependabot[bot]
f41db52560 Bump the golang-x-dependencies group with 1 update (#1006)
Bumps the golang-x-dependencies group with 1 update: [golang.org/x/sys](https://github.com/golang/sys).

- [Commits](https://github.com/golang/sys/compare/v0.13.0...v0.14.0)

---
updated-dependencies:
- dependency-name: golang.org/x/sys
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-13 07:58:45 -08:00
Nate Brown
5181cb0474 Use generics for CIDRTrees to avoid casting issues (#1004) 2023-11-02 17:05:08 -05:00
Nate Brown
a44e1b8b05 Clean up a hostinfo to reduce memory usage (#955) 2023-11-02 16:53:59 -05:00
guangwu
276978377a chore: remove refs to deprecated io/ioutil (#987)
Signed-off-by: guoguangwu <guoguangwu@magic-shield.com>
2023-10-31 10:35:13 -04:00
dependabot[bot]
777eb96aea Bump github.com/prometheus/client_golang from 1.16.0 to 1.17.0 (#984)
Bumps [github.com/prometheus/client_golang](https://github.com/prometheus/client_golang) from 1.16.0 to 1.17.0.
- [Release notes](https://github.com/prometheus/client_golang/releases)
- [Changelog](https://github.com/prometheus/client_golang/blob/main/CHANGELOG.md)
- [Commits](https://github.com/prometheus/client_golang/compare/v1.16.0...v1.17.0)

---
updated-dependencies:
- dependency-name: github.com/prometheus/client_golang
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-10-31 10:33:04 -04:00
Wade Simmons
0912ef14f4 github actions smoke-test: run with data race detector (#988)
Run the github actions smoke tests with data race detector enabled, so
we can detect if a PR introduces a simple data race.
2023-10-31 10:32:39 -04:00
Lars Lehtonen
77a8ce1712 main: fix dropped error (#1002)
This isn't an actual issue because the current implementation of NewSSHServer never returns an error (https://github.com/slackhq/nebula/blob/v1.7.2/sshd/server.go#L56), but it's still good to fix so there are no surprises in the future.
2023-10-31 10:32:08 -04:00
John Maguire
87b628ba24 Fix truncated comment in config.yml (#999) 2023-10-27 08:39:34 -04:00
Nate Brown
50d6a1e8ca QueryServer needs to be done outside of the lock (#996) 2023-10-17 15:43:51 -05:00
dependabot[bot]
e78fe0b9ef Bump golang.org/x/net from 0.15.0 to 0.17.0 (#990)
Bumps [golang.org/x/net](https://github.com/golang/net) from 0.15.0 to 0.17.0.
- [Commits](https://github.com/golang/net/compare/v0.15.0...v0.17.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-10-16 13:28:59 -04:00
Nate Brown
5fccbb8676 Retry wintun creation (#985) 2023-10-16 10:06:43 -05:00
dependabot[bot]
c289c7a7ca Bump github.com/miekg/dns from 1.1.55 to 1.1.56 (#979)
Bumps [github.com/miekg/dns](https://github.com/miekg/dns) from 1.1.55 to 1.1.56.
- [Changelog](https://github.com/miekg/dns/blob/master/Makefile.release)
- [Commits](https://github.com/miekg/dns/compare/v1.1.55...v1.1.56)

---
updated-dependencies:
- dependency-name: github.com/miekg/dns
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-09-22 09:48:26 -04:00
dependabot[bot]
e3fbfbfd4d Bump golang.org/x/net from 0.14.0 to 0.15.0 (#977)
Bumps [golang.org/x/net](https://github.com/golang/net) from 0.14.0 to 0.15.0.
- [Commits](https://github.com/golang/net/compare/v0.14.0...v0.15.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-09-22 09:47:45 -04:00
dependabot[bot]
282ca4368e Bump golang.org/x/crypto from 0.12.0 to 0.13.0 (#976)
Bumps [golang.org/x/crypto](https://github.com/golang/crypto) from 0.12.0 to 0.13.0.
- [Commits](https://github.com/golang/crypto/compare/v0.12.0...v0.13.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-09-22 09:47:00 -04:00
Wade Simmons
280fa026ea smoke-test: don't assume docker needs sudo (#958)
Let the host deal with this detail if necessary
2023-09-07 13:57:41 -04:00
Lars Lehtonen
dbdb48f182 cert: fix dropped errors (#961) 2023-09-07 13:54:01 -04:00
Nate Brown
f7e392995a Fix rebind to not put the socket in blocking mode (#972) 2023-09-07 11:56:09 -05:00
dependabot[bot]
d271df8da8 Bump golang.org/x/term from 0.11.0 to 0.12.0 (#967)
Bumps [golang.org/x/term](https://github.com/golang/term) from 0.11.0 to 0.12.0.
- [Commits](https://github.com/golang/term/compare/v0.11.0...v0.12.0)

---
updated-dependencies:
- dependency-name: golang.org/x/term
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-09-05 12:47:55 -04:00
dependabot[bot]
eea5e6a5df Bump actions/checkout from 3 to 4 (#969)
Bumps [actions/checkout](https://github.com/actions/checkout) from 3 to 4.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v3...v4)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-09-05 11:43:56 -04:00
dependabot[bot]
790268a176 Bump golang.org/x/sys from 0.11.0 to 0.12.0 (#968)
Bumps [golang.org/x/sys](https://github.com/golang/sys) from 0.11.0 to 0.12.0.
- [Commits](https://github.com/golang/sys/compare/v0.11.0...v0.12.0)

---
updated-dependencies:
- dependency-name: golang.org/x/sys
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-09-05 11:42:08 -04:00
brad-defined
06b480e177 Fix relay migration (#964)
* Fix for the relay migration issue on rehandshake. On rehandshake, the relay tunnel doesn't migrate to the new hostinfo object correctly, due to an incorrect Nebula IP sent in the CreateRelayRequest message.
* Add a test for this case

---------

Co-authored-by: Nate Brown <nbrown.us@gmail.com>
2023-09-05 09:29:27 -04:00
Nate Brown
076ebc6c6e Simplify getting a hostinfo or starting a handshake with one (#954) 2023-08-21 18:51:45 -05:00
Nate Brown
7edcf620c0 We only need the certificate in ConnectionState (#953) 2023-08-21 14:11:06 -05:00
Nate Brown
5a131b2975 Combine ca, cert, and key handling (#952) 2023-08-14 21:32:40 -05:00
Nate Brown
223cc6e660 Limit how often a busy tunnel can requery the lighthouse (#940)
Co-authored-by: Wade Simmons <wadey@slack-corp.com>
2023-08-08 13:26:41 -05:00
Wade Simmons
5671c6607c dependabot: group together common deps (#950)
Group together deps that are often updated together.

- https://docs.github.com/en/code-security/dependabot/dependabot-version-updates/configuration-options-for-the-dependabot.yml-file#groups
2023-08-08 13:15:42 -04:00
dependabot[bot]
7ecafbe61d Bump golang.org/x/net from 0.13.0 to 0.14.0 (#947)
Bumps [golang.org/x/net](https://github.com/golang/net) from 0.13.0 to 0.14.0.
- [Commits](https://github.com/golang/net/compare/v0.13.0...v0.14.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-08 10:04:46 -05:00
dependabot[bot]
546eb3bfbc Bump golang.org/x/crypto from 0.11.0 to 0.12.0 (#949)
Bumps [golang.org/x/crypto](https://github.com/golang/crypto) from 0.11.0 to 0.12.0.
- [Commits](https://github.com/golang/crypto/compare/v0.11.0...v0.12.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-07 21:28:06 -05:00
dependabot[bot]
7364d99e34 Bump golang.org/x/term from 0.10.0 to 0.11.0 (#946)
Bumps [golang.org/x/term](https://github.com/golang/term) from 0.10.0 to 0.11.0.
- [Commits](https://github.com/golang/term/compare/v0.10.0...v0.11.0)

---
updated-dependencies:
- dependency-name: golang.org/x/term
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-07 21:07:30 -05:00
dependabot[bot]
83b6dc7b16 Bump golang.org/x/net from 0.12.0 to 0.13.0 (#943)
Bumps [golang.org/x/net](https://github.com/golang/net) from 0.12.0 to 0.13.0.
- [Commits](https://github.com/golang/net/compare/v0.12.0...v0.13.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-02 14:28:32 -04:00
Wade Simmons
3d0da7c859 update mergo to 1.0.0 (#941)
The mergo package has moved to a vanity URL. This causes fun issues with
dependabot. Update to the new release:

- https://github.com/darccio/mergo/releases/tag/v1.0.0
- https://github.com/darccio/mergo/compare/v0.3.15...v1.0.0
2023-08-02 14:00:20 -04:00
Caleb Jasik
ed00f5d530 Remove unused config code (last edited 4yrs ago) (#938) 2023-07-31 15:59:20 -05:00
dependabot[bot]
38e56a4858 Bump golang.org/x/net from 0.9.0 to 0.12.0 (#931) 2023-07-27 15:43:16 -05:00
dependabot[bot]
fce93ccb54 Bump google.golang.org/protobuf from 1.30.0 to 1.31.0 (#930) 2023-07-27 15:42:33 -05:00
dependabot[bot]
0d715effbc Bump Apple-Actions/import-codesign-certs from 1 to 2 (#923) 2023-07-27 15:31:36 -05:00
dependabot[bot]
0c003b64f1 Bump golang.org/x/term from 0.8.0 to 0.10.0 (#928) 2023-07-27 14:38:36 -05:00
Nate Brown
14d0106716 Send the lh update worker into its own routine instead of taking over the reload routine (#935) 2023-07-27 14:38:10 -05:00
dependabot[bot]
959b015b3b Bump github.com/sirupsen/logrus from 1.9.0 to 1.9.3 (#933) 2023-07-27 14:36:36 -05:00
Nate Brown
0bffa76b5e Build for openbsd (#812) 2023-07-27 14:27:35 -05:00
c0repwn3r
03e70210a5 Add support for NetBSD (#916) 2023-07-27 13:44:47 -05:00
Nate Brown
9c6592b159 Guard e2e udp and tun channels when closed (#934) 2023-07-26 12:52:14 -05:00
dependabot[bot]
e5af94e27a Bump github.com/prometheus/client_golang from 1.15.1 to 1.16.0 (#927)
Bumps [github.com/prometheus/client_golang](https://github.com/prometheus/client_golang) from 1.15.1 to 1.16.0.
- [Release notes](https://github.com/prometheus/client_golang/releases)
- [Changelog](https://github.com/prometheus/client_golang/blob/main/CHANGELOG.md)
- [Commits](https://github.com/prometheus/client_golang/compare/v1.15.1...v1.16.0)

---
updated-dependencies:
- dependency-name: github.com/prometheus/client_golang
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-24 13:56:09 -04:00
dependabot[bot]
96f51f78ea Bump golang.org/x/sys from 0.8.0 to 0.10.0 (#926)
Bumps [golang.org/x/sys](https://github.com/golang/sys) from 0.8.0 to 0.10.0.
- [Commits](https://github.com/golang/sys/compare/v0.8.0...v0.10.0)

---
updated-dependencies:
- dependency-name: golang.org/x/sys
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-24 13:53:39 -04:00
Nate Brown
a10baeee92 Pull hostmap and pending hostmap apart, remove unused functions (#843) 2023-07-24 12:37:52 -05:00
dependabot[bot]
52c9e360e7 Bump github.com/miekg/dns from 1.1.54 to 1.1.55 (#925)
Bumps [github.com/miekg/dns](https://github.com/miekg/dns) from 1.1.54 to 1.1.55.
- [Changelog](https://github.com/miekg/dns/blob/master/Makefile.release)
- [Commits](https://github.com/miekg/dns/compare/v1.1.54...v1.1.55)

---
updated-dependencies:
- dependency-name: github.com/miekg/dns
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-24 12:52:29 -04:00
dependabot[bot]
8caaff7109 Bump github.com/stretchr/testify from 1.8.2 to 1.8.4 (#924)
Bumps [github.com/stretchr/testify](https://github.com/stretchr/testify) from 1.8.2 to 1.8.4.
- [Release notes](https://github.com/stretchr/testify/releases)
- [Commits](https://github.com/stretchr/testify/compare/v1.8.2...v1.8.4)

---
updated-dependencies:
- dependency-name: github.com/stretchr/testify
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-24 12:51:31 -04:00
Nate Brown
1e3c155896 Attempt to notify systemd of service readiness on linux (#929) 2023-07-24 11:30:18 -05:00
Wade Simmons
f5db03c834 add dependabot config (#922)
This should give us PRs weekly with dependency updates, and also let us
manually check for updates when needed.

- https://docs.github.com/en/code-security/dependabot/dependabot-version-updates/configuration-options-for-the-dependabot.yml-file
2023-07-21 17:21:58 -04:00
Nate Brown
c5ce945852 Update README to include a link to go install docs (#919) 2023-07-20 21:30:38 -05:00
John Maguire
7e380bde7e Document new DNS config options (#879) 2023-07-10 15:19:05 -04:00
Nate Brown
a3e59a38ef Use registered io on Windows when possible (#905) 2023-07-10 12:43:48 -05:00
John Maguire
8ba5d64dbc Add support for naming FreeBSD tun devices (#903) 2023-06-22 12:13:31 -04:00
Nate Brown
3bbf5f4e67 Use an interface for udp conns (#901) 2023-06-14 10:48:52 -05:00
Wade Simmons
928731acfe fix up the release workflow (#891)
actions/create-release is deprecated, just switch to using `gh` cli.
This is actually much easier anyways!
2023-06-14 11:45:01 -04:00
Nate Brown
57eb80e9fb v1.7.2 (#887)
Update CHANGELOG for Nebula v1.7.2
2023-06-01 11:05:07 -04:00
brad-defined
96f4dcaab8 Fix reconfig freeze attempting to send to an unbuffered, unread channel (#886)
* Fixes a reconfig freeze where the reconfig attempts to send to an unbuffered channel with no readers.
Only create the stop channel when a DNS goroutine is created, and only send when the channel exists.
Buffer it to size 1 so that the stop message can be sent immediately even if the goroutine is busy doing DNS lookups.
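
A small sketch of the channel fix (illustrative only): with a one-element buffer the reload path can post the stop message immediately, even while the DNS goroutine is busy, instead of blocking forever on an unbuffered send.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	stop := make(chan struct{}, 1) // buffered: the send below never blocks

	go func() {
		time.Sleep(100 * time.Millisecond) // pretend we're mid DNS lookup
		<-stop
		fmt.Println("dns goroutine stopped")
	}()

	stop <- struct{}{} // returns immediately; the goroutine picks it up later
	time.Sleep(200 * time.Millisecond)
}
```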
2023-05-31 16:05:46 -04:00
Wade Simmons
6d8c5f437c GitHub actions update setup-go (#881)
This does caching for us, so we can remove our manual caching of modules
2023-05-23 13:24:33 -04:00
John Maguire
165b671e70 v1.7.1 (#878)
Update CHANGELOG for Nebula v1.7.1
2023-05-18 15:39:24 -04:00
brad-defined
6be0bad68a Fix static_host_map DNS lookup Linux issue - put v4 addr into v6 slice (#877) 2023-05-18 14:13:32 -04:00
Wade Simmons
7ae3cd25f8 v1.7.0 (#870)
Update CHANGELOG for Nebula v1.7.0
2023-05-17 11:02:53 -04:00
Wade Simmons
9a7ed57a3f Cache cert verification methods (#871)
* cache cert verification

CheckSignature and Verify are expensive methods, and certificates are
static. Cache the results.

* use atomics

* make sure public key bytes match

* add VerifyWithCache and ResetCache

* cleanup

* use VerifyWithCache

* doc
2023-05-17 10:14:26 -04:00
Wade Simmons
eb9f22a8fa fix mismerge of P256 and encrypted private keys (#869)
The private key length is checked in a switch statement below these
lines, these lines should have been removed.
2023-05-09 14:05:55 -04:00
Nate Brown
54a8499c7b Fix go vet (#868) 2023-05-09 11:01:30 -05:00
Wade Simmons
419aaf2e36 issue templates: remove Report Security Vulnerability (#867)
This is redundant as Github automatically adds a section for this near the top.
2023-05-09 11:37:48 -04:00
Ilya Lukyanov
1701087035 Add destination CIDR checking (#507) 2023-05-09 10:37:23 -05:00
Nate Brown
a9cb2e06f4 Add ability to respect the system route table for unsafe route on linux (#839) 2023-05-09 10:36:55 -05:00
Wade Simmons
115b4b70b1 add SECURITY.md (#864)
* add SECURITY.md

Fixes: #699

* add Security mention to New issue template

* cleanup
2023-05-09 11:25:21 -04:00
Wade Simmons
0707caedb4 document P256 and BoringCrypto (#865)
* document P256 and BoringCrypto

Some basic descriptions of P256 and BoringCrypto added to the bottom of
README.md so that their purpose is not a mystery.

* typo
2023-05-09 11:24:52 -04:00
brad-defined
bd9cc01d62 Dns static lookerupper (#796)
* Support lighthouse DNS names, and regularly resolve the name in a background goroutine to discover DNS updates.
2023-05-09 11:22:08 -04:00
Nate Brown
d1f786419c Try rehandshaking a main hostinfo after releasing hostmap locks (#863) 2023-05-08 14:43:03 -05:00
Wade Simmons
31ed9269d7 add test for GOEXPERIMENT=boringcrypto (#861)
* add test for GOEXPERIMENT=boringcrypto

* fix NebulaCertificate.Sign

Set the PublicKey field in a more compatible way for the tests. The
current method grabs the public key from the certificate, but the
correct thing to do is to derive it from the private key. Either way,
it doesn't really matter, as I don't think the Sign method actually even
uses the PublicKey field.

* assert boring

* cleanup tests
2023-05-08 13:27:01 -04:00
Nate Brown
48eb63899f Have lighthouses ack updates to reduce test packet traffic (#851) 2023-05-05 14:44:03 -05:00
Nate Brown
b26c13336f Fix test on master (#860) 2023-05-04 20:11:33 -05:00
Wade Simmons
e0185c4b01 Support NIST curve P256 (#769)
* Support NIST curve P256

This change adds support for NIST curve P256. When you use `nebula-cert ca`
or `nebula-cert keygen`, you can specify `-curve P256` to enable it. The
curve to use is based on the curve defined in your CA certificate.

Internally, we use ECDSA P256 to sign certificates, and ECDH P256 to do
Noise handshakes. P256 is not supported natively in Noise Protocol, so
we define `DHP256` in the `noiseutil` package to implement support for
it.

You cannot have a mixed network of Curve25519 and P256 certificates,
since the Noise protocol will only attempt to parse using the Curve
defined in the host's certificate.

* verify the curves match in VerifyPrivateKey

This would have failed anyways once we tried to actually use the bytes
in the private key, but it's better to detect the issue up front with
a better error message.

* add cert.Curve argument to Sign method

* fix mismerge

* use crypto/ecdh

This is the preferred method for doing ECDH functions now, and also has
a boringcrypto specific codepath.

* remove other ecdh uses of crypto/elliptic

use crypto/ecdh instead
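
For reference, a small self-contained example of an ECDH P256 exchange with the standard crypto/ecdh package (this shows only the primitive mentioned above, not Nebula's noiseutil wrapper):

    package main

    import (
        "crypto/ecdh"
        "crypto/rand"
        "fmt"
    )

    func main() {
        curve := ecdh.P256()

        alice, err := curve.GenerateKey(rand.Reader)
        if err != nil {
            panic(err)
        }
        bob, err := curve.GenerateKey(rand.Reader)
        if err != nil {
            panic(err)
        }

        // Each side combines its private key with the peer's public key.
        s1, err := alice.ECDH(bob.PublicKey())
        if err != nil {
            panic(err)
        }
        s2, err := bob.ECDH(alice.PublicKey())
        if err != nil {
            panic(err)
        }

        fmt.Println(len(s1) == len(s2)) // both sides derive the same shared secret
    }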
2023-05-04 17:50:23 -04:00
Nate Brown
702e1c59bd Always disconnect block listed hosts (#858) 2023-05-04 16:09:42 -05:00
Nate Brown
5fe8f45d05 Clear lighthouse cache for a vpn ip on a dead connection when its the final hostinfo (#857) 2023-05-04 15:42:12 -05:00
Nate Brown
03e4a7f988 Rehandshaking (#838)
Co-authored-by: Brad Higgins <brad@defined.net>
Co-authored-by: Wade Simmons <wadey@slack-corp.com>
2023-05-04 15:16:37 -05:00
Wade Simmons
0b67b19771 add boringcrypto Makefile targets (#856)
This adds a few build targets to compile with `GOEXPERIMENT=boringcrypto`:

- `bin-boringcrypto`
- `release-boringcrypto`

It also adds a field to the initial startup log indicating if
boringcrypto is enabled in the binary.
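
One plausible way to wire up such a startup-log field is a pair of build-tag guarded files; a hypothetical sketch (file names and the constant are illustrative, not necessarily how Nebula does it):

    // boring_enabled.go
    //go:build boringcrypto

    package main

    const boringEnabled = true

    // boring_disabled.go
    //go:build !boringcrypto

    package main

    const boringEnabled = false

    // main.go
    package main

    import "log"

    func main() {
        log.Printf("starting (boringcrypto=%v)", boringEnabled)
    }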
2023-05-04 15:42:45 -04:00
Wade Simmons
a0d3b93ae5 update dependencies: 2023-05 (#855)
Updates that end up in the final binaries (go version -m):

    Updated  github.com/imdario/mergo             https://github.com/imdario/mergo/compare/v0.3.13...v0.3.15
    Updated  github.com/miekg/dns                 https://github.com/miekg/dns/compare/v1.1.52...v1.1.54
    Updated  github.com/prometheus/client_golang  https://github.com/prometheus/client_golang/compare/v1.14.0...v1.15.1
    Updated  github.com/prometheus/client_model   https://github.com/prometheus/client_model/compare/v0.3.0...v0.4.0
    Updated  golang.org/x/crypto                  https://github.com/golang/crypto/compare/v0.7.0...v0.8.0
    Updated  golang.org/x/net                     https://github.com/golang/net/compare/v0.8.0...v0.9.0
    Updated  golang.org/x/sys                     https://github.com/golang/sys/compare/v0.6.0...v0.8.0
    Updated  golang.org/x/term                    https://github.com/golang/term/compare/v0.6.0...v0.8.0
    Updated  google.golang.org/protobuf           v1.29.0...v1.30.0
2023-05-04 15:42:15 -04:00
Wade Simmons
58ec1f7a7b build with go1.20 (#854)
* build with go1.20

This has been out for a bit and is up to go1.20.4. We have been using
go1.20 for the Slack builds and have seen no issues.

* need the quotes

* use go install
2023-05-04 11:35:03 -04:00
Nate Brown
397fe5f879 Add ability to skip installing unsafe routes on the os routing table (#831) 2023-04-10 12:32:37 -05:00
brad-defined
9b03053191 update EncReader and EncWriter interface function args to have concrete types (#844)
* Update LightHouseHandlerFunc to remove EncWriter param.
* Move EncWriter to interface
* EncReader, too
2023-04-07 14:28:37 -04:00
Nate Brown
3cb4e0ef57 Allow listen.host to contain names (#825) 2023-04-05 11:29:26 -05:00
Wade Simmons
e0553822b0 Use NewGCMTLS (when using experiment boringcrypto) (#803)
* Use NewGCMTLS (when using experiment boringcrypto)

This change only affects builds built using `GOEXPERIMENT=boringcrypto`.
When built with this experiment, we use the NewGCMTLS() method exposed by
goboring, which validates that the nonce is strictly monotonically increasing.
This is the TLS 1.2 specification for nonce generation (which also matches the
method used by the Noise Protocol)

- https://github.com/golang/go/blob/go1.19/src/crypto/tls/cipher_suites.go#L520-L522
- https://github.com/golang/go/blob/go1.19/src/crypto/internal/boring/aes.go#L235-L237
- https://github.com/golang/go/blob/go1.19/src/crypto/internal/boring/aes.go#L250
- ae223d6138/include/openssl/aead.h (L379-L381)
- ae223d6138/crypto/fipsmodule/cipher/e_aes.c (L1082-L1093)

* need to lock around EncryptDanger in SendVia

* fix link to test vector
2023-04-05 11:08:23 -04:00
Nate Brown
d3fe3efcb0 Fix handshake retry regression (#842) 2023-04-05 10:04:30 -05:00
Nate Brown
fd99ce9a71 Use fewer test packets (#840) 2023-04-04 13:42:24 -05:00
Wade Simmons
6685856b5d emit certificate.expiration_ttl_seconds metric (#782) 2023-04-03 20:18:16 -05:00
John Maguire
a56a97e5c3 Add ability to encrypt CA private key at rest (#386)
Fixes #8.

`nebula-cert ca` now supports encrypting the CA's private key with a
passphrase. Pass `-encrypt` in order to be prompted for a passphrase.
Encryption is performed using AES-256-GCM and Argon2id for KDF. KDF
parameters default to RFC recommendations, but can be overridden via CLI
flags `-argon-memory`, `-argon-parallelism`, and `-argon-iterations`.
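
To make the mechanism concrete, here is a generic sketch of Argon2id-derived AES-256-GCM encryption using golang.org/x/crypto/argon2 (the parameters and output layout are illustrative, not the exact nebula-cert on-disk format):

    package main

    import (
        "crypto/aes"
        "crypto/cipher"
        "crypto/rand"
        "fmt"

        "golang.org/x/crypto/argon2"
    )

    func encrypt(passphrase, plaintext []byte) (salt, nonce, ciphertext []byte, err error) {
        salt = make([]byte, 16)
        if _, err = rand.Read(salt); err != nil {
            return
        }

        // Argon2id KDF; memory is in KiB. These numbers are illustrative defaults.
        key := argon2.IDKey(passphrase, salt, 1, 64*1024, 4, 32)

        block, err := aes.NewCipher(key)
        if err != nil {
            return
        }
        gcm, err := cipher.NewGCM(block)
        if err != nil {
            return
        }
        nonce = make([]byte, gcm.NonceSize())
        if _, err = rand.Read(nonce); err != nil {
            return
        }
        ciphertext = gcm.Seal(nil, nonce, plaintext, nil)
        return
    }

    func main() {
        salt, nonce, ct, err := encrypt([]byte("correct horse"), []byte("CA private key bytes"))
        if err != nil {
            panic(err)
        }
        fmt.Println(len(salt), len(nonce), len(ct))
    }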
2023-04-03 13:59:38 -04:00
Nate Brown
ee8e1348e9 Use connection manager to drive NAT maintenance (#835)
Co-authored-by: brad-defined <77982333+brad-defined@users.noreply.github.com>
2023-03-31 15:45:05 -05:00
Nate Brown
1a6c657451 Normalize logs (#837) 2023-03-30 15:07:31 -05:00
Nate Brown
6b3d42efa5 Use atomic.Pointer for certState (#833) 2023-03-30 13:04:09 -05:00
brad-defined
2801fb2286 Fix relay (#827)
Co-authored-by: Nate Brown <nbrown.us@gmail.com>
2023-03-30 11:09:20 -05:00
Ryan Huber
e28336c5db probes to the lh are not generally useful as recv_error should catch (#408) 2023-03-29 15:09:36 -05:00
Wade Simmons
3e5c7e6860 add punchy.respond_delay config option (#721) 2023-03-29 14:32:35 -05:00
Wade Simmons
8a82e0fb16 ssh: add save-mutex-profile (#737) 2023-03-29 14:30:28 -05:00
Nate Brown
f0ef80500d Remove dead code and re-order transit from pending to main hostmap on stage 2 (#828) 2023-03-17 15:36:24 -05:00
Wade Simmons
61b784d2bb Update dependencies 2023-03 (#824)
List of dependency updates that appear in the final binaries (others are
only used in tests, or don't actually get used by the modules we import):

    Updated	github.com/cespare/xxhash	https://github.com/cespare/xxhash/compare/v2.1.2...v2.2.0
    Updated	github.com/golang/protobuf	https://github.com/golang/protobuf/compare/v1.5.2...v1.5.3
    Updated	github.com/miekg/dns	https://github.com/miekg/dns/compare/v1.1.50...v1.1.52
    Updated	github.com/prometheus/common	https://github.com/prometheus/common/compare/v0.37.0...v0.42.0
    Updated	github.com/prometheus/procfs	https://github.com/prometheus/procfs/compare/v0.8.0...v0.9.0
    Updated	github.com/vishvananda/netns	https://github.com/vishvananda/netns/compare/v0.0.1...v0.0.4
    Updated	golang.org/x/crypto	https://github.com/golang/crypto/compare/v0.3.0...v0.7.0
    Updated	golang.org/x/net	https://github.com/golang/net/compare/v0.2.0...v0.8.0
    Updated	golang.org/x/sys	https://github.com/golang/sys/compare/v0.2.0...v0.6.0
    Updated	golang.org/x/term	https://github.com/golang/term/compare/v0.2.0...v0.6.0
    Updated	golang.zx2c4.com/wintun	415007cec224...0fa3db229ce2
    Updated	google.golang.org/protobuf	v1.28.1...v1.29.0
2023-03-13 15:37:32 -04:00
Caleb Jasik
5da79e2a4c Run make vet in CI (#693) 2023-03-13 15:35:12 -04:00
Wade Simmons
e1af37e46d add calculated_remotes (#759)
* add calculated_remotes

This setting allows us to "guess" what the remote might be for a host
while we wait for the lighthouse response. For networks that were designed
with this in mind, it can help speed up handshake performance, as well as
improve resiliency in the case that all lighthouses are down.

Example:

    lighthouse:
      # ...

      calculated_remotes:
        # For any Nebula IPs in 10.0.10.0/24, this will apply the mask and add
        # the calculated IP as an initial remote (while we wait for the response
        # from the lighthouse). Both CIDRs must have the same mask size.
        # For example, Nebula IP 10.0.10.123 will have a calculated remote of
        # 192.168.1.123

        10.0.10.0/24:
          - mask: 192.168.1.0/24
            port: 4242

* figure out what is up with this test

* add test

* better logic for sending handshakes

Keep track of the last list of hosts we sent handshakes to. Only log
handshake sent messages if the list has changed.

Remove the test Test_NewHandshakeManagerTrigger because it is faulty and
makes no sense. It relies on the fact that no handshake packets actually
get sent, but with these changes we would send packets now (which it
should!)

* use atomic.Pointer

* cleanup to make it clearer

* fix typo in example
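
The mask arithmetic from the example above can be sketched with net/netip (illustrative only; the real logic lives in Nebula's lighthouse handling):

    package main

    import (
        "encoding/binary"
        "fmt"
        "net/netip"
    )

    // calculated keeps the host bits of vpnIP (relative to its prefix) and
    // grafts them onto the masked network. Both prefixes must be the same size.
    func calculated(vpnIP netip.Addr, vpnNet, maskNet netip.Prefix) netip.Addr {
        hostMask := uint32(0xffffffff) >> vpnNet.Bits()

        v := binary.BigEndian.Uint32(vpnIP.AsSlice())
        m := binary.BigEndian.Uint32(maskNet.Addr().AsSlice())

        var b [4]byte
        binary.BigEndian.PutUint32(b[:], (m&^hostMask)|(v&hostMask))
        return netip.AddrFrom4(b)
    }

    func main() {
        got := calculated(
            netip.MustParseAddr("10.0.10.123"),
            netip.MustParsePrefix("10.0.10.0/24"),
            netip.MustParsePrefix("192.168.1.0/24"),
        )
        fmt.Println(got) // 192.168.1.123
    }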
2023-03-13 15:09:08 -04:00
Wade Simmons
6e0ae4f9a3 firewall: add option to send REJECT replies (#738)
* firewall: add option to send REJECT replies

This change allows you to configure the firewall to send REJECT packets
when a packet is denied.

    firewall:
      # Action to take when a packet is not allowed by the firewall rules.
      # Can be one of:
      #   `drop` (default): silently drop the packet.
      #   `reject`: send a reject reply.
      #     - For TCP, this will be a RST "Connection Reset" packet.
      #     - For other protocols, this will be an ICMP port unreachable packet.
      outbound_action: drop
      inbound_action: drop

These packets are only sent to established tunnels, and only on the
overlay network (currently IPv4 only).

    $ ping -c1 192.168.100.3
    PING 192.168.100.3 (192.168.100.3) 56(84) bytes of data.
    From 192.168.100.3 icmp_seq=2 Destination Port Unreachable

    --- 192.168.100.3 ping statistics ---
    2 packets transmitted, 0 received, +1 errors, 100% packet loss, time 31ms

    $ nc -nzv 192.168.100.3 22
    (UNKNOWN) [192.168.100.3] 22 (?) : Connection refused

This change also modifies the smoke test to capture tcpdump pcaps from
both the inside and outside to inspect what is going on over the wire.
It also now does TCP and UDP packet tests using the Nmap version of
ncat.

* calculate seq and ack the same way as the kernel

The logic is a bit confusing, so we copy it straight from how the kernel
does iptables `--reject-with tcp-reset`:

- https://github.com/torvalds/linux/blob/v5.19/net/ipv4/netfilter/nf_reject_ipv4.c#L193-L221

* cleanup
2023-03-13 15:08:40 -04:00
Caleb Jasik
f0ac61c1f0 Add nebula.plist based on the homebrew nebula LaunchDaemon plist (#762) 2023-03-13 13:16:46 -05:00
Nate Brown
92cc32f844 Remove handshake race avoidance (#820)
Co-authored-by: Wade Simmons <wadey@slack-corp.com>
2023-03-13 12:35:14 -05:00
Nate Brown
2ea360e5e2 Render hostmaps as mermaid graphs in e2e tests (#815) 2023-02-16 13:23:33 -06:00
Caleb Jasik
469ae78748 Add homebrew install method to readme (#630) 2023-02-13 14:42:58 -06:00
Nate Brown
a06977bbd5 Track connections by local index id instead of vpn ip (#807) 2023-02-13 14:41:05 -06:00
John Maguire
5bd8712946 Immediately forward packets from self to self on FreeBSD (#808) 2023-01-23 15:51:54 -06:00
Tricia
0fc4d8192f log network as String to match the other log event in interface.go that emits network (#811)
Co-authored-by: Tricia Bogen <tbogen@slack-corp.com>
2023-01-23 14:05:35 -05:00
Nate Brown
5278b6f926 Generic timerwheel (#804) 2023-01-18 10:56:42 -06:00
Nate Brown
c177126ed0 Fix possible panic in the timerwheels (#802) 2023-01-11 19:35:19 -06:00
John Maguire
c44da3abee Make DNS queries case insensitive (#793) 2022-12-20 16:59:11 -05:00
John Maguire
b7e73da943 Add note indicating modes have usage text (#794) 2022-12-20 16:53:56 -05:00
John Maguire
ff54bfd9f3 Add nebula-cert.exe and cert files to .gitignore (#722) 2022-12-20 16:52:51 -05:00
John Maguire
b5a85a6eb8 Update example config with IPv6 note for allow lists (#742) 2022-12-20 16:50:02 -05:00
Fabio Alessandro Locati
3ae242fa5f Add nss-lookup to the systemd wants (#791)
* Add nss-lookup to the systemd wants to ensure DNS is running before starting nebula

* Add Ansible & example service scripts

* Fix #797

* Align Ansible scripts and examples

Co-authored-by: John Maguire <contact@johnmaguire.me>
2022-12-19 14:42:07 -05:00
Fabio Alessandro Locati
cb2ec861ea Nebula is now in Fedora official repositories (#719) 2022-12-19 14:40:53 -05:00
John Maguire
a3e6edf9c7 Use config.yml consistently (not config.yaml) (#789) 2022-12-19 11:45:15 -06:00
John Maguire
ad7222509d Add a link to mobile nebula in the new issue form (#790) 2022-12-19 11:28:49 -06:00
Caleb Jasik
12dbbd3dd3 Fix typos found by https://github.com/crate-ci/typos (#735) 2022-12-19 11:28:27 -06:00
John Maguire
ec48298fe8 Update config to show aes cipher instead of chacha (#788) 2022-12-07 11:38:56 -06:00
Ian VanSchooten
77769de1e6 Docs: Update doc links (#751)
* Update documentation links

* Update links
2022-11-29 11:32:43 -05:00
Alexander Averyanov
022ae83a4a Fix typo: my -> may (#758) 2022-11-28 13:59:57 -05:00
Wade Simmons
d4f9500ca5 Update dependencies (2022-11) (#780)
* update dependencies

Update to latest dependencies on Nov 21, 2022.

Here are the diffs for deps that actually end up in the binaries (based
on `go version -m`)

    Updated  github.com/imdario/mergo                          https://github.com/imdario/mergo/compare/v0.3.12...v0.3.13
    Updated  github.com/matttproud/golang_protobuf_extensions  https://github.com/matttproud/golang_protobuf_extensions/compare/v1.0.1...v1.0.4
    Updated  github.com/miekg/dns                              https://github.com/miekg/dns/compare/v1.1.48...v1.1.50
    Updated  github.com/prometheus/client_golang               https://github.com/prometheus/client_golang/compare/v1.12.1...v1.14.0
    Updated  github.com/prometheus/client_model                https://github.com/prometheus/client_model/compare/v0.2.0...v0.3.0
    Updated  github.com/prometheus/common                      https://github.com/prometheus/common/compare/v0.33.0...v0.37.0
    Updated  github.com/prometheus/procfs                      https://github.com/prometheus/procfs/compare/v0.7.3...v0.8.0
    Updated  github.com/sirupsen/logrus                        https://github.com/sirupsen/logrus/compare/v1.8.1...v1.9.0
    Updated  github.com/vishvananda/netns                      https://github.com/vishvananda/netns/compare/50045581ed74...v0.0.1
    Updated  golang.org/x/crypto                               https://github.com/golang/crypto/compare/ae2d96664a29...v0.3.0
    Updated  golang.org/x/net                                  https://github.com/golang/net/compare/749bd193bc2b...v0.2.0
    Updated  golang.org/x/sys                                  https://github.com/golang/sys/compare/289d7a0edf71...v0.2.0
    Updated  golang.org/x/term                                 https://github.com/golang/term/compare/03fcf44c2211...v0.2.0
    Updated  google.golang.org/protobuf                        v1.28.0...v1.28.1

* test that mergo merges like we expect
2022-11-23 10:46:41 -05:00
brad-defined
9a8892c526 Fix 756 SSH command line parsing error to write to user instead of stderr (#757) 2022-11-22 20:55:27 -06:00
brad-defined
813b64ffb1 Remove unused variables from connection manager (#677) 2022-11-15 20:33:09 -06:00
John Maguire
85f5849d0b Fix a hang when shutting down Android (#772) 2022-11-11 10:18:43 -06:00
Wade Simmons
9af242dc47 switch to new sync/atomic helpers in go1.19 (#728)
These new helpers make the code a lot cleaner. I confirmed that the
simple helpers like `atomic.Int64` don't add any extra overhead as they
get inlined by the compiler. `atomic.Pointer` adds an extra method call
as it no longer gets inlined, but we aren't using these on the hot path
so it is probably okay.
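
For context, the go1.19 helper style looks like this (a generic example, not Nebula code):

    package main

    import (
        "fmt"
        "sync/atomic"
    )

    type state struct {
        counter atomic.Int64           // replaces a raw int64 plus atomic.AddInt64
        current atomic.Pointer[string] // replaces unsafe.Pointer juggling
    }

    func main() {
        var s state

        s.counter.Add(1)
        v := "hello"
        s.current.Store(&v)

        fmt.Println(s.counter.Load(), *s.current.Load())
    }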
2022-10-31 13:37:41 -04:00
Wade Simmons
a800a48857 v1.6.1 (#752)
Update CHANGELOG for Nebula v1.6.1
2022-09-26 13:38:18 -04:00
Nate Brown
4c0ae3df5e Refuse to process double encrypted packets (#741) 2022-09-19 12:47:48 -05:00
Nate Brown
feb3e1317f Add a simple benchmark to e2e tests (#739) 2022-09-01 09:44:58 -05:00
Jon Rafkind
c2259f14a7 explicitly reload config from ssh command (#725) 2022-08-08 12:44:09 -05:00
Nate Brown
b1eeb5f3b8 Support unsafe_routes on mobile again (#729) 2022-08-05 09:58:10 -05:00
Nate Brown
2adf0ca1d1 Use issue templates to improve bug reports (#726) 2022-07-29 12:57:05 -05:00
Nate Brown
92dfccf01a v1.6.0 (#701)
Update CHANGELOG for Nebula v1.6.0

Co-authored-by: Wade Simmons <wsimmons@slack-corp.com>
Co-authored-by: brad-defined <77982333+brad-defined@users.noreply.github.com>
2022-06-30 16:15:18 -04:00
brad-defined
38e495e0d2 Remove EXPERIMENTAL text from routines example config. (#702) 2022-06-30 11:20:41 -04:00
brad-defined
78a0255c91 typeos (#700) 2022-06-29 11:19:20 -04:00
brad-defined
169cdbbd35 Immediately forward packets received on the nebula TUN device from self to self (#501)
* Immediately forward packets received on the nebula TUN device with a destination of our Nebula VPN IP right back out that same TUN device on MacOS.
2022-06-27 14:36:10 -04:00
Nate Brown
0d1ee4214a Add relay e2e tests and output some mermaid sequence diagrams (#691) 2022-06-27 12:33:29 -05:00
Wade Simmons
7b9287709c add listen.send_recv_error config option (#670)
By default, Nebula replies to packets it has no tunnel for with a `recv_error` packet. This packet helps speed up re-connection
in the case that Nebula on either side did not shut down cleanly. This response can be abused as a way to discover if Nebula is running
on a host though. This option lets you configure if you want to send `recv_error` packets always, never, or only to private network remotes.
Valid values: always, never, private.

This setting is reloadable with SIGHUP.
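
A sketch of how the three modes could gate the reply, using net/netip to classify the remote (illustrative only; the exact set of ranges Nebula treats as private may differ):

    package main

    import (
        "fmt"
        "net/netip"
    )

    func shouldSendRecvError(mode string, remote netip.Addr) bool {
        switch mode {
        case "never":
            return false
        case "private":
            return remote.IsPrivate() // e.g. RFC 1918 ranges
        default: // "always"
            return true
        }
    }

    func main() {
        fmt.Println(shouldSendRecvError("private", netip.MustParseAddr("192.168.0.5"))) // true
        fmt.Println(shouldSendRecvError("private", netip.MustParseAddr("8.8.8.8")))     // false
    }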
2022-06-27 12:37:54 -04:00
Wade Simmons
85ec807b7e reserve NebulaHandshakeDetails fields for multiport (#674)
We are currently testing changes for multiport (related to #497) that
use fields 6 and 7 in the protobuf. We reserved these fields so that when
we eventually open the PR it will be backwards compatible with any future
changes.
2022-06-27 12:07:05 -04:00
John Maguire
a0b280621d Remove firewall.conntrack.max_connections from examples (#684) 2022-06-23 10:29:54 -05:00
Caleb Jasik
527f953c2c Remove x509 config loading code (#685) 2022-06-23 10:27:34 -05:00
brad-defined
1a7c575011 Relay (#678)
Co-authored-by: Wade Simmons <wsimmons@slack-corp.com>
2022-06-21 13:35:23 -05:00
Don Stephan
332fa2b825 fix panic in handleInvalidCertificate (#675)
* fix panic in handleInvalidCertificate

When HandleMonitorTick fires, the hostmap can be nil, which causes a panic when trying to clean up the hostmap in handleInvalidCertificate. This fix just stops the invalidation from continuing if the hostmap doesn't exist.

* removed conditional for disconnectInvalid in HandleDeletionTick
2022-05-16 13:29:57 -04:00
Wade Simmons
45d1d2b6c6 Update dependencies - 2022-04 (#664)
    Updated  github.com/kardianos/service         https://github.com/kardianos/service/compare/v1.2.0...v1.2.1
    Updated  github.com/miekg/dns                 https://github.com/miekg/dns/compare/v1.1.43...v1.1.48
    Updated  github.com/prometheus/client_golang  https://github.com/prometheus/client_golang/compare/v1.11.0...v1.12.1
    Updated  github.com/prometheus/common         https://github.com/prometheus/common/compare/v0.32.1...v0.33.0
    Updated  github.com/stretchr/testify          https://github.com/stretchr/testify/compare/v1.7.0...v1.7.1
    Updated  golang.org/x/crypto                  5770296d90...ae2d96664a
    Updated  golang.org/x/net                     69e39bad7d...749bd193bc
    Updated  golang.org/x/sys                     7861aae155...289d7a0edf
    Updated  golang.zx2c4.com/wireguard/windows   v0.5.1...v0.5.3
    Updated  google.golang.org/protobuf           v1.27.1...v1.28.0
2022-04-18 12:12:25 -04:00
Wade Simmons
3913062c43 build and test with go1.18 (#656)
- https://go.dev/doc/go1.18
2022-04-05 17:08:00 -04:00
Wade Simmons
b38bd36766 fix connection manager check when disconnect_invalid set (#658)
This restores the hostMap.QueryVpnIP block to how it looked before #370
was merged. I'm not sure why the patch from #370 wanted to continue on
if there was no match found in the hostmap, since there isn't anything
to do at that point (the tunnel has already been closed).

This was causing a crash because the handleInvalidCertificate check
expects the hostinfo to be passed in (but it is nil since there was no
hostinfo in the hostmap).

Fixes: #657
2022-04-04 13:38:36 -04:00
Nate Brown
d85e24f49f Allow for self reported ips to the lighthouse (#650) 2022-04-04 12:35:23 -05:00
bitshop
7672c7087a Add to build all windows-arm64 / bin-windows-arm64 build option (#638)
* Add to build all windows-arm64 / bin-winarm64 builds

* update release to build for windows-arm64

* cleanup

Co-authored-by: Wade Simmons <wsimmons@slack-corp.com>
2022-03-18 13:23:10 -04:00
Caleb Jasik
730a5c4a23 Update link to nebula docs (#655) 2022-03-18 11:15:16 -04:00
brad-defined
03498a0cb2 Make nebula advertise its dynamic port to lighthouses (#653) 2022-03-15 18:03:56 -05:00
Nate Brown
312a01dc09 Lighthouse reload support (#649)
Co-authored-by: John Maguire <contact@johnmaguire.me>
2022-03-14 12:35:13 -05:00
Nate Brown
bbe0a032bb Fix windows unsafe_routes regression (#648) 2022-03-09 13:23:29 -06:00
Wade Simmons
b5b9d33ee7 v1.5.2 (#612)
Update CHANGELOG for Nebula v1.5.2
2021-12-14 16:48:56 -05:00
Wade Simmons
e434ba6523 fix unsafe routes darwin (#610)
With Nebula 1.4.0, if you create an unsafe_route that has a collision with an existing route on the system, the unsafe_route will be silently dropped (and the existing system route remains).

With Nebula 1.5.0, this same situation will cause Nebula to fail to start with an error (EEXIST).

This change restores the Nebula 1.4.0 behavior (but with a WARN log as well).
2021-12-14 11:52:49 -05:00
Wade Simmons
068a93d1f4 fix makeRouteTree allowMTU (#611)
With the previous implementation, we check if route.MTU is greater than zero,
but it will always be because we set it to the default MTU in
parseUnsafeRoutes. This change leaves it as zero in parseUnsafeRoutes so
it can be examined later.
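
The shape of the fix, sketched generically (not the actual overlay code): leave the MTU at zero while parsing and only treat a non-zero value as an explicit override when the route tree is built.

    package main

    import "fmt"

    type route struct {
        MTU int // 0 means "not set"; only filled in if the config specifies it
    }

    const defaultMTU = 1300

    // effectiveMTU resolves the MTU at use time instead of at parse time,
    // so an unset value can still be distinguished from an explicit one.
    func effectiveMTU(r route) int {
        if r.MTU > 0 {
            return r.MTU
        }
        return defaultMTU
    }

    func main() {
        fmt.Println(effectiveMTU(route{}))          // 1300 (default)
        fmt.Println(effectiveMTU(route{MTU: 9000})) // 9000 (explicit)
    }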
2021-12-14 11:52:28 -05:00
Nate Brown
15fdabc3ab v1.5.1 (#606)
Update CHANGELOG for Nebula v1.5.1
2021-12-13 20:43:25 -05:00
forfuncsake
1110756f0f Allow setup of a CA pool from bytes that contain expired certs (#599)
Co-authored-by: Nate Brown <nbrown.us@gmail.com>
2021-12-09 21:24:56 -06:00
Nate Brown
e31006d546 Be more clear about ipv4 in nebula-cert (#604) 2021-12-07 21:40:30 -06:00
Wade Simmons
949ec78653 don't set ConnectionState to nil (#590)
* don't set ConnectionState to nil

We might have packets processing in another thread, so we can't safely
just set this to nil. Since we removed it from the hostmaps, the next
packets to process should start the handshake over again.

I believe this comment is outdated or incorrect. Since the next
handshake will start over with a new HostInfo, I don't think there is
any way a counter reuse could happen:

> We must null the connectionstate or a counter reuse may happen

Here is a panic we saw that I think is related:

    panic: runtime error: invalid memory address or nil pointer dereference
    [signal SIGSEGV: segmentation violation code=0x1 addr=0x20 pc=0x93a037]
    goroutine 59 [running, locked to thread]:
    github.com/slackhq/nebula.(*Firewall).Drop(...)
            github.com/slackhq/nebula/firewall.go:380
    github.com/slackhq/nebula.(*Interface).consumeInsidePacket(...)
            github.com/slackhq/nebula/inside.go:59
    github.com/slackhq/nebula.(*Interface).listenIn(...)
            github.com/slackhq/nebula/interface.go:233
    created by github.com/slackhq/nebula.(*Interface).run
            github.com/slackhq/nebula/interface.go:191

* use closeTunnel
2021-12-06 14:09:05 -05:00
Wade Simmons
127a116bfd update golang.org/x/crypto (#603)
> Version v0.0.0-20211202192323-5770296d904e of golang.org/x/crypto fixes a vulnerability in the golang.org/x/crypto/ssh package which allowed unauthenticated clients to cause a panic in SSH servers.
>
> This issue was discovered and reported by Rod Hynes, Psiphon Inc., and is tracked as CVE-2021-43565 and Issue golang/go#49932.

    Updated  golang.org/x/crypto  089bfa5675...5770296d90
    Updated  golang.org/x/net     4a448f8816...69e39bad7d
2021-12-06 14:07:05 -05:00
Wade Simmons
befce3f990 fix crash with -test (#602)
When running in `-test` mode, `tun` is set to nil. So we should move the
defer into the `!configTest` if block.

    panic: runtime error: invalid memory address or nil pointer dereference
    [signal SIGSEGV: segmentation violation code=0x1 addr=0x28 pc=0x54855c]

    goroutine 1 [running]:
    github.com/slackhq/nebula.Main.func3(0x4000135e80, {0x0, 0x0})
            github.com/slackhq/nebula/main.go:176 +0x2c
    github.com/slackhq/nebula.Main(0x400022e060, 0x1, {0x76faa0, 0x5}, 0x4000230000, 0x0)
            github.com/slackhq/nebula/main.go:316 +0x2414
    main.main()
            github.com/slackhq/nebula/cmd/nebula/main.go:54 +0x540
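
The fix boils down to scoping the defer to the branch that actually created the device; a generic sketch of the pattern:

    package main

    import "fmt"

    type device struct{ name string }

    func (d *device) Close() { fmt.Println("closed", d.name) }

    func run(configTest bool) {
        if !configTest {
            dev := &device{name: "tun0"}
            // The defer lives inside the branch, so Close is never called
            // on a device that was never created (nil in -test mode).
            defer dev.Close()
            fmt.Println("running with", dev.name)
        }
        fmt.Println("done")
    }

    func main() {
        run(true)  // -test mode: no device, no Close
        run(false) // normal mode: device created and closed
    }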
2021-12-06 14:06:16 -05:00
Wade Simmons
f60ed2b36d overlay: fix tun.RouteFor getting *net.IP (#595)
tun.RouteFor expects the routeTree to have an iputil.VpnIp inside of it
instead of a *net.IP.
2021-12-06 09:35:31 -05:00
Nate Brown
48c47f5841 Warn if no lighthouses were configured on a non lighthouse node (#587) 2021-11-30 10:31:33 -06:00
Wade Simmons
75306487c5 fix wintun package to have // +build comments (#598)
Without these comments, gofmt 1.16.9 will complain. Since we otherwise
still support building with go1.16, let's add the comments to make it
easier to compile and gofmt.

Related: #588
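
For go1.16 compatibility a constrained file carries both the old and new syntax, along the lines of this generic example:

    //go:build windows
    // +build windows

    // Package wintun builds only on Windows; keeping both constraint forms
    // keeps go1.16's gofmt and newer toolchains equally happy.
    package wintun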
2021-11-30 11:14:15 -05:00
Nate Brown
78d0d46bae Remove WriteRaw, cidrTree -> routeTree to better describe its purpose, remove redundancy from field names (#582) 2021-11-12 12:47:09 -06:00
Nate Brown
467e605d5e Push route handling into overlay, a few more nits fixed (#581) 2021-11-12 11:19:28 -06:00
Nate Brown
2f1f0d602f Cleanup most of the remaining nits (#578) 2021-11-12 10:47:36 -06:00
Nate Brown
e07524a654 Move all of tun into overlay (#577) 2021-11-11 16:37:29 -06:00
Nate Brown
88ce0edf76 Start the overlay package with the old Inside interface (#576) 2021-11-10 21:52:26 -06:00
Nate Brown
4453964e34 Move util to test, contextual errors to util (#575) 2021-11-10 21:47:38 -06:00
Wade Simmons
19a9a4221e v1.5.0 (#574)
Update CHANGELOG for Nebula v1.5.0
2021-11-10 22:32:26 -05:00
Chad Harp
1915fab619 tun_darwin (#163)
- Remove water and replace with syscalls for tun setup
- Support named interfaces
- Set up routes with syscalls instead of os/exec

Co-authored-by: Wade Simmons <wade@wades.im>
2021-11-09 20:24:24 -05:00
Nate Brown
7801b589b6 Sign and notarize darwin universal binaries (#571) 2021-11-09 10:49:54 -06:00
Nate Brown
b6391292d1 Move wintun distributable into release zip for windows (#572) 2021-11-08 21:55:10 -06:00
Terry Wang
999efdb2e8 docs: improve grammar and readability for README.md (#225) 2021-11-08 17:32:31 -06:00
Wade Simmons
304b12f63f create ConnectionState before adding to HostMap (#535)
We have a few small race conditions with creating the HostInfo.ConnectionState
since we add the host info to the pendingHostMap before we set this
field. We can make everything a lot easier if we just add an "init"
function so that we can set this field in the hostinfo before we add it
to the hostmap.
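
The ordering can be sketched with hypothetical types: construct the host info fully, including its connection state, before it becomes visible in the shared map, so no reader can ever observe a nil ConnectionState.

    package main

    import (
        "fmt"
        "sync"
    )

    type connectionState struct{ ready bool }

    type hostInfo struct {
        vpnIP string
        cs    *connectionState
    }

    // newHostInfo sets every field before the value is shared, so readers of
    // the map can never see a hostInfo with a nil connection state.
    func newHostInfo(vpnIP string) *hostInfo {
        return &hostInfo{vpnIP: vpnIP, cs: &connectionState{ready: true}}
    }

    type hostMap struct {
        mu    sync.RWMutex
        hosts map[string]*hostInfo
    }

    func (m *hostMap) add(h *hostInfo) {
        m.mu.Lock()
        defer m.mu.Unlock()
        m.hosts[h.vpnIP] = h
    }

    func main() {
        m := &hostMap{hosts: map[string]*hostInfo{}}
        m.add(newHostInfo("192.168.100.2")) // fully initialized before publication
        fmt.Println(m.hosts["192.168.100.2"].cs.ready)
    }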
2021-11-08 14:46:22 -05:00
CzBiX
16be0ce566 Add Wintun support (#289) 2021-11-08 12:36:31 -06:00
John Maguire
0577c097fb Fix flaky test (#567) 2021-11-04 14:49:56 -05:00
Jake Howard
eb66e13dc4 Use CGO_ENABLED=0 (#421)
Set `CGO_ENABLED` to 0 when building
2021-11-04 14:20:44 -04:00
218 changed files with 18468 additions and 7851 deletions

69
.github/ISSUE_TEMPLATE/bug-report.yml vendored Normal file
View File

@@ -0,0 +1,69 @@
name: "\U0001F41B Bug Report"
description: Report an issue or possible bug
title: "\U0001F41B BUG:"
labels: []
assignees: []
body:
- type: markdown
attributes:
value: |
### Thank you for taking the time to file a bug report!
Please fill out this form as completely as possible.
- type: input
id: version
attributes:
label: What version of `nebula` are you using? (`nebula -version`)
placeholder: 0.0.0
validations:
required: true
- type: input
id: os
attributes:
label: What operating system are you using?
description: iOS and Android specific issues belong in the [mobile_nebula](https://github.com/DefinedNet/mobile_nebula) repo.
placeholder: Linux, Mac, Windows
validations:
required: true
- type: textarea
id: description
attributes:
label: Describe the Bug
description: A clear and concise description of what the bug is.
validations:
required: true
- type: textarea
id: logs
attributes:
label: Logs from affected hosts
description: |
Please provide logs from ALL affected hosts during the time of the issue. If you do not provide logs we will be unable to assist you!
[Learn how to find Nebula logs here.](https://nebula.defined.net/docs/guides/viewing-nebula-logs/)
Improve formatting by using <code>```</code> at the beginning and end of each log block.
value: |
```
```
validations:
required: true
- type: textarea
id: configs
attributes:
label: Config files from affected hosts
description: |
Provide config files for all affected hosts.
Improve formatting by using <code>```</code> at the beginning and end of each config file.
value: |
```
```
validations:
required: true

13
.github/ISSUE_TEMPLATE/config.yml vendored Normal file
View File

@@ -0,0 +1,13 @@
blank_issues_enabled: true
contact_links:
  - name: 📘 Documentation
    url: https://nebula.defined.net/docs/
    about: Review documentation.
  - name: 💁 Support/Chat
    url: https://join.slack.com/t/nebulaoss/shared_invite/enQtOTA5MDI4NDg3MTg4LTkwY2EwNTI4NzQyMzc0M2ZlODBjNWI3NTY1MzhiOThiMmZlZjVkMTI0NGY4YTMyNjUwMWEyNzNkZTJmYzQxOGU
    about: 'This issue tracker is not for support questions. Join us on Slack for assistance!'
  - name: 📱 Mobile Nebula
    url: https://github.com/definednet/mobile_nebula
    about: 'This issue tracker is not for mobile support. Try the Mobile Nebula repo instead!'

22
.github/dependabot.yml vendored Normal file
View File

@@ -0,0 +1,22 @@
version: 2
updates:
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "weekly"
  - package-ecosystem: "gomod"
    directory: "/"
    schedule:
      interval: "weekly"
    groups:
      golang-x-dependencies:
        patterns:
          - "golang.org/x/*"
      zx2c4-dependencies:
        patterns:
          - "golang.zx2c4.com/*"
      protobuf-dependencies:
        patterns:
          - "github.com/golang/protobuf"
          - "google.golang.org/protobuf"

View File

@@ -14,31 +14,21 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Set up Go 1.17
uses: actions/setup-go@v2
with:
go-version: 1.17
id: go
- uses: actions/checkout@v4
- name: Check out code into the Go module directory
uses: actions/checkout@v2
- uses: actions/cache@v2
- uses: actions/setup-go@v5
with:
path: ~/go/pkg/mod
key: ${{ runner.os }}-gofmt1.17-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-gofmt1.17-
go-version: '1.22'
check-latest: true
- name: Install goimports
run: |
go get golang.org/x/tools/cmd/goimports
go build golang.org/x/tools/cmd/goimports
go install golang.org/x/tools/cmd/goimports@latest
- name: gofmt
run: |
if [ "$(find . -iname '*.go' | grep -v '\.pb\.go$' | xargs ./goimports -l)" ]
if [ "$(find . -iname '*.go' | grep -v '\.pb\.go$' | xargs goimports -l)" ]
then
find . -iname '*.go' | grep -v '\.pb\.go$' | xargs ./goimports -d
find . -iname '*.go' | grep -v '\.pb\.go$' | xargs goimports -d
exit 1
fi

View File

@@ -7,315 +7,212 @@ name: Create release and upload binaries
jobs:
build-linux:
name: Build Linux All
name: Build Linux/BSD All
runs-on: ubuntu-latest
steps:
- name: Set up Go 1.17
uses: actions/setup-go@v2
with:
go-version: 1.17
- uses: actions/checkout@v4
- name: Checkout code
uses: actions/checkout@v2
- uses: actions/setup-go@v5
with:
go-version: '1.22'
check-latest: true
- name: Build
run: |
make BUILD_NUMBER="${GITHUB_REF#refs/tags/v}" release-linux release-freebsd
make BUILD_NUMBER="${GITHUB_REF#refs/tags/v}" release-linux release-freebsd release-openbsd release-netbsd
mkdir release
mv build/*.tar.gz release
- name: Upload artifacts
uses: actions/upload-artifact@v1
uses: actions/upload-artifact@v4
with:
name: linux-latest
path: release
build-windows:
name: Build Windows amd64
name: Build Windows
runs-on: windows-latest
steps:
- name: Set up Go 1.17
uses: actions/setup-go@v2
with:
go-version: 1.17
- uses: actions/checkout@v4
- name: Checkout code
uses: actions/checkout@v2
- uses: actions/setup-go@v5
with:
go-version: '1.22'
check-latest: true
- name: Build
run: |
echo $Env:GITHUB_REF.Substring(11)
go build -trimpath -ldflags "-X main.Build=$($Env:GITHUB_REF.Substring(11))" -o build\nebula.exe ./cmd/nebula-service
go build -trimpath -ldflags "-X main.Build=$($Env:GITHUB_REF.Substring(11))" -o build\nebula-cert.exe ./cmd/nebula-cert
mkdir build\windows-amd64
$Env:GOARCH = "amd64"
go build -trimpath -ldflags "-X main.Build=$($Env:GITHUB_REF.Substring(11))" -o build\windows-amd64\nebula.exe ./cmd/nebula-service
go build -trimpath -ldflags "-X main.Build=$($Env:GITHUB_REF.Substring(11))" -o build\windows-amd64\nebula-cert.exe ./cmd/nebula-cert
mkdir build\windows-arm64
$Env:GOARCH = "arm64"
go build -trimpath -ldflags "-X main.Build=$($Env:GITHUB_REF.Substring(11))" -o build\windows-arm64\nebula.exe ./cmd/nebula-service
go build -trimpath -ldflags "-X main.Build=$($Env:GITHUB_REF.Substring(11))" -o build\windows-arm64\nebula-cert.exe ./cmd/nebula-cert
mkdir build\dist\windows
mv dist\windows\wintun build\dist\windows\
- name: Upload artifacts
uses: actions/upload-artifact@v1
uses: actions/upload-artifact@v4
with:
name: windows-latest
path: build
build-darwin:
name: Build Darwin amd64
runs-on: macOS-latest
name: Build Universal Darwin
env:
HAS_SIGNING_CREDS: ${{ secrets.AC_USERNAME != '' }}
runs-on: macos-latest
steps:
- name: Set up Go 1.17
uses: actions/setup-go@v2
- uses: actions/checkout@v4
- uses: actions/setup-go@v5
with:
go-version: 1.17
go-version: '1.22'
check-latest: true
- name: Checkout code
uses: actions/checkout@v2
- name: Import certificates
if: env.HAS_SIGNING_CREDS == 'true'
uses: Apple-Actions/import-codesign-certs@v3
with:
p12-file-base64: ${{ secrets.APPLE_DEVELOPER_CERTIFICATE_P12_BASE64 }}
p12-password: ${{ secrets.APPLE_DEVELOPER_CERTIFICATE_PASSWORD }}
- name: Build
- name: Build, sign, and notarize
env:
AC_USERNAME: ${{ secrets.AC_USERNAME }}
AC_PASSWORD: ${{ secrets.AC_PASSWORD }}
run: |
make BUILD_NUMBER="${GITHUB_REF#refs/tags/v}" service build/nebula-darwin-amd64.tar.gz
make BUILD_NUMBER="${GITHUB_REF#refs/tags/v}" service build/nebula-darwin-arm64.tar.gz
rm -rf release
mkdir release
mv build/*.tar.gz release
make BUILD_NUMBER="${GITHUB_REF#refs/tags/v}" service build/darwin-amd64/nebula build/darwin-amd64/nebula-cert
make BUILD_NUMBER="${GITHUB_REF#refs/tags/v}" service build/darwin-arm64/nebula build/darwin-arm64/nebula-cert
lipo -create -output ./release/nebula ./build/darwin-amd64/nebula ./build/darwin-arm64/nebula
lipo -create -output ./release/nebula-cert ./build/darwin-amd64/nebula-cert ./build/darwin-arm64/nebula-cert
if [ -n "$AC_USERNAME" ]; then
codesign -s "10BC1FDDEB6CE753550156C0669109FAC49E4D1E" -f -v --timestamp --options=runtime -i "net.defined.nebula" ./release/nebula
codesign -s "10BC1FDDEB6CE753550156C0669109FAC49E4D1E" -f -v --timestamp --options=runtime -i "net.defined.nebula-cert" ./release/nebula-cert
fi
zip -j release/nebula-darwin.zip release/nebula-cert release/nebula
if [ -n "$AC_USERNAME" ]; then
xcrun notarytool submit ./release/nebula-darwin.zip --team-id "576H3XS7FP" --apple-id "$AC_USERNAME" --password "$AC_PASSWORD" --wait
fi
- name: Upload artifacts
uses: actions/upload-artifact@v1
uses: actions/upload-artifact@v4
with:
name: darwin-latest
path: release
path: ./release/*
build-docker:
name: Create and Upload Docker Images
# Technically we only need build-linux to succeed, but if any platforms fail we'll
# want to investigate and restart the build
needs: [build-linux, build-darwin, build-windows]
runs-on: ubuntu-latest
env:
HAS_DOCKER_CREDS: ${{ vars.DOCKERHUB_USERNAME != '' && secrets.DOCKERHUB_TOKEN != '' }}
# XXX It's not possible to write a conditional here, so instead we do it on every step
#if: ${{ env.HAS_DOCKER_CREDS == 'true' }}
steps:
# Be sure to checkout the code before downloading artifacts, or they will
# be overwritten
- name: Checkout code
if: ${{ env.HAS_DOCKER_CREDS == 'true' }}
uses: actions/checkout@v4
- name: Download artifacts
if: ${{ env.HAS_DOCKER_CREDS == 'true' }}
uses: actions/download-artifact@v4
with:
name: linux-latest
path: artifacts
- name: Login to Docker Hub
if: ${{ env.HAS_DOCKER_CREDS == 'true' }}
uses: docker/login-action@v3
with:
username: ${{ vars.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Set up Docker Buildx
if: ${{ env.HAS_DOCKER_CREDS == 'true' }}
uses: docker/setup-buildx-action@v3
- name: Build and push images
if: ${{ env.HAS_DOCKER_CREDS == 'true' }}
env:
DOCKER_IMAGE_REPO: ${{ vars.DOCKER_IMAGE_REPO || 'nebulaoss/nebula' }}
DOCKER_IMAGE_TAG: ${{ vars.DOCKER_IMAGE_TAG || 'latest' }}
run: |
mkdir -p build/linux-{amd64,arm64}
tar -zxvf artifacts/nebula-linux-amd64.tar.gz -C build/linux-amd64/
tar -zxvf artifacts/nebula-linux-arm64.tar.gz -C build/linux-arm64/
docker buildx build . --push -f docker/Dockerfile --platform linux/amd64,linux/arm64 --tag "${DOCKER_IMAGE_REPO}:${DOCKER_IMAGE_TAG}" --tag "${DOCKER_IMAGE_REPO}:${GITHUB_REF#refs/tags/v}"
release:
name: Create and Upload Release
needs: [build-linux, build-darwin, build-windows]
runs-on: ubuntu-latest
steps:
- name: Download Linux artifacts
uses: actions/download-artifact@v1
with:
name: linux-latest
- uses: actions/checkout@v4
- name: Download Darwin artifacts
uses: actions/download-artifact@v1
- name: Download artifacts
uses: actions/download-artifact@v4
with:
name: darwin-latest
- name: Download Windows artifacts
uses: actions/download-artifact@v1
with:
name: windows-latest
path: artifacts
- name: Zip Windows
run: |
cd windows-latest
zip nebula-windows-amd64.zip nebula.exe nebula-cert.exe
cd artifacts/windows-latest
cp windows-amd64/* .
zip -r nebula-windows-amd64.zip nebula.exe nebula-cert.exe dist
cp windows-arm64/* .
zip -r nebula-windows-arm64.zip nebula.exe nebula-cert.exe dist
- name: Create sha256sum
run: |
cd artifacts
for dir in linux-latest darwin-latest windows-latest
do
(
cd $dir
if [ "$dir" = windows-latest ]
then
sha256sum <nebula.exe | sed 's=-$=nebula-windows-amd64.zip/nebula.exe='
sha256sum <nebula-cert.exe | sed 's=-$=nebula-windows-amd64.zip/nebula-cert.exe='
sha256sum <windows-amd64/nebula.exe | sed 's=-$=nebula-windows-amd64.zip/nebula.exe='
sha256sum <windows-amd64/nebula-cert.exe | sed 's=-$=nebula-windows-amd64.zip/nebula-cert.exe='
sha256sum <windows-arm64/nebula.exe | sed 's=-$=nebula-windows-arm64.zip/nebula.exe='
sha256sum <windows-arm64/nebula-cert.exe | sed 's=-$=nebula-windows-arm64.zip/nebula-cert.exe='
sha256sum nebula-windows-amd64.zip
sha256sum nebula-windows-arm64.zip
elif [ "$dir" = darwin-latest ]
then
sha256sum <nebula-darwin.zip | sed 's=-$=nebula-darwin.zip='
sha256sum <nebula | sed 's=-$=nebula-darwin.zip/nebula='
sha256sum <nebula-cert | sed 's=-$=nebula-darwin.zip/nebula-cert='
else
for v in *.tar.gz
do
sha256sum $v
tar zxf $v --to-command='sh -c "sha256sum | sed s=-$='$v'/$TAR_FILENAME="'
done
for v in *.tar.gz
do
sha256sum $v
tar zxf $v --to-command='sh -c "sha256sum | sed s=-$='$v'/$TAR_FILENAME="'
done
fi
)
done | sort -k 2 >SHASUM256.txt
- name: Create Release
id: create_release
uses: actions/create-release@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
tag_name: ${{ github.ref }}
release_name: Release ${{ github.ref }}
draft: false
prerelease: false
##
## Upload assets (I wish we could just upload the whole folder at once...
##
- name: Upload SHASUM256.txt
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./SHASUM256.txt
asset_name: SHASUM256.txt
asset_content_type: text/plain
- name: Upload darwin-amd64
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./darwin-latest/nebula-darwin-amd64.tar.gz
asset_name: nebula-darwin-amd64.tar.gz
asset_content_type: application/gzip
- name: Upload darwin-arm64
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./darwin-latest/nebula-darwin-arm64.tar.gz
asset_name: nebula-darwin-arm64.tar.gz
asset_content_type: application/gzip
- name: Upload windows-amd64
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./windows-latest/nebula-windows-amd64.zip
asset_name: nebula-windows-amd64.zip
asset_content_type: application/zip
- name: Upload linux-amd64
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./linux-latest/nebula-linux-amd64.tar.gz
asset_name: nebula-linux-amd64.tar.gz
asset_content_type: application/gzip
- name: Upload linux-386
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./linux-latest/nebula-linux-386.tar.gz
asset_name: nebula-linux-386.tar.gz
asset_content_type: application/gzip
- name: Upload linux-ppc64le
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./linux-latest/nebula-linux-ppc64le.tar.gz
asset_name: nebula-linux-ppc64le.tar.gz
asset_content_type: application/gzip
- name: Upload linux-arm-5
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./linux-latest/nebula-linux-arm-5.tar.gz
asset_name: nebula-linux-arm-5.tar.gz
asset_content_type: application/gzip
- name: Upload linux-arm-6
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./linux-latest/nebula-linux-arm-6.tar.gz
asset_name: nebula-linux-arm-6.tar.gz
asset_content_type: application/gzip
- name: Upload linux-arm-7
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./linux-latest/nebula-linux-arm-7.tar.gz
asset_name: nebula-linux-arm-7.tar.gz
asset_content_type: application/gzip
- name: Upload linux-arm64
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./linux-latest/nebula-linux-arm64.tar.gz
asset_name: nebula-linux-arm64.tar.gz
asset_content_type: application/gzip
- name: Upload linux-mips
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./linux-latest/nebula-linux-mips.tar.gz
asset_name: nebula-linux-mips.tar.gz
asset_content_type: application/gzip
- name: Upload linux-mipsle
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./linux-latest/nebula-linux-mipsle.tar.gz
asset_name: nebula-linux-mipsle.tar.gz
asset_content_type: application/gzip
- name: Upload linux-mips64
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./linux-latest/nebula-linux-mips64.tar.gz
asset_name: nebula-linux-mips64.tar.gz
asset_content_type: application/gzip
- name: Upload linux-mips64le
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./linux-latest/nebula-linux-mips64le.tar.gz
asset_name: nebula-linux-mips64le.tar.gz
asset_content_type: application/gzip
- name: Upload linux-mips-softfloat
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./linux-latest/nebula-linux-mips-softfloat.tar.gz
asset_name: nebula-linux-mips-softfloat.tar.gz
asset_content_type: application/gzip
- name: Upload linux-riscv64
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./linux-latest/nebula-linux-riscv64.tar.gz
asset_name: nebula-linux-riscv64.tar.gz
asset_content_type: application/gzip
- name: Upload freebsd-amd64
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./linux-latest/nebula-freebsd-amd64.tar.gz
asset_name: nebula-freebsd-amd64.tar.gz
asset_content_type: application/gzip
run: |
cd artifacts
gh release create \
--verify-tag \
--title "Release ${{ github.ref_name }}" \
"${{ github.ref_name }}" \
SHASUM256.txt *-latest/*.zip *-latest/*.tar.gz

48
.github/workflows/smoke-extra.yml vendored Normal file
View File

@@ -0,0 +1,48 @@
name: smoke-extra
on:
  push:
    branches:
      - master
  pull_request:
    types: [opened, synchronize, labeled, reopened]
    paths:
      - '.github/workflows/smoke**'
      - '**Makefile'
      - '**.go'
      - '**.proto'
      - 'go.mod'
      - 'go.sum'
jobs:
  smoke-extra:
    if: github.ref == 'refs/heads/master' || contains(github.event.pull_request.labels.*.name, 'smoke-test-extra')
    name: Run extra smoke tests
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version-file: 'go.mod'
          check-latest: true
      - name: install vagrant
        run: sudo apt-get update && sudo apt-get install -y vagrant virtualbox
      - name: freebsd-amd64
        run: make smoke-vagrant/freebsd-amd64
      - name: openbsd-amd64
        run: make smoke-vagrant/openbsd-amd64
      - name: netbsd-amd64
        run: make smoke-vagrant/netbsd-amd64
      - name: linux-386
        run: make smoke-vagrant/linux-386
      - name: linux-amd64-ipv6disable
        run: make smoke-vagrant/linux-amd64-ipv6disable
    timeout-minutes: 30

View File

@@ -18,24 +18,15 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Set up Go 1.17
uses: actions/setup-go@v2
with:
go-version: 1.17
id: go
- uses: actions/checkout@v4
- name: Check out code into the Go module directory
uses: actions/checkout@v2
- uses: actions/cache@v2
- uses: actions/setup-go@v5
with:
path: ~/go/pkg/mod
key: ${{ runner.os }}-go1.17-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go1.17-
go-version: '1.22'
check-latest: true
- name: build
run: make bin-docker
run: make bin-docker CGO_ENABLED=1 BUILD_ARGS=-race
- name: setup docker image
working-directory: ./.github/workflows/smoke
@@ -45,4 +36,20 @@ jobs:
working-directory: ./.github/workflows/smoke
run: ./smoke.sh
- name: setup relay docker image
working-directory: ./.github/workflows/smoke
run: ./build-relay.sh
- name: run smoke relay
working-directory: ./.github/workflows/smoke
run: ./smoke-relay.sh
- name: setup docker image for P256
working-directory: ./.github/workflows/smoke
run: NAME="smoke-p256" CURVE=P256 ./build.sh
- name: run smoke-p256
working-directory: ./.github/workflows/smoke
run: NAME="smoke-p256" ./smoke.sh
timeout-minutes: 10

View File

@@ -1,4 +1,6 @@
FROM debian:buster
FROM ubuntu:jammy
RUN apt-get update && apt-get install -y iputils-ping ncat tcpdump
ADD ./build /nebula

44
.github/workflows/smoke/build-relay.sh vendored Executable file
View File

@@ -0,0 +1,44 @@
#!/bin/sh
set -e -x
rm -rf ./build
mkdir ./build
(
cd build
cp ../../../../build/linux-amd64/nebula .
cp ../../../../build/linux-amd64/nebula-cert .
HOST="lighthouse1" AM_LIGHTHOUSE=true ../genconfig.sh >lighthouse1.yml <<EOF
relay:
am_relay: true
EOF
export LIGHTHOUSES="192.168.100.1 172.17.0.2:4242"
export REMOTE_ALLOW_LIST='{"172.17.0.4/32": false, "172.17.0.5/32": false}'
HOST="host2" ../genconfig.sh >host2.yml <<EOF
relay:
relays:
- 192.168.100.1
EOF
export REMOTE_ALLOW_LIST='{"172.17.0.3/32": false}'
HOST="host3" ../genconfig.sh >host3.yml
HOST="host4" ../genconfig.sh >host4.yml <<EOF
relay:
use_relays: false
EOF
../../../../nebula-cert ca -name "Smoke Test"
../../../../nebula-cert sign -name "lighthouse1" -groups "lighthouse,lighthouse1" -ip "192.168.100.1/24"
../../../../nebula-cert sign -name "host2" -groups "host,host2" -ip "192.168.100.2/24"
../../../../nebula-cert sign -name "host3" -groups "host,host3" -ip "192.168.100.3/24"
../../../../nebula-cert sign -name "host4" -groups "host,host4" -ip "192.168.100.4/24"
)
docker build -t nebula:smoke-relay .

View File

@@ -11,6 +11,11 @@ mkdir ./build
cp ../../../../build/linux-amd64/nebula .
cp ../../../../build/linux-amd64/nebula-cert .
if [ "$1" ]
then
cp "../../../../build/$1/nebula" "$1-nebula"
fi
HOST="lighthouse1" \
AM_LIGHTHOUSE=true \
../genconfig.sh >lighthouse1.yml
@@ -29,11 +34,11 @@ mkdir ./build
OUTBOUND='[{"port": "any", "proto": "icmp", "group": "lighthouse"}]' \
../genconfig.sh >host4.yml
../../../../nebula-cert ca -name "Smoke Test"
../../../../nebula-cert ca -curve "${CURVE:-25519}" -name "Smoke Test"
../../../../nebula-cert sign -name "lighthouse1" -groups "lighthouse,lighthouse1" -ip "192.168.100.1/24"
../../../../nebula-cert sign -name "host2" -groups "host,host2" -ip "192.168.100.2/24"
../../../../nebula-cert sign -name "host3" -groups "host,host3" -ip "192.168.100.3/24"
../../../../nebula-cert sign -name "host4" -groups "host,host4" -ip "192.168.100.4/24"
)
sudo docker build -t nebula:smoke .
docker build -t "nebula:${NAME:-smoke}" .

View File

@@ -40,15 +40,20 @@ pki:
lighthouse:
am_lighthouse: ${AM_LIGHTHOUSE:-false}
hosts: $(lighthouse_hosts)
remote_allow_list: ${REMOTE_ALLOW_LIST}
listen:
host: 0.0.0.0
port: ${LISTEN_PORT:-4242}
tun:
dev: ${TUN_DEV:-nebula1}
dev: ${TUN_DEV:-tun0}
firewall:
inbound_action: reject
outbound_action: reject
outbound: ${OUTBOUND:-$FIREWALL_ALL}
inbound: ${INBOUND:-$FIREWALL_ALL}
$(test -t 0 || cat)
EOF

85
.github/workflows/smoke/smoke-relay.sh vendored Executable file
View File

@@ -0,0 +1,85 @@
#!/bin/bash
set -e -x
set -o pipefail
mkdir -p logs
cleanup() {
echo
echo " *** cleanup"
echo
set +e
if [ "$(jobs -r)" ]
then
docker kill lighthouse1 host2 host3 host4
fi
}
trap cleanup EXIT
docker run --name lighthouse1 --rm nebula:smoke-relay -config lighthouse1.yml -test
docker run --name host2 --rm nebula:smoke-relay -config host2.yml -test
docker run --name host3 --rm nebula:smoke-relay -config host3.yml -test
docker run --name host4 --rm nebula:smoke-relay -config host4.yml -test
docker run --name lighthouse1 --device /dev/net/tun:/dev/net/tun --cap-add NET_ADMIN --rm nebula:smoke-relay -config lighthouse1.yml 2>&1 | tee logs/lighthouse1 | sed -u 's/^/ [lighthouse1] /' &
sleep 1
docker run --name host2 --device /dev/net/tun:/dev/net/tun --cap-add NET_ADMIN --rm nebula:smoke-relay -config host2.yml 2>&1 | tee logs/host2 | sed -u 's/^/ [host2] /' &
sleep 1
docker run --name host3 --device /dev/net/tun:/dev/net/tun --cap-add NET_ADMIN --rm nebula:smoke-relay -config host3.yml 2>&1 | tee logs/host3 | sed -u 's/^/ [host3] /' &
sleep 1
docker run --name host4 --device /dev/net/tun:/dev/net/tun --cap-add NET_ADMIN --rm nebula:smoke-relay -config host4.yml 2>&1 | tee logs/host4 | sed -u 's/^/ [host4] /' &
sleep 1
set +x
echo
echo " *** Testing ping from lighthouse1"
echo
set -x
docker exec lighthouse1 ping -c1 192.168.100.2
docker exec lighthouse1 ping -c1 192.168.100.3
docker exec lighthouse1 ping -c1 192.168.100.4
set +x
echo
echo " *** Testing ping from host2"
echo
set -x
docker exec host2 ping -c1 192.168.100.1
# Should fail because no relay configured in this direction
! docker exec host2 ping -c1 192.168.100.3 -w5 || exit 1
! docker exec host2 ping -c1 192.168.100.4 -w5 || exit 1
set +x
echo
echo " *** Testing ping from host3"
echo
set -x
docker exec host3 ping -c1 192.168.100.1
docker exec host3 ping -c1 192.168.100.2
docker exec host3 ping -c1 192.168.100.4
set +x
echo
echo " *** Testing ping from host4"
echo
set -x
docker exec host4 ping -c1 192.168.100.1
# Should fail because relays not allowed
! docker exec host4 ping -c1 192.168.100.2 -w5 || exit 1
docker exec host4 ping -c1 192.168.100.3
docker exec host4 sh -c 'kill 1'
docker exec host3 sh -c 'kill 1'
docker exec host2 sh -c 'kill 1'
docker exec lighthouse1 sh -c 'kill 1'
sleep 5
if [ "$(jobs -r)" ]
then
echo "nebula still running after SIGTERM sent" >&2
exit 1
fi

105
.github/workflows/smoke/smoke-vagrant.sh vendored Executable file
View File

@@ -0,0 +1,105 @@
#!/bin/bash
set -e -x
set -o pipefail
export VAGRANT_CWD="$PWD/vagrant-$1"
mkdir -p logs
cleanup() {
echo
echo " *** cleanup"
echo
set +e
if [ "$(jobs -r)" ]
then
docker kill lighthouse1 host2
fi
vagrant destroy -f
}
trap cleanup EXIT
CONTAINER="nebula:${NAME:-smoke}"
docker run --name lighthouse1 --rm "$CONTAINER" -config lighthouse1.yml -test
docker run --name host2 --rm "$CONTAINER" -config host2.yml -test
vagrant up
vagrant ssh -c "cd /nebula && /nebula/$1-nebula -config host3.yml -test"
docker run --name lighthouse1 --device /dev/net/tun:/dev/net/tun --cap-add NET_ADMIN --rm "$CONTAINER" -config lighthouse1.yml 2>&1 | tee logs/lighthouse1 | sed -u 's/^/ [lighthouse1] /' &
sleep 1
docker run --name host2 --device /dev/net/tun:/dev/net/tun --cap-add NET_ADMIN --rm "$CONTAINER" -config host2.yml 2>&1 | tee logs/host2 | sed -u 's/^/ [host2] /' &
sleep 1
vagrant ssh -c "cd /nebula && sudo sh -c 'echo \$\$ >/nebula/pid && exec /nebula/$1-nebula -config host3.yml'" &
sleep 15
# grab tcpdump pcaps for debugging
docker exec lighthouse1 tcpdump -i nebula1 -q -w - -U 2>logs/lighthouse1.inside.log >logs/lighthouse1.inside.pcap &
docker exec lighthouse1 tcpdump -i eth0 -q -w - -U 2>logs/lighthouse1.outside.log >logs/lighthouse1.outside.pcap &
docker exec host2 tcpdump -i nebula1 -q -w - -U 2>logs/host2.inside.log >logs/host2.inside.pcap &
docker exec host2 tcpdump -i eth0 -q -w - -U 2>logs/host2.outside.log >logs/host2.outside.pcap &
# vagrant ssh -c "tcpdump -i nebula1 -q -w - -U" 2>logs/host3.inside.log >logs/host3.inside.pcap &
# vagrant ssh -c "tcpdump -i eth0 -q -w - -U" 2>logs/host3.outside.log >logs/host3.outside.pcap &
docker exec host2 ncat -nklv 0.0.0.0 2000 &
vagrant ssh -c "ncat -nklv 0.0.0.0 2000" &
#docker exec host2 ncat -e '/usr/bin/echo host2' -nkluv 0.0.0.0 3000 &
#vagrant ssh -c "ncat -e '/usr/bin/echo host3' -nkluv 0.0.0.0 3000" &
set +x
echo
echo " *** Testing ping from lighthouse1"
echo
set -x
docker exec lighthouse1 ping -c1 192.168.100.2
docker exec lighthouse1 ping -c1 192.168.100.3
set +x
echo
echo " *** Testing ping from host2"
echo
set -x
docker exec host2 ping -c1 192.168.100.1
# Should fail because not allowed by host3 inbound firewall
! docker exec host2 ping -c1 192.168.100.3 -w5 || exit 1
set +x
echo
echo " *** Testing ncat from host2"
echo
set -x
# Should fail because not allowed by host3 inbound firewall
#! docker exec host2 ncat -nzv -w5 192.168.100.3 2000 || exit 1
#! docker exec host2 ncat -nzuv -w5 192.168.100.3 3000 | grep -q host3 || exit 1
set +x
echo
echo " *** Testing ping from host3"
echo
set -x
vagrant ssh -c "ping -c1 192.168.100.1"
vagrant ssh -c "ping -c1 192.168.100.2"
set +x
echo
echo " *** Testing ncat from host3"
echo
set -x
#vagrant ssh -c "ncat -nzv -w5 192.168.100.2 2000"
#vagrant ssh -c "ncat -nzuv -w5 192.168.100.2 3000" | grep -q host2
vagrant ssh -c "sudo xargs kill </nebula/pid"
docker exec host2 sh -c 'kill 1'
docker exec lighthouse1 sh -c 'kill 1'
sleep 1
if [ "$(jobs -r)" ]
then
echo "nebula still running after SIGTERM sent" >&2
exit 1
fi

View File

@@ -7,63 +7,112 @@ set -o pipefail
mkdir -p logs
cleanup() {
echo
echo " *** cleanup"
echo
set +e
if [ "$(jobs -r)" ]
then
sudo docker kill lighthouse1 host2 host3 host4
docker kill lighthouse1 host2 host3 host4
fi
}
trap cleanup EXIT
sudo docker run --name lighthouse1 --rm nebula:smoke -config lighthouse1.yml -test
sudo docker run --name host2 --rm nebula:smoke -config host2.yml -test
sudo docker run --name host3 --rm nebula:smoke -config host3.yml -test
sudo docker run --name host4 --rm nebula:smoke -config host4.yml -test
CONTAINER="nebula:${NAME:-smoke}"
sudo docker run --name lighthouse1 --device /dev/net/tun:/dev/net/tun --cap-add NET_ADMIN --rm nebula:smoke -config lighthouse1.yml 2>&1 | tee logs/lighthouse1 &
docker run --name lighthouse1 --rm "$CONTAINER" -config lighthouse1.yml -test
docker run --name host2 --rm "$CONTAINER" -config host2.yml -test
docker run --name host3 --rm "$CONTAINER" -config host3.yml -test
docker run --name host4 --rm "$CONTAINER" -config host4.yml -test
docker run --name lighthouse1 --device /dev/net/tun:/dev/net/tun --cap-add NET_ADMIN --rm "$CONTAINER" -config lighthouse1.yml 2>&1 | tee logs/lighthouse1 | sed -u 's/^/ [lighthouse1] /' &
sleep 1
sudo docker run --name host2 --device /dev/net/tun:/dev/net/tun --cap-add NET_ADMIN --rm nebula:smoke -config host2.yml 2>&1 | tee logs/host2 &
docker run --name host2 --device /dev/net/tun:/dev/net/tun --cap-add NET_ADMIN --rm "$CONTAINER" -config host2.yml 2>&1 | tee logs/host2 | sed -u 's/^/ [host2] /' &
sleep 1
sudo docker run --name host3 --device /dev/net/tun:/dev/net/tun --cap-add NET_ADMIN --rm nebula:smoke -config host3.yml 2>&1 | tee logs/host3 &
docker run --name host3 --device /dev/net/tun:/dev/net/tun --cap-add NET_ADMIN --rm "$CONTAINER" -config host3.yml 2>&1 | tee logs/host3 | sed -u 's/^/ [host3] /' &
sleep 1
sudo docker run --name host4 --device /dev/net/tun:/dev/net/tun --cap-add NET_ADMIN --rm nebula:smoke -config host4.yml 2>&1 | tee logs/host4 &
docker run --name host4 --device /dev/net/tun:/dev/net/tun --cap-add NET_ADMIN --rm "$CONTAINER" -config host4.yml 2>&1 | tee logs/host4 | sed -u 's/^/ [host4] /' &
sleep 1
# grab tcpdump pcaps for debugging
docker exec lighthouse1 tcpdump -i nebula1 -q -w - -U 2>logs/lighthouse1.inside.log >logs/lighthouse1.inside.pcap &
docker exec lighthouse1 tcpdump -i eth0 -q -w - -U 2>logs/lighthouse1.outside.log >logs/lighthouse1.outside.pcap &
docker exec host2 tcpdump -i nebula1 -q -w - -U 2>logs/host2.inside.log >logs/host2.inside.pcap &
docker exec host2 tcpdump -i eth0 -q -w - -U 2>logs/host2.outside.log >logs/host2.outside.pcap &
docker exec host3 tcpdump -i nebula1 -q -w - -U 2>logs/host3.inside.log >logs/host3.inside.pcap &
docker exec host3 tcpdump -i eth0 -q -w - -U 2>logs/host3.outside.log >logs/host3.outside.pcap &
docker exec host4 tcpdump -i nebula1 -q -w - -U 2>logs/host4.inside.log >logs/host4.inside.pcap &
docker exec host4 tcpdump -i eth0 -q -w - -U 2>logs/host4.outside.log >logs/host4.outside.pcap &
docker exec host2 ncat -nklv 0.0.0.0 2000 &
docker exec host3 ncat -nklv 0.0.0.0 2000 &
docker exec host2 ncat -e '/usr/bin/echo host2' -nkluv 0.0.0.0 3000 &
docker exec host3 ncat -e '/usr/bin/echo host3' -nkluv 0.0.0.0 3000 &
set +x
echo
echo " *** Testing ping from lighthouse1"
echo
set -x
sudo docker exec lighthouse1 ping -c1 192.168.100.2
sudo docker exec lighthouse1 ping -c1 192.168.100.3
docker exec lighthouse1 ping -c1 192.168.100.2
docker exec lighthouse1 ping -c1 192.168.100.3
set +x
echo
echo " *** Testing ping from host2"
echo
set -x
sudo docker exec host2 ping -c1 192.168.100.1
docker exec host2 ping -c1 192.168.100.1
# Should fail because not allowed by host3 inbound firewall
! sudo docker exec host2 ping -c1 192.168.100.3 -w5 || exit 1
! docker exec host2 ping -c1 192.168.100.3 -w5 || exit 1
set +x
echo
echo " *** Testing ncat from host2"
echo
set -x
# Should fail because not allowed by host3 inbound firewall
! docker exec host2 ncat -nzv -w5 192.168.100.3 2000 || exit 1
! docker exec host2 ncat -nzuv -w5 192.168.100.3 3000 | grep -q host3 || exit 1
set +x
echo
echo " *** Testing ping from host3"
echo
set -x
sudo docker exec host3 ping -c1 192.168.100.1
sudo docker exec host3 ping -c1 192.168.100.2
docker exec host3 ping -c1 192.168.100.1
docker exec host3 ping -c1 192.168.100.2
set +x
echo
echo " *** Testing ncat from host3"
echo
set -x
docker exec host3 ncat -nzv -w5 192.168.100.2 2000
docker exec host3 ncat -nzuv -w5 192.168.100.2 3000 | grep -q host2
set +x
echo
echo " *** Testing ping from host4"
echo
set -x
sudo docker exec host4 ping -c1 192.168.100.1
docker exec host4 ping -c1 192.168.100.1
# Should fail because not allowed by host4 outbound firewall
! sudo docker exec host4 ping -c1 192.168.100.2 -w5 || exit 1
! sudo docker exec host4 ping -c1 192.168.100.3 -w5 || exit 1
! docker exec host4 ping -c1 192.168.100.2 -w5 || exit 1
! docker exec host4 ping -c1 192.168.100.3 -w5 || exit 1
set +x
echo
echo " *** Testing ncat from host4"
echo
set -x
# Should fail because not allowed by host4 outbound firewall
! docker exec host4 ncat -nzv -w5 192.168.100.2 2000 || exit 1
! docker exec host4 ncat -nzv -w5 192.168.100.3 2000 || exit 1
! docker exec host4 ncat -nzuv -w5 192.168.100.2 3000 | grep -q host2 || exit 1
! docker exec host4 ncat -nzuv -w5 192.168.100.3 3000 | grep -q host3 || exit 1
set +x
echo
@@ -71,13 +120,19 @@ echo " *** Testing conntrack"
echo
set -x
# host2 can ping host3 now that host3 pinged it first
sudo docker exec host2 ping -c1 192.168.100.3
docker exec host2 ping -c1 192.168.100.3
# host4 can ping host2 once conntrack established
sudo docker exec host2 ping -c1 192.168.100.4
sudo docker exec host4 ping -c1 192.168.100.2
docker exec host2 ping -c1 192.168.100.4
docker exec host4 ping -c1 192.168.100.2
sudo docker exec host4 sh -c 'kill 1'
sudo docker exec host3 sh -c 'kill 1'
sudo docker exec host2 sh -c 'kill 1'
sudo docker exec lighthouse1 sh -c 'kill 1'
sleep 1
docker exec host4 sh -c 'kill 1'
docker exec host3 sh -c 'kill 1'
docker exec host2 sh -c 'kill 1'
docker exec lighthouse1 sh -c 'kill 1'
sleep 5
if [ "$(jobs -r)" ]
then
echo "nebula still running after SIGTERM sent" >&2
exit 1
fi

View File

@@ -0,0 +1,7 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
config.vm.box = "generic/freebsd14"
config.vm.synced_folder "../build", "/nebula", type: "rsync"
end

View File

@@ -0,0 +1,7 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
config.vm.box = "ubuntu/xenial32"
config.vm.synced_folder "../build", "/nebula"
end

View File

@@ -0,0 +1,16 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
config.vm.box = "ubuntu/jammy64"
config.vm.synced_folder "../build", "/nebula"
config.vm.provision :shell do |shell|
shell.inline = <<-EOF
sed -i 's/GRUB_CMDLINE_LINUX=""/GRUB_CMDLINE_LINUX="ipv6.disable=1"/' /etc/default/grub
update-grub
EOF
shell.privileged = true
shell.reboot = true
end
end

View File

@@ -0,0 +1,7 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
config.vm.box = "generic/netbsd9"
config.vm.synced_folder "../build", "/nebula", type: "rsync"
end

View File

@@ -0,0 +1,7 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
config.vm.box = "generic/openbsd7"
config.vm.synced_folder "../build", "/nebula", type: "rsync"
end

View File

@@ -18,54 +18,69 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Set up Go 1.17
uses: actions/setup-go@v2
with:
go-version: 1.17
id: go
- uses: actions/checkout@v4
- name: Check out code into the Go module directory
uses: actions/checkout@v2
- uses: actions/cache@v2
- uses: actions/setup-go@v5
with:
path: ~/go/pkg/mod
key: ${{ runner.os }}-go1.17-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go1.17-
go-version: '1.22'
check-latest: true
- name: Build
run: make all
- name: Vet
run: make vet
- name: Test
run: make test
- name: End 2 end
run: make e2evv
- name: Build test mobile
run: make build-test-mobile
- uses: actions/upload-artifact@v4
with:
name: e2e packet flow linux-latest
path: e2e/mermaid/linux-latest
if-no-files-found: warn
test-linux-boringcrypto:
name: Build and test on linux with boringcrypto
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-go@v5
with:
go-version: '1.22'
check-latest: true
- name: Build
run: make bin-boringcrypto
- name: Test
run: make test-boringcrypto
- name: End 2 end
run: make e2evv GOEXPERIMENT=boringcrypto CGO_ENABLED=1
test:
name: Build and test on ${{ matrix.os }}
runs-on: ${{ matrix.os }}
strategy:
matrix:
os: [windows-latest, macOS-latest]
os: [windows-latest, macos-latest]
steps:
- name: Set up Go 1.17
uses: actions/setup-go@v2
with:
go-version: 1.17
id: go
- uses: actions/checkout@v4
- name: Check out code into the Go module directory
uses: actions/checkout@v2
- uses: actions/cache@v2
- uses: actions/setup-go@v5
with:
path: ~/go/pkg/mod
key: ${{ runner.os }}-go1.17-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go1.17-
go-version: '1.22'
check-latest: true
- name: Build nebula
run: go build ./cmd/nebula
@@ -73,8 +88,17 @@ jobs:
- name: Build nebula-cert
run: go build ./cmd/nebula-cert
- name: Vet
run: make vet
- name: Test
run: go test -v ./...
run: make test
- name: End 2 end
run: make e2evv
- uses: actions/upload-artifact@v4
with:
name: e2e packet flow ${{ matrix.os }}
path: e2e/mermaid/${{ matrix.os }}
if-no-files-found: warn

9
.gitignore vendored
View File

@@ -4,9 +4,14 @@
/nebula-arm6
/nebula-darwin
/nebula.exe
/cert/*.crt
/cert/*.key
/nebula-cert.exe
/coverage.out
/cpu.pprof
/build
/*.tar.gz
/e2e/mermaid/
**.crt
**.key
**.pem
!/examples/quickstart-vagrant/ansible/roles/nebula/files/vagrant-test-ca.key
!/examples/quickstart-vagrant/ansible/roles/nebula/files/vagrant-test-ca.crt

View File

@@ -7,6 +7,346 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
## [Unreleased]
## [1.9.3] - 2024-06-06
### Fixed
- Initialize messageCounter to 2 instead of verifying later. (#1156)
## [1.9.2] - 2024-06-03
### Fixed
- Ensure messageCounter is set before handshake is complete. (#1154)
## [1.9.1] - 2024-05-29
### Fixed
- Fixed a potential deadlock in GetOrHandshake. (#1151)
## [1.9.0] - 2024-05-07
### Deprecated
- This release adds a new setting `default_local_cidr_any` that defaults to
true to match previous behavior, but will default to false in the next
release (1.10). When set to false, `local_cidr` is matched correctly for
firewall rules on hosts acting as unsafe routers, and should be set for any
firewall rules you want to allow unsafe route hosts to access. See the issue
and example config for more details. (#1071, #1099)
### Added
- Nebula now has an official Docker image `nebulaoss/nebula` that is
distroless and contains just the `nebula` and `nebula-cert` binaries. You
can find it here: https://hub.docker.com/r/nebulaoss/nebula (#1037)
- Experimental binaries for `loong64` are now provided. (#1003)
- Added example service script for OpenRC. (#711)
- The SSH daemon now supports inlined host keys. (#1054)
- The SSH daemon now supports certificates with `sshd.trusted_cas`. (#1098)
### Changed
- Config setting `tun.unsafe_routes` is now reloadable. (#1083)
- Small documentation and internal improvements. (#1065, #1067, #1069, #1108,
#1109, #1111, #1135)
- Various dependency updates. (#1139, #1138, #1134, #1133, #1126, #1123, #1110,
#1094, #1092, #1087, #1086, #1085, #1072, #1063, #1059, #1055, #1053, #1047,
#1046, #1034, #1022)
### Removed
- Support for the deprecated `local_range` option has been removed. Please
change to `preferred_ranges` (which is also now reloadable). (#1043)
- We are now building with go1.22, which means that for Windows you need at
least Windows 10 or Windows Server 2016. This is because support for earlier
versions was removed in Go 1.21. See https://go.dev/doc/go1.21#windows (#981)
- Removed vagrant example, as it was unmaintained. (#1129)
- Removed Fedora and Arch nebula.service files, as they are maintained in the
upstream repos. (#1128, #1132)
- Remove the TCP round trip tracking metrics, as they never had correct data
and were an experiment to begin with. (#1114)
### Fixed
- Fixed a potential deadlock introduced in 1.8.1. (#1112)
- Fixed support for Linux when IPv6 has been disabled at the OS level. (#787)
- DNS will return NXDOMAIN now when there are no results. (#845)
- Allow `::` in `lighthouse.dns.host`. (#1115)
- Capitalization of `NotAfter` fixed in DNS TXT response. (#1127)
- Don't log invalid certificates. It is untrusted data and can cause a large
volume of logs. (#1116)
## [1.8.2] - 2024-01-08
### Fixed
- Fix multiple routines when listen.port is zero. This was a regression
introduced in v1.6.0. (#1057)
### Changed
- Small dependency update for Noise. (#1038)
## [1.8.1] - 2023-12-19
### Security
- Update `golang.org/x/crypto`, which includes a fix for CVE-2023-48795. (#1048)
### Fixed
- Fix a deadlock introduced in v1.8.0 that could occur during handshakes. (#1044)
- Fix mobile builds. (#1035)
## [1.8.0] - 2023-12-06
### Deprecated
- The next minor release of Nebula, 1.9.0, will require at least Windows 10 or
Windows Server 2016. This is because support for earlier versions was removed
in Go 1.21. See https://go.dev/doc/go1.21#windows
### Added
- Linux: Notify systemd of service readiness. This should resolve timing issues
with services that depend on Nebula being active. For an example of how to
enable this, see: `examples/service_scripts/nebula.service`. (#929)
- Windows: Use Registered IO (RIO) when possible. Testing on a Windows 11
machine shows ~50x improvement in throughput. (#905)
- NetBSD, OpenBSD: Added rudimentary support. (#916, #812)
- FreeBSD: Add support for naming tun devices. (#903)
### Changed
- `pki.disconnect_invalid` will now default to true. This means that once a
certificate expires, the tunnel will be disconnected. If you use SIGHUP to
reload certificates without restarting Nebula, you should ensure all of your
clients are on 1.7.0 or newer before you enable this feature. (#859)
- Limit how often a busy tunnel can requery the lighthouse. The new config
option `timers.requery_wait_duration` defaults to `60s`. (#940)
- The internal structures for hostmaps were refactored to reduce memory usage
and the potential for subtle bugs. (#843, #938, #953, #954, #955)
- Lots of dependency updates.
### Fixed
- Windows: Retry wintun device creation if it fails the first time. (#985)
- Fix issues with firewall reject packets that could cause panics. (#957)
- Fix relay migration during re-handshakes. (#964)
- Various other refactors and fixes. (#935, #952, #972, #961, #996, #1002,
#987, #1004, #1030, #1032, ...)
## [1.7.2] - 2023-06-01
### Fixed
- Fix a freeze during config reload if the `static_host_map` config was changed. (#886)
## [1.7.1] - 2023-05-18
### Fixed
- Fix IPv4 addresses returned by `static_host_map` DNS lookup queries being
treated as IPv6 addresses. (#877)
## [1.7.0] - 2023-05-17
### Added
- `nebula-cert ca` now supports encrypting the CA's private key with a
passphrase. Pass `-encrypt` in order to be prompted for a passphrase.
Encryption is performed using AES-256-GCM and Argon2id for KDF. KDF
parameters default to RFC recommendations, but can be overridden via CLI
flags `-argon-memory`, `-argon-parallelism`, and `-argon-iterations`. (#386)
- Support for curve P256 and BoringCrypto has been added. See README section
"Curve P256 and BoringCrypto" for more details. (#865, #861, #769, #856, #803)
- New firewall rule `local_cidr`. This could be used to filter destinations
when using `unsafe_routes`. (#507)
- Add `unsafe_route` option `install`. This controls whether the route is
installed in the system's routing table. (#831)
- Add `tun.use_system_route_table` option. Set to true to manage unsafe routes
directly on the system route table with gateway routes instead of in Nebula
configuration files. This is only supported on Linux. (#839)
- The metric `certificate.ttl_seconds` is now exposed via stats. (#782)
- Add `punchy.respond_delay` option. This allows you to change the delay
before attempting punchy.respond. Default is 5 seconds. (#721)
- Added SSH commands to allow the capture of a mutex profile. (#737)
- You can now set `lighthouse.calculated_remotes` to make it possible to do
handshakes without a lighthouse in certain configurations. (#759)
- The firewall can be configured to send REJECT replies instead of the default
DROP behavior. (#738)
- For macOS, an example launchd configuration file is now provided. (#762)
### Changed
- Lighthouses and other `static_host_map` entries that use DNS names will now
be automatically refreshed to detect when the IP address changes. (#796)
- Lighthouses send ACK replies back to clients so that they do not fall into
connection testing as often by clients. (#851, #408)
- Allow the `listen.host` option to contain a hostname. (#825)
- When Nebula switches to a new certificate (such as via SIGHUP), we now
rehandshake with all existing tunnels. This allows firewall groups to be
updated and `pki.disconnect_invalid` to know about the new certificate
expiration time. (#838, #857, #842, #840, #835, #828, #820, #807)
### Fixed
- Always disconnect blocklisted hosts, even if `pki.disconnect_invalid` is
not set. (#858)
- Dependencies updated and go1.20 required. (#780, #824, #855, #854)
- Fix possible race condition with relays. (#827)
- FreeBSD: Fix connection to the localhost's own Nebula IP. (#808)
- Normalize and document some common log field values. (#837, #811)
- Fix crash if you set unlucky values for the firewall timeout configuration
options. (#802)
- Make DNS queries case insensitive. (#793)
- Update example systemd configurations to want `nss-lookup`. (#791)
- Errors with SSH commands now go to the SSH tunnel instead of stderr. (#757)
- Fix a hang when shutting down Android. (#772)
## [1.6.1] - 2022-09-26
### Fixed
- Refuse to process underlay packets received from overlay IPs. This prevents
confusion on hosts that have unsafe routes configured. (#741)
- The ssh `reload` command did not work on Windows, since it relied on sending
a SIGHUP signal internally. This has been fixed. (#725)
- A regression in v1.5.2 that broke unsafe routes on Mobile clients has been
fixed. (#729)
## [1.6.0] - 2022-06-30
### Added
- Experimental: nebula clients can be configured to act as relays for other nebula clients.
Primarily useful when stubborn NATs make a direct tunnel impossible. (#678)
- Configuration option to report manually specified `ip:port`s to lighthouses. (#650)
- Windows arm64 build. (#638)
- `punchy` and most `lighthouse` config options now support hot reloading. (#649)
### Changed
- Build against go 1.18. (#656)
- Promoted `routines` config from experimental to supported feature. (#702)
- Dependencies updated. (#664)
### Fixed
- Packets destined for the same host that sent it will be returned on MacOS.
This matches the default behavior of other operating systems. (#501)
- `unsafe_route` configuration will no longer crash on Windows. (#648)
- A few panics that were introduced in 1.5.x. (#657, #658, #675)
### Security
- You can set `listen.send_recv_error` to control the conditions in which
`recv_error` messages are sent. Sending these messages can expose the fact
that Nebula is running on a host, but it speeds up re-handshaking. (#670)
### Removed
- `x509` config stanza support has been removed. (#685)
## [1.5.2] - 2021-12-14
### Added
- Warn when a non lighthouse node does not have lighthouse hosts configured. (#587)
### Changed
- No longer fatals if expired CA certificates are present in `pki.ca`, as long as 1 valid CA is present. (#599)
- `nebula-cert` will now enforce ipv4 addresses. (#604)
- Warn on macOS if an unsafe route cannot be created due to a collision with an
existing route. (#610)
- Warn if you set a route MTU on platforms where we don't support it. (#611)
### Fixed
- Rare race condition when tearing down a tunnel due to `recv_error` and sending packets on another thread. (#590)
- Bug in `routes` and `unsafe_routes` handling that was introduced in 1.5.0. (#595)
- `-test` mode no longer results in a crash. (#602)
### Removed
- `x509.ca` config alias for `pki.ca`. (#604)
### Security
- Upgraded `golang.org/x/crypto` to address an issue which allowed unauthenticated clients to cause a panic in SSH
servers. (#603)
## 1.5.1 - 2021-12-13
(This release was skipped due to discovering #610 and #611 after the tag was
created.)
## [1.5.0] - 2021-11-11
### Added
- SSH `print-cert` has a new `-raw` flag to get the PEM representation of a certificate. (#483)
@@ -20,12 +360,27 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
certificates since the last handshake. (#370)
- New config option `unsafe_routes.<route>.metric` will set a metric for a specific unsafe route. It's useful if you have
more than one identical route and want to prefer one against the other.
more than one identical route and want to prefer one against the other. (#353)
### Changed
- Build against go 1.17. (#553)
- Build with `CGO_ENABLED=0` set, to create more portable binaries. This could
have an effect on DNS resolution if you rely on anything non-standard. (#421)
- Windows now uses the [wintun](https://www.wintun.net/) driver which does not require installation. This driver
is a large improvement over the TAP driver that was used in previous versions. If you had a previous version
of `nebula` running, you will want to disable the tap driver in Control Panel, or uninstall the `tap0901` driver
before running this version. (#289)
- Darwin binaries are now universal (works on both amd64 and arm64), signed, and shipped in a notarized zip file.
`nebula-darwin.zip` will be the only darwin release artifact. (#571)
- Darwin uses syscalls and AF_ROUTE to configure the routing table, instead of
using `/sbin/route`. Setting `tun.dev` is now allowed on Darwin as well, it
must be in the format `utun[0-9]+` or it will be ignored. (#163)
### Deprecated
- The `preferred_ranges` option has been supported as a replacement for
@@ -45,10 +400,12 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
will immediately switch to a preferred remote address after the reception of
a handshake packet (instead of waiting until 1,000 packets have been sent).
(#532)
- A race condition when `punchy.respond` is enabled; the fix ensures the correct
vpn ip is sent a punch back response on a highly queried node. (#566)
- Fix a rare crash during handshake due to a race condition. (#535)
## [1.4.0] - 2021-05-11
### Added
@@ -287,7 +644,21 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- Initial public release.
[Unreleased]: https://github.com/slackhq/nebula/compare/v1.4.0...HEAD
[Unreleased]: https://github.com/slackhq/nebula/compare/v1.9.3...HEAD
[1.9.3]: https://github.com/slackhq/nebula/releases/tag/v1.9.3
[1.9.2]: https://github.com/slackhq/nebula/releases/tag/v1.9.2
[1.9.1]: https://github.com/slackhq/nebula/releases/tag/v1.9.1
[1.9.0]: https://github.com/slackhq/nebula/releases/tag/v1.9.0
[1.8.2]: https://github.com/slackhq/nebula/releases/tag/v1.8.2
[1.8.1]: https://github.com/slackhq/nebula/releases/tag/v1.8.1
[1.8.0]: https://github.com/slackhq/nebula/releases/tag/v1.8.0
[1.7.2]: https://github.com/slackhq/nebula/releases/tag/v1.7.2
[1.7.1]: https://github.com/slackhq/nebula/releases/tag/v1.7.1
[1.7.0]: https://github.com/slackhq/nebula/releases/tag/v1.7.0
[1.6.1]: https://github.com/slackhq/nebula/releases/tag/v1.6.1
[1.6.0]: https://github.com/slackhq/nebula/releases/tag/v1.6.0
[1.5.2]: https://github.com/slackhq/nebula/releases/tag/v1.5.2
[1.5.0]: https://github.com/slackhq/nebula/releases/tag/v1.5.0
[1.4.0]: https://github.com/slackhq/nebula/releases/tag/v1.4.0
[1.3.0]: https://github.com/slackhq/nebula/releases/tag/v1.3.0
[1.2.0]: https://github.com/slackhq/nebula/releases/tag/v1.2.0

37
LOGGING.md Normal file
View File

@@ -0,0 +1,37 @@
### Logging conventions
A log message (the string/format passed to `Info`, `Error`, `Debug`, etc., as well as their `Sprintf` counterparts) should
be a descriptive message about the event and may contain specific identifying characteristics. Regardless of the
level of detail in the message, identifying characteristics should always be included via `WithField`, `WithFields`, or
`WithError`.
If an error is being logged, use `l.WithError(err)` so that there is better discoverability about the event as well
as the specific error condition.
#### Common fields
- `cert` - a `cert.NebulaCertificate` object; do not `.String()` this manually, as `logrus` will marshal objects properly
for the formatter it is using.
- `fingerprint` - a single `NebulaCertificate` hex encoded fingerprint
- `fingerprints` - an array of `NebulaCertificate` hex encoded fingerprints
- `fwPacket` - a FirewallPacket object
- `handshake` - an object containing:
- `stage` - the current stage counter
- `style` - noise handshake style `ix_psk0`, `xx`, etc
- `header` - a nebula header object
- `udpAddr` - a `net.UDPAddr` object
- `udpIp` - a udp ip address
- `vpnIp` - vpn ip of the host (remote or local)
- `relay` - the vpnIp of the relay host that is or should be handling the relay packet
- `relayFrom` - The vpnIp of the initial sender of the relayed packet
- `relayTo` - The vpnIp of the final destination of a relayed packet
#### Example:
```
l.WithError(err).
WithField("vpnIp", IntIp(hostinfo.hostId)).
WithField("udpAddr", addr).
WithField("handshake", m{"stage": 1, "style": "ix"}).
Info("Invalid certificate from host")
```
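When several identifying characteristics are attached at once, `WithFields` with a `logrus.Fields` map is equivalent to chaining `WithField` calls. A minimal self-contained sketch in the same spirit as the example above (the logger setup, message, and field values are illustrative, not taken from the codebase):
```
package main

import (
	"errors"

	"github.com/sirupsen/logrus"
)

func main() {
	l := logrus.New()
	err := errors.New("certificate is expired")

	// Equivalent to chaining WithField calls; the error stays discoverable via WithError.
	l.WithError(err).
		WithFields(logrus.Fields{
			"vpnIp":   "192.168.100.2",
			"udpAddr": "10.1.1.1:4242",
		}).
		Info("Invalid certificate from host")
}
```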

View File

@@ -1,18 +1,14 @@
GOMINVERSION = 1.17
NEBULA_CMD_PATH = "./cmd/nebula"
GO111MODULE = on
export GO111MODULE
CGO_ENABLED = 0
export CGO_ENABLED
# Set up OS specific bits
ifeq ($(OS),Windows_NT)
#TODO: we should be able to ditch awk as well
GOVERSION := $(shell go version | awk "{print substr($$3, 3)}")
GOISMIN := $(shell IF "$(GOVERSION)" GEQ "$(GOMINVERSION)" ECHO 1)
NEBULA_CMD_SUFFIX = .exe
NULL_FILE = nul
# RIO on windows does pointer stuff that makes go vet angry
VET_FLAGS = -unsafeptr=false
else
GOVERSION := $(shell go version | awk '{print substr($$3, 3)}')
GOISMIN := $(shell expr "$(GOVERSION)" ">=" "$(GOMINVERSION)")
NEBULA_CMD_SUFFIX =
NULL_FILE = /dev/null
endif
@@ -26,6 +22,9 @@ ifndef BUILD_NUMBER
endif
endif
DOCKER_IMAGE_REPO ?= nebulaoss/nebula
DOCKER_IMAGE_TAG ?= latest
LDFLAGS = -X main.Build=$(BUILD_NUMBER)
ALL_LINUX = linux-amd64 \
@@ -40,13 +39,26 @@ ALL_LINUX = linux-amd64 \
linux-mips64 \
linux-mips64le \
linux-mips-softfloat \
linux-riscv64
linux-riscv64 \
linux-loong64
ALL_FREEBSD = freebsd-amd64 \
freebsd-arm64
ALL_OPENBSD = openbsd-amd64 \
openbsd-arm64
ALL_NETBSD = netbsd-amd64 \
netbsd-arm64
ALL = $(ALL_LINUX) \
$(ALL_FREEBSD) \
$(ALL_OPENBSD) \
$(ALL_NETBSD) \
darwin-amd64 \
darwin-arm64 \
freebsd-amd64 \
windows-amd64
windows-amd64 \
windows-arm64
e2e:
$(TEST_ENV) go test -tags=e2e_testing -count=1 $(TEST_FLAGS) ./e2e
@@ -63,25 +75,47 @@ e2evvv: e2ev
e2evvvv: TEST_ENV += TEST_LOGS=3
e2evvvv: e2ev
e2e-bench: TEST_FLAGS = -bench=. -benchmem -run=^$
e2e-bench: e2e
DOCKER_BIN = build/linux-amd64/nebula build/linux-amd64/nebula-cert
all: $(ALL:%=build/%/nebula) $(ALL:%=build/%/nebula-cert)
docker: docker/linux-$(shell go env GOARCH)
release: $(ALL:%=build/nebula-%.tar.gz)
release-linux: $(ALL_LINUX:%=build/nebula-%.tar.gz)
release-freebsd: build/nebula-freebsd-amd64.tar.gz
release-freebsd: $(ALL_FREEBSD:%=build/nebula-%.tar.gz)
release-openbsd: $(ALL_OPENBSD:%=build/nebula-%.tar.gz)
release-netbsd: $(ALL_NETBSD:%=build/nebula-%.tar.gz)
release-boringcrypto: build/nebula-linux-$(shell go env GOARCH)-boringcrypto.tar.gz
BUILD_ARGS = -trimpath
bin-windows: build/windows-amd64/nebula.exe build/windows-amd64/nebula-cert.exe
mv $? .
bin-windows-arm64: build/windows-arm64/nebula.exe build/windows-arm64/nebula-cert.exe
mv $? .
bin-darwin: build/darwin-amd64/nebula build/darwin-amd64/nebula-cert
mv $? .
bin-freebsd: build/freebsd-amd64/nebula build/freebsd-amd64/nebula-cert
mv $? .
bin-freebsd-arm64: build/freebsd-arm64/nebula build/freebsd-arm64/nebula-cert
mv $? .
bin-boringcrypto: build/linux-$(shell go env GOARCH)-boringcrypto/nebula build/linux-$(shell go env GOARCH)-boringcrypto/nebula-cert
mv $? .
bin:
go build $(BUILD_ARGS) -ldflags "$(LDFLAGS)" -o ./nebula${NEBULA_CMD_SUFFIX} ${NEBULA_CMD_PATH}
go build $(BUILD_ARGS) -ldflags "$(LDFLAGS)" -o ./nebula-cert${NEBULA_CMD_SUFFIX} ./cmd/nebula-cert
@@ -96,6 +130,12 @@ build/linux-mips-%: GOENV += GOMIPS=$(word 3, $(subst -, ,$*))
# Build an extra small binary for mips-softfloat
build/linux-mips-softfloat/%: LDFLAGS += -s -w
# boringcrypto
build/linux-amd64-boringcrypto/%: GOENV += GOEXPERIMENT=boringcrypto CGO_ENABLED=1
build/linux-arm64-boringcrypto/%: GOENV += GOEXPERIMENT=boringcrypto CGO_ENABLED=1
build/linux-amd64-boringcrypto/%: LDFLAGS += -checklinkname=0
build/linux-arm64-boringcrypto/%: LDFLAGS += -checklinkname=0
build/%/nebula: .FORCE
GOOS=$(firstword $(subst -, , $*)) \
GOARCH=$(word 2, $(subst -, ,$*)) $(GOENV) \
@@ -118,16 +158,28 @@ build/nebula-%.tar.gz: build/%/nebula build/%/nebula-cert
build/nebula-%.zip: build/%/nebula.exe build/%/nebula-cert.exe
cd build/$* && zip ../nebula-$*.zip nebula.exe nebula-cert.exe
docker/%: build/%/nebula build/%/nebula-cert
docker build . $(DOCKER_BUILD_ARGS) -f docker/Dockerfile --platform "$(subst -,/,$*)" --tag "${DOCKER_IMAGE_REPO}:${DOCKER_IMAGE_TAG}" --tag "${DOCKER_IMAGE_REPO}:$(BUILD_NUMBER)"
vet:
go vet -v ./...
go vet $(VET_FLAGS) -v ./...
test:
go test -v ./...
test-boringcrypto:
GOEXPERIMENT=boringcrypto CGO_ENABLED=1 go test -v ./...
test-cov-html:
go test -coverprofile=coverage.out
go tool cover -html=coverage.out
build-test-mobile:
GOARCH=amd64 GOOS=ios go build $(shell go list ./... | grep -v '/cmd/\|/examples/')
GOARCH=arm64 GOOS=ios go build $(shell go list ./... | grep -v '/cmd/\|/examples/')
GOARCH=amd64 GOOS=android go build $(shell go list ./... | grep -v '/cmd/\|/examples/')
GOARCH=arm64 GOOS=android go build $(shell go list ./... | grep -v '/cmd/\|/examples/')
bench:
go test -bench=.
@@ -161,10 +213,21 @@ bin-docker: bin build/linux-amd64/nebula build/linux-amd64/nebula-cert
smoke-docker: bin-docker
cd .github/workflows/smoke/ && ./build.sh
cd .github/workflows/smoke/ && ./smoke.sh
cd .github/workflows/smoke/ && NAME="smoke-p256" CURVE="P256" ./build.sh
cd .github/workflows/smoke/ && NAME="smoke-p256" ./smoke.sh
smoke-relay-docker: bin-docker
cd .github/workflows/smoke/ && ./build-relay.sh
cd .github/workflows/smoke/ && ./smoke-relay.sh
smoke-docker-race: BUILD_ARGS = -race
smoke-docker-race: CGO_ENABLED = 1
smoke-docker-race: smoke-docker
smoke-vagrant/%: bin-docker build/%/nebula
cd .github/workflows/smoke/ && ./build.sh $*
cd .github/workflows/smoke/ && ./smoke-vagrant.sh $*
.FORCE:
.PHONY: e2e e2ev e2evv e2evvv e2evvvv test test-cov-html bench bench-cpu bench-cpu-long bin proto release service smoke-docker smoke-docker-race
.PHONY: bench bench-cpu bench-cpu-long bin build-test-mobile e2e e2ev e2evv e2evvv e2evvvv proto release service smoke-docker smoke-docker-race test test-cov-html smoke-vagrant/%
.DEFAULT_GOAL := bin

View File

@@ -8,7 +8,7 @@ and tunneling, and each of those individual pieces existed before Nebula in vari
What makes Nebula different to existing offerings is that it brings all of these ideas together,
resulting in a sum that is greater than its individual parts.
Further documentation can be found [here](https://www.defined.net/nebula/introduction/).
Further documentation can be found [here](https://nebula.defined.net/docs/).
You can read more about Nebula [here](https://medium.com/p/884110a5579).
@@ -27,16 +27,36 @@ Check the [releases](https://github.com/slackhq/nebula/releases/latest) page for
#### Distribution Packages
- [Arch Linux](https://archlinux.org/packages/community/x86_64/nebula/)
- [Arch Linux](https://archlinux.org/packages/extra/x86_64/nebula/)
```
$ sudo pacman -S nebula
```
- [Fedora Linux](https://copr.fedorainfracloud.org/coprs/jdoss/nebula/)
- [Fedora Linux](https://src.fedoraproject.org/rpms/nebula)
```
$ sudo dnf copr enable jdoss/nebula
$ sudo dnf install nebula
```
- [Debian Linux](https://packages.debian.org/source/stable/nebula)
```
$ sudo apt install nebula
```
- [Alpine Linux](https://pkgs.alpinelinux.org/packages?name=nebula)
```
$ sudo apk add nebula
```
- [macOS Homebrew](https://github.com/Homebrew/homebrew-core/blob/HEAD/Formula/nebula.rb)
```
$ brew install nebula
```
- [Docker](https://hub.docker.com/r/nebulaoss/nebula)
```
$ docker pull nebulaoss/nebula
```
#### Mobile
- [iOS](https://apps.apple.com/us/app/mobile-nebula/id1509587936?itsct=apps_box&amp;itscg=30200)
@@ -50,9 +70,9 @@ Nebula's user-defined groups allow for provider agnostic traffic filtering betwe
Discovery nodes allow individual peers to find each other and optionally use UDP hole punching to establish connections from behind most firewalls or NATs.
Users can move data between nodes in any number of cloud service providers, datacenters, and endpoints, without needing to maintain a particular addressing scheme.
Nebula uses elliptic curve Diffie-Hellman key exchange, and AES-256-GCM in its default configuration.
Nebula uses Elliptic-curve Diffie-Hellman (`ECDH`) key exchange and `AES-256-GCM` in its default configuration.
Nebula was created to provide a mechanism for groups hosts to communicate securely, even across the internet, while enabling expressive firewall definitions similar in style to cloud security groups.
Nebula was created to provide a mechanism for groups of hosts to communicate securely, even across the internet, while enabling expressive firewall definitions similar in style to cloud security groups.
## Getting started (quickly)
@@ -93,18 +113,18 @@ Download a copy of the nebula [example configuration](https://github.com/slackhq
#### 6. Copy nebula credentials, configuration, and binaries to each host
For each host, copy the nebula binary to the host, along with `config.yaml` from step 5, and the files `ca.crt`, `{host}.crt`, and `{host}.key` from step 4.
For each host, copy the nebula binary to the host, along with `config.yml` from step 5, and the files `ca.crt`, `{host}.crt`, and `{host}.key` from step 4.
**DO NOT COPY `ca.key` TO INDIVIDUAL NODES.**
#### 7. Run nebula on each host
```
./nebula -config /path/to/config.yaml
./nebula -config /path/to/config.yml
```
## Building Nebula from source
Download go and clone this repo. Change to the nebula directory.
Make sure you have [go](https://go.dev/doc/install) installed and clone this repo. Change to the nebula directory.
To build nebula for all platforms:
`make all`
@@ -114,6 +134,17 @@ To build nebula for a specific platform (ex, Windows):
See the [Makefile](Makefile) for more details on build targets
## Curve P256 and BoringCrypto
The default curve used for cryptographic handshakes and signatures is Curve25519. This is the recommended setting for most users. If your deployment has certain compliance requirements, you have the option of creating your CA using `nebula-cert ca -curve P256` to use NIST Curve P256. The CA will then sign certificates using ECDSA P256, and any hosts using these certificates will use P256 for ECDH handshakes.
In addition, Nebula can be built using the [BoringCrypto GOEXPERIMENT](https://github.com/golang/go/blob/go1.20/src/crypto/internal/boring/README.md) by running either of the following make targets:
make bin-boringcrypto
make release-boringcrypto
This is not the recommended default deployment, but may be useful based on your compliance requirements.
## Credits
Nebula was created at Slack Technologies, Inc by Nate Brown and Ryan Huber, with contributions from Oliver Fross, Alan Lam, Wade Simmons, and Lining Wang.

12
SECURITY.md Normal file
View File

@@ -0,0 +1,12 @@
Security Policy
===============
Reporting a Vulnerability
-------------------------
If you believe you have found a security vulnerability with Nebula, please let
us know right away. We will investigate all reports and do our best to quickly
fix valid issues.
You can submit your report on [HackerOne](https://hackerone.com/slack) and our
security team will respond as soon as possible.

View File

@@ -2,17 +2,16 @@ package nebula
import (
"fmt"
"net"
"net/netip"
"regexp"
"github.com/slackhq/nebula/cidr"
"github.com/gaissmai/bart"
"github.com/slackhq/nebula/config"
"github.com/slackhq/nebula/iputil"
)
type AllowList struct {
// The values of this cidrTree are `bool`, signifying allow/deny
cidrTree *cidr.Tree6
cidrTree *bart.Table[bool]
}
type RemoteAllowList struct {
@@ -20,7 +19,7 @@ type RemoteAllowList struct {
// Inside Range Specific, keys of this tree are inside CIDRs and values
// are *AllowList
insideAllowLists *cidr.Tree6
insideAllowLists *bart.Table[*AllowList]
}
type LocalAllowList struct {
@@ -88,7 +87,7 @@ func newAllowList(k string, raw interface{}, handleKey func(key string, value in
return nil, fmt.Errorf("config `%s` has invalid type: %T", k, raw)
}
tree := cidr.NewTree6()
tree := new(bart.Table[bool])
// Keep track of the rules we have added for both ipv4 and ipv6
type allowListRules struct {
@@ -122,18 +121,20 @@ func newAllowList(k string, raw interface{}, handleKey func(key string, value in
return nil, fmt.Errorf("config `%s` has invalid value (type %T): %v", k, rawValue, rawValue)
}
_, ipNet, err := net.ParseCIDR(rawCIDR)
ipNet, err := netip.ParsePrefix(rawCIDR)
if err != nil {
return nil, fmt.Errorf("config `%s` has invalid CIDR: %s", k, rawCIDR)
return nil, fmt.Errorf("config `%s` has invalid CIDR: %s. %w", k, rawCIDR, err)
}
// TODO: should we error on duplicate CIDRs in the config?
tree.AddCIDR(ipNet, value)
ipNet = netip.PrefixFrom(ipNet.Addr().Unmap(), ipNet.Bits())
maskBits, maskSize := ipNet.Mask.Size()
// TODO: should we error on duplicate CIDRs in the config?
tree.Insert(ipNet, value)
maskBits := ipNet.Bits()
var rules *allowListRules
if maskSize == 32 {
if ipNet.Addr().Is4() {
rules = &rules4
} else {
rules = &rules6
@@ -156,8 +157,7 @@ func newAllowList(k string, raw interface{}, handleKey func(key string, value in
if !rules4.defaultSet {
if rules4.allValuesMatch {
_, zeroCIDR, _ := net.ParseCIDR("0.0.0.0/0")
tree.AddCIDR(zeroCIDR, !rules4.allValues)
tree.Insert(netip.PrefixFrom(netip.IPv4Unspecified(), 0), !rules4.allValues)
} else {
return nil, fmt.Errorf("config `%s` contains both true and false rules, but no default set for 0.0.0.0/0", k)
}
@@ -165,8 +165,7 @@ func newAllowList(k string, raw interface{}, handleKey func(key string, value in
if !rules6.defaultSet {
if rules6.allValuesMatch {
_, zeroCIDR, _ := net.ParseCIDR("::/0")
tree.AddCIDR(zeroCIDR, !rules6.allValues)
tree.Insert(netip.PrefixFrom(netip.IPv6Unspecified(), 0), !rules6.allValues)
} else {
return nil, fmt.Errorf("config `%s` contains both true and false rules, but no default set for ::/0", k)
}
@@ -218,13 +217,13 @@ func getAllowListInterfaces(k string, v interface{}) ([]AllowListNameRule, error
return nameRules, nil
}
func getRemoteAllowRanges(c *config.C, k string) (*cidr.Tree6, error) {
func getRemoteAllowRanges(c *config.C, k string) (*bart.Table[*AllowList], error) {
value := c.Get(k)
if value == nil {
return nil, nil
}
remoteAllowRanges := cidr.NewTree6()
remoteAllowRanges := new(bart.Table[*AllowList])
rawMap, ok := value.(map[interface{}]interface{})
if !ok {
@@ -241,60 +240,27 @@ func getRemoteAllowRanges(c *config.C, k string) (*cidr.Tree6, error) {
return nil, err
}
_, ipNet, err := net.ParseCIDR(rawCIDR)
ipNet, err := netip.ParsePrefix(rawCIDR)
if err != nil {
return nil, fmt.Errorf("config `%s` has invalid CIDR: %s", k, rawCIDR)
return nil, fmt.Errorf("config `%s` has invalid CIDR: %s. %w", k, rawCIDR, err)
}
remoteAllowRanges.AddCIDR(ipNet, allowList)
remoteAllowRanges.Insert(netip.PrefixFrom(ipNet.Addr().Unmap(), ipNet.Bits()), allowList)
}
return remoteAllowRanges, nil
}
func (al *AllowList) Allow(ip net.IP) bool {
func (al *AllowList) Allow(ip netip.Addr) bool {
if al == nil {
return true
}
result := al.cidrTree.MostSpecificContains(ip)
switch v := result.(type) {
case bool:
return v
default:
panic(fmt.Errorf("invalid state, allowlist returned: %T %v", result, result))
}
result, _ := al.cidrTree.Lookup(ip)
return result
}
func (al *AllowList) AllowIpV4(ip iputil.VpnIp) bool {
if al == nil {
return true
}
result := al.cidrTree.MostSpecificContainsIpV4(ip)
switch v := result.(type) {
case bool:
return v
default:
panic(fmt.Errorf("invalid state, allowlist returned: %T %v", result, result))
}
}
func (al *AllowList) AllowIpV6(hi, lo uint64) bool {
if al == nil {
return true
}
result := al.cidrTree.MostSpecificContainsIpV6(hi, lo)
switch v := result.(type) {
case bool:
return v
default:
panic(fmt.Errorf("invalid state, allowlist returned: %T %v", result, result))
}
}
func (al *LocalAllowList) Allow(ip net.IP) bool {
func (al *LocalAllowList) Allow(ip netip.Addr) bool {
if al == nil {
return true
}
@@ -316,45 +282,25 @@ func (al *LocalAllowList) AllowName(name string) bool {
return !al.nameRules[0].Allow
}
func (al *RemoteAllowList) AllowUnknownVpnIp(ip net.IP) bool {
func (al *RemoteAllowList) AllowUnknownVpnIp(ip netip.Addr) bool {
if al == nil {
return true
}
return al.AllowList.Allow(ip)
}
func (al *RemoteAllowList) Allow(vpnIp iputil.VpnIp, ip net.IP) bool {
func (al *RemoteAllowList) Allow(vpnIp netip.Addr, ip netip.Addr) bool {
if !al.getInsideAllowList(vpnIp).Allow(ip) {
return false
}
return al.AllowList.Allow(ip)
}
func (al *RemoteAllowList) AllowIpV4(vpnIp iputil.VpnIp, ip iputil.VpnIp) bool {
if al == nil {
return true
}
if !al.getInsideAllowList(vpnIp).AllowIpV4(ip) {
return false
}
return al.AllowList.AllowIpV4(ip)
}
func (al *RemoteAllowList) AllowIpV6(vpnIp iputil.VpnIp, hi, lo uint64) bool {
if al == nil {
return true
}
if !al.getInsideAllowList(vpnIp).AllowIpV6(hi, lo) {
return false
}
return al.AllowList.AllowIpV6(hi, lo)
}
func (al *RemoteAllowList) getInsideAllowList(vpnIp iputil.VpnIp) *AllowList {
func (al *RemoteAllowList) getInsideAllowList(vpnIp netip.Addr) *AllowList {
if al.insideAllowLists != nil {
inside := al.insideAllowLists.MostSpecificContainsIpV4(vpnIp)
if inside != nil {
return inside.(*AllowList)
inside, ok := al.insideAllowLists.Lookup(vpnIp)
if ok {
return inside
}
}
return nil
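A minimal standalone sketch of the `bart.Table` semantics relied on above: `Lookup` performs a longest-prefix match, and a miss returns the zero value (here `false`, i.e. deny). Only calls that already appear in this diff are assumed:
```
package main

import (
	"fmt"
	"net/netip"

	"github.com/gaissmai/bart"
)

func main() {
	tree := new(bart.Table[bool])
	tree.Insert(netip.MustParsePrefix("0.0.0.0/0"), true)
	tree.Insert(netip.MustParsePrefix("10.0.0.0/8"), false)
	tree.Insert(netip.MustParsePrefix("10.42.42.42/32"), true)

	for _, s := range []string{"1.1.1.1", "10.0.0.4", "10.42.42.42", "fd00::1"} {
		allowed, ok := tree.Lookup(netip.MustParseAddr(s))
		// The most specific matching prefix wins; a miss (ok == false)
		// leaves allowed at its zero value, false.
		fmt.Println(s, allowed, ok)
	}
}
```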

View File

@@ -1,24 +1,24 @@
package nebula
import (
"net"
"net/netip"
"regexp"
"testing"
"github.com/slackhq/nebula/cidr"
"github.com/gaissmai/bart"
"github.com/slackhq/nebula/config"
"github.com/slackhq/nebula/util"
"github.com/slackhq/nebula/test"
"github.com/stretchr/testify/assert"
)
func TestNewAllowListFromConfig(t *testing.T) {
l := util.NewTestLogger()
l := test.NewLogger()
c := config.NewC(l)
c.Settings["allowlist"] = map[interface{}]interface{}{
"192.168.0.0": true,
}
r, err := newAllowListFromConfig(c, "allowlist", nil)
assert.EqualError(t, err, "config `allowlist` has invalid CIDR: 192.168.0.0")
assert.EqualError(t, err, "config `allowlist` has invalid CIDR: 192.168.0.0. netip.ParsePrefix(\"192.168.0.0\"): no '/'")
assert.Nil(t, r)
c.Settings["allowlist"] = map[interface{}]interface{}{
@@ -98,26 +98,26 @@ func TestNewAllowListFromConfig(t *testing.T) {
}
func TestAllowList_Allow(t *testing.T) {
assert.Equal(t, true, ((*AllowList)(nil)).Allow(net.ParseIP("1.1.1.1")))
assert.Equal(t, true, ((*AllowList)(nil)).Allow(netip.MustParseAddr("1.1.1.1")))
tree := cidr.NewTree6()
tree.AddCIDR(cidr.Parse("0.0.0.0/0"), true)
tree.AddCIDR(cidr.Parse("10.0.0.0/8"), false)
tree.AddCIDR(cidr.Parse("10.42.42.42/32"), true)
tree.AddCIDR(cidr.Parse("10.42.0.0/16"), true)
tree.AddCIDR(cidr.Parse("10.42.42.0/24"), true)
tree.AddCIDR(cidr.Parse("10.42.42.0/24"), false)
tree.AddCIDR(cidr.Parse("::1/128"), true)
tree.AddCIDR(cidr.Parse("::2/128"), false)
tree := new(bart.Table[bool])
tree.Insert(netip.MustParsePrefix("0.0.0.0/0"), true)
tree.Insert(netip.MustParsePrefix("10.0.0.0/8"), false)
tree.Insert(netip.MustParsePrefix("10.42.42.42/32"), true)
tree.Insert(netip.MustParsePrefix("10.42.0.0/16"), true)
tree.Insert(netip.MustParsePrefix("10.42.42.0/24"), true)
tree.Insert(netip.MustParsePrefix("10.42.42.0/24"), false)
tree.Insert(netip.MustParsePrefix("::1/128"), true)
tree.Insert(netip.MustParsePrefix("::2/128"), false)
al := &AllowList{cidrTree: tree}
assert.Equal(t, true, al.Allow(net.ParseIP("1.1.1.1")))
assert.Equal(t, false, al.Allow(net.ParseIP("10.0.0.4")))
assert.Equal(t, true, al.Allow(net.ParseIP("10.42.42.42")))
assert.Equal(t, false, al.Allow(net.ParseIP("10.42.42.41")))
assert.Equal(t, true, al.Allow(net.ParseIP("10.42.0.1")))
assert.Equal(t, true, al.Allow(net.ParseIP("::1")))
assert.Equal(t, false, al.Allow(net.ParseIP("::2")))
assert.Equal(t, true, al.Allow(netip.MustParseAddr("1.1.1.1")))
assert.Equal(t, false, al.Allow(netip.MustParseAddr("10.0.0.4")))
assert.Equal(t, true, al.Allow(netip.MustParseAddr("10.42.42.42")))
assert.Equal(t, false, al.Allow(netip.MustParseAddr("10.42.42.41")))
assert.Equal(t, true, al.Allow(netip.MustParseAddr("10.42.0.1")))
assert.Equal(t, true, al.Allow(netip.MustParseAddr("::1")))
assert.Equal(t, false, al.Allow(netip.MustParseAddr("::2")))
}
func TestLocalAllowList_AllowName(t *testing.T) {

View File

@@ -3,12 +3,12 @@ package nebula
import (
"testing"
"github.com/slackhq/nebula/util"
"github.com/slackhq/nebula/test"
"github.com/stretchr/testify/assert"
)
func TestBits(t *testing.T) {
l := util.NewTestLogger()
l := test.NewLogger()
b := NewBits(10)
// make sure it is the right size
@@ -76,7 +76,7 @@ func TestBits(t *testing.T) {
}
func TestBitsDupeCounter(t *testing.T) {
l := util.NewTestLogger()
l := test.NewLogger()
b := NewBits(10)
b.lostCounter.Clear()
b.dupeCounter.Clear()
@@ -101,7 +101,7 @@ func TestBitsDupeCounter(t *testing.T) {
}
func TestBitsOutOfWindowCounter(t *testing.T) {
l := util.NewTestLogger()
l := test.NewLogger()
b := NewBits(10)
b.lostCounter.Clear()
b.dupeCounter.Clear()
@@ -131,7 +131,7 @@ func TestBitsOutOfWindowCounter(t *testing.T) {
}
func TestBitsLostCounter(t *testing.T) {
l := util.NewTestLogger()
l := test.NewLogger()
b := NewBits(10)
b.lostCounter.Clear()
b.dupeCounter.Clear()

8
boring.go Normal file
View File

@@ -0,0 +1,8 @@
//go:build boringcrypto
// +build boringcrypto
package nebula
import "crypto/boring"
var boringEnabled = boring.Enabled
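A file gated on the `boringcrypto` build tag like this one typically pairs with a counterpart that compiles when the tag is absent, since `crypto/boring` is only available under `GOEXPERIMENT=boringcrypto`. A hypothetical sketch of such a counterpart (the file name and exact shape are assumptions, not shown in this diff):
```
//go:build !boringcrypto
// +build !boringcrypto

package nebula

// Hypothetical counterpart: without the boringcrypto experiment there is no
// crypto/boring package to query, so report the backend as disabled.
func boringEnabled() bool { return false }
```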

159
calculated_remote.go Normal file
View File

@@ -0,0 +1,159 @@
package nebula
import (
"encoding/binary"
"fmt"
"math"
"net"
"net/netip"
"strconv"
"github.com/gaissmai/bart"
"github.com/slackhq/nebula/config"
)
// This allows us to "guess" what the remote might be for a host while we wait
// for the lighthouse response. See "lighthouse.calculated_remotes" in the
// example config file.
type calculatedRemote struct {
ipNet netip.Prefix
mask netip.Prefix
port uint32
}
func newCalculatedRemote(maskCidr netip.Prefix, port int) (*calculatedRemote, error) {
masked := maskCidr.Masked()
if port < 0 || port > math.MaxUint16 {
return nil, fmt.Errorf("invalid port: %d", port)
}
return &calculatedRemote{
ipNet: maskCidr,
mask: masked,
port: uint32(port),
}, nil
}
func (c *calculatedRemote) String() string {
return fmt.Sprintf("CalculatedRemote(mask=%v port=%d)", c.ipNet, c.port)
}
func (c *calculatedRemote) Apply(ip netip.Addr) *Ip4AndPort {
// Combine the masked bytes of the "mask" IP with the unmasked bytes
// of the overlay IP
if c.ipNet.Addr().Is4() {
return c.apply4(ip)
}
return c.apply6(ip)
}
func (c *calculatedRemote) apply4(ip netip.Addr) *Ip4AndPort {
//TODO: IPV6-WORK this can be less crappy
maskb := net.CIDRMask(c.mask.Bits(), c.mask.Addr().BitLen())
mask := binary.BigEndian.Uint32(maskb[:])
b := c.mask.Addr().As4()
maskIp := binary.BigEndian.Uint32(b[:])
b = ip.As4()
intIp := binary.BigEndian.Uint32(b[:])
return &Ip4AndPort{(maskIp & mask) | (intIp & ^mask), c.port}
}
func (c *calculatedRemote) apply6(ip netip.Addr) *Ip4AndPort {
//TODO: IPV6-WORK
panic("Can not calculate ipv6 remote addresses")
}
func NewCalculatedRemotesFromConfig(c *config.C, k string) (*bart.Table[[]*calculatedRemote], error) {
value := c.Get(k)
if value == nil {
return nil, nil
}
calculatedRemotes := new(bart.Table[[]*calculatedRemote])
rawMap, ok := value.(map[any]any)
if !ok {
return nil, fmt.Errorf("config `%s` has invalid type: %T", k, value)
}
for rawKey, rawValue := range rawMap {
rawCIDR, ok := rawKey.(string)
if !ok {
return nil, fmt.Errorf("config `%s` has invalid key (type %T): %v", k, rawKey, rawKey)
}
cidr, err := netip.ParsePrefix(rawCIDR)
if err != nil {
return nil, fmt.Errorf("config `%s` has invalid CIDR: %s", k, rawCIDR)
}
//TODO: IPV6-WORK this does not verify that rawValue contains the same bits as cidr here
entry, err := newCalculatedRemotesListFromConfig(rawValue)
if err != nil {
return nil, fmt.Errorf("config '%s.%s': %w", k, rawCIDR, err)
}
calculatedRemotes.Insert(cidr, entry)
}
return calculatedRemotes, nil
}
func newCalculatedRemotesListFromConfig(raw any) ([]*calculatedRemote, error) {
rawList, ok := raw.([]any)
if !ok {
return nil, fmt.Errorf("calculated_remotes entry has invalid type: %T", raw)
}
var l []*calculatedRemote
for _, e := range rawList {
c, err := newCalculatedRemotesEntryFromConfig(e)
if err != nil {
return nil, fmt.Errorf("calculated_remotes entry: %w", err)
}
l = append(l, c)
}
return l, nil
}
func newCalculatedRemotesEntryFromConfig(raw any) (*calculatedRemote, error) {
rawMap, ok := raw.(map[any]any)
if !ok {
return nil, fmt.Errorf("invalid type: %T", raw)
}
rawValue := rawMap["mask"]
if rawValue == nil {
return nil, fmt.Errorf("missing mask: %v", rawMap)
}
rawMask, ok := rawValue.(string)
if !ok {
return nil, fmt.Errorf("invalid mask (type %T): %v", rawValue, rawValue)
}
maskCidr, err := netip.ParsePrefix(rawMask)
if err != nil {
return nil, fmt.Errorf("invalid mask: %s", rawMask)
}
var port int
rawValue = rawMap["port"]
if rawValue == nil {
return nil, fmt.Errorf("missing port: %v", rawMap)
}
switch v := rawValue.(type) {
case int:
port = v
case string:
port, err = strconv.Atoi(v)
if err != nil {
return nil, fmt.Errorf("invalid port: %s: %w", v, err)
}
default:
return nil, fmt.Errorf("invalid port (type %T): %v", rawValue, rawValue)
}
return newCalculatedRemote(maskCidr, port)
}
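A minimal standalone sketch of the `apply4` arithmetic above, using only the standard library: the network bits come from the configured mask prefix and the host bits from the overlay address. The values mirror the test that follows and are assumptions chosen for illustration:
```
package main

import (
	"encoding/binary"
	"fmt"
	"net/netip"
)

func main() {
	maskPrefix := netip.MustParsePrefix("192.168.1.0/24")
	overlay := netip.MustParseAddr("10.0.10.182")

	// Build a 32-bit netmask from the prefix length (/24 -> 0xffffff00).
	mask := uint32(0xffffffff) << (32 - maskPrefix.Bits())

	m := maskPrefix.Masked().Addr().As4()
	o := overlay.As4()
	maskIP := binary.BigEndian.Uint32(m[:])
	overlayIP := binary.BigEndian.Uint32(o[:])

	// Masked bits of the "mask" IP combined with the unmasked bits of the overlay IP.
	combined := (maskIP & mask) | (overlayIP &^ mask)

	var out [4]byte
	binary.BigEndian.PutUint32(out[:], combined)
	fmt.Println(netip.AddrFrom4(out)) // 192.168.1.182
}
```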

25
calculated_remote_test.go Normal file
View File

@@ -0,0 +1,25 @@
package nebula
import (
"net/netip"
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestCalculatedRemoteApply(t *testing.T) {
ipNet, err := netip.ParsePrefix("192.168.1.0/24")
require.NoError(t, err)
c, err := newCalculatedRemote(ipNet, 4242)
require.NoError(t, err)
input, err := netip.ParseAddr("10.0.10.182")
assert.NoError(t, err)
expected, err := netip.ParseAddr("192.168.1.182")
assert.NoError(t, err)
assert.Equal(t, NewIp4AndPortFromNetIP(expected, 4242), c.Apply(input))
}

165
cert.go
View File

@@ -1,165 +0,0 @@
package nebula
import (
"errors"
"fmt"
"io/ioutil"
"strings"
"time"
"github.com/sirupsen/logrus"
"github.com/slackhq/nebula/cert"
"github.com/slackhq/nebula/config"
)
type CertState struct {
certificate *cert.NebulaCertificate
rawCertificate []byte
rawCertificateNoKey []byte
publicKey []byte
privateKey []byte
}
func NewCertState(certificate *cert.NebulaCertificate, privateKey []byte) (*CertState, error) {
// Marshal the certificate to ensure it is valid
rawCertificate, err := certificate.Marshal()
if err != nil {
return nil, fmt.Errorf("invalid nebula certificate on interface: %s", err)
}
publicKey := certificate.Details.PublicKey
cs := &CertState{
rawCertificate: rawCertificate,
certificate: certificate, // PublicKey has been set to nil above
privateKey: privateKey,
publicKey: publicKey,
}
cs.certificate.Details.PublicKey = nil
rawCertNoKey, err := cs.certificate.Marshal()
if err != nil {
return nil, fmt.Errorf("error marshalling certificate no key: %s", err)
}
cs.rawCertificateNoKey = rawCertNoKey
// put public key back
cs.certificate.Details.PublicKey = cs.publicKey
return cs, nil
}
func NewCertStateFromConfig(c *config.C) (*CertState, error) {
var pemPrivateKey []byte
var err error
privPathOrPEM := c.GetString("pki.key", "")
if privPathOrPEM == "" {
// Support backwards compat with the old x509
//TODO: remove after this is rolled out everywhere - NB 2018/02/23
privPathOrPEM = c.GetString("x509.key", "")
}
if privPathOrPEM == "" {
return nil, errors.New("no pki.key path or PEM data provided")
}
if strings.Contains(privPathOrPEM, "-----BEGIN") {
pemPrivateKey = []byte(privPathOrPEM)
privPathOrPEM = "<inline>"
} else {
pemPrivateKey, err = ioutil.ReadFile(privPathOrPEM)
if err != nil {
return nil, fmt.Errorf("unable to read pki.key file %s: %s", privPathOrPEM, err)
}
}
rawKey, _, err := cert.UnmarshalX25519PrivateKey(pemPrivateKey)
if err != nil {
return nil, fmt.Errorf("error while unmarshaling pki.key %s: %s", privPathOrPEM, err)
}
var rawCert []byte
pubPathOrPEM := c.GetString("pki.cert", "")
if pubPathOrPEM == "" {
// Support backwards compat with the old x509
//TODO: remove after this is rolled out everywhere - NB 2018/02/23
pubPathOrPEM = c.GetString("x509.cert", "")
}
if pubPathOrPEM == "" {
return nil, errors.New("no pki.cert path or PEM data provided")
}
if strings.Contains(pubPathOrPEM, "-----BEGIN") {
rawCert = []byte(pubPathOrPEM)
pubPathOrPEM = "<inline>"
} else {
rawCert, err = ioutil.ReadFile(pubPathOrPEM)
if err != nil {
return nil, fmt.Errorf("unable to read pki.cert file %s: %s", pubPathOrPEM, err)
}
}
nebulaCert, _, err := cert.UnmarshalNebulaCertificateFromPEM(rawCert)
if err != nil {
return nil, fmt.Errorf("error while unmarshaling pki.cert %s: %s", pubPathOrPEM, err)
}
if nebulaCert.Expired(time.Now()) {
return nil, fmt.Errorf("nebula certificate for this host is expired")
}
if len(nebulaCert.Details.Ips) == 0 {
return nil, fmt.Errorf("no IPs encoded in certificate")
}
if err = nebulaCert.VerifyPrivateKey(rawKey); err != nil {
return nil, fmt.Errorf("private key is not a pair with public key in nebula cert")
}
return NewCertState(nebulaCert, rawKey)
}
func loadCAFromConfig(l *logrus.Logger, c *config.C) (*cert.NebulaCAPool, error) {
var rawCA []byte
var err error
caPathOrPEM := c.GetString("pki.ca", "")
if caPathOrPEM == "" {
// Support backwards compat with the old x509
//TODO: remove after this is rolled out everywhere - NB 2018/02/23
caPathOrPEM = c.GetString("x509.ca", "")
}
if caPathOrPEM == "" {
return nil, errors.New("no pki.ca path or PEM data provided")
}
if strings.Contains(caPathOrPEM, "-----BEGIN") {
rawCA = []byte(caPathOrPEM)
caPathOrPEM = "<inline>"
} else {
rawCA, err = ioutil.ReadFile(caPathOrPEM)
if err != nil {
return nil, fmt.Errorf("unable to read pki.ca file %s: %s", caPathOrPEM, err)
}
}
CAs, err := cert.NewCAPoolFromBytes(rawCA)
if err != nil {
return nil, fmt.Errorf("error while adding CA certificate to CA trust store: %s", err)
}
for _, fp := range c.GetStringSlice("pki.blocklist", []string{}) {
l.WithField("fingerprint", fp).Infof("Blocklisting cert")
CAs.BlocklistFingerprint(fp)
}
// Support deprecated config for at least one minor release to allow for migrations
for _, fp := range c.GetStringSlice("pki.blacklist", []string{}) {
l.WithField("fingerprint", fp).Infof("Blocklisting cert")
l.Warn("pki.blacklist is deprecated and will not be supported in a future release. Please migrate your config to use pki.blocklist")
CAs.BlocklistFingerprint(fp)
}
return CAs, nil
}

View File

@@ -1,6 +1,7 @@
package cert
import (
"errors"
"fmt"
"strings"
"time"
@@ -21,19 +22,32 @@ func NewCAPool() *NebulaCAPool {
return &ca
}
// NewCAPoolFromBytes will create a new CA pool from the provided
// input bytes, which must be a PEM-encoded set of nebula certificates.
// If the pool contains any expired certificates, an ErrExpired will be
// returned along with the pool. The caller must handle any such errors.
func NewCAPoolFromBytes(caPEMs []byte) (*NebulaCAPool, error) {
pool := NewCAPool()
var err error
var expired bool
for {
caPEMs, err = pool.AddCACertificate(caPEMs)
if errors.Is(err, ErrExpired) {
expired = true
err = nil
}
if err != nil {
return nil, err
}
if caPEMs == nil || len(caPEMs) == 0 || strings.TrimSpace(string(caPEMs)) == "" {
if len(caPEMs) == 0 || strings.TrimSpace(string(caPEMs)) == "" {
break
}
}
if expired {
return pool, ErrExpired
}
return pool, nil
}
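
A sketch of how a caller might honor the contract documented on NewCAPoolFromBytes: when ErrExpired is returned, the pool is still populated, so the condition can be surfaced as a warning rather than a fatal error. The file name here is hypothetical.

package main

import (
	"errors"
	"log"
	"os"

	"github.com/slackhq/nebula/cert"
)

func main() {
	rawCA, err := os.ReadFile("ca.crt") // hypothetical path
	if err != nil {
		log.Fatal(err)
	}
	pool, err := cert.NewCAPoolFromBytes(rawCA)
	if errors.Is(err, cert.ErrExpired) {
		// The pool is still returned; expired CAs were added anyway,
		// so this can be logged instead of aborting startup.
		log.Println("warning: pki.ca contains one or more expired CA certificates")
	} else if err != nil {
		log.Fatal(err)
	}
	log.Printf("loaded %d CA certificate(s)", len(pool.CAs))
}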
@@ -47,15 +61,11 @@ func (ncp *NebulaCAPool) AddCACertificate(pemBytes []byte) ([]byte, error) {
}
if !c.Details.IsCA {
return pemBytes, fmt.Errorf("provided certificate was not a CA; %s", c.Details.Name)
return pemBytes, fmt.Errorf("%s: %w", c.Details.Name, ErrNotCA)
}
if !c.CheckSignature(c.Details.PublicKey) {
return pemBytes, fmt.Errorf("provided certificate was not self signed; %s", c.Details.Name)
}
if c.Expired(time.Now()) {
return pemBytes, fmt.Errorf("provided CA certificate is expired; %s", c.Details.Name)
return pemBytes, fmt.Errorf("%s: %w", c.Details.Name, ErrNotSelfSigned)
}
sum, err := c.Sha256Sum()
@@ -64,6 +74,10 @@ func (ncp *NebulaCAPool) AddCACertificate(pemBytes []byte) ([]byte, error) {
}
ncp.CAs[sum] = c
if c.Expired(time.Now()) {
return pemBytes, fmt.Errorf("%s: %w", c.Details.Name, ErrExpired)
}
return pemBytes, nil
}
@@ -77,9 +91,15 @@ func (ncp *NebulaCAPool) ResetCertBlocklist() {
ncp.certBlocklist = make(map[string]struct{})
}
// IsBlocklisted returns true if the fingerprint fails to generate or has been explicitly blocklisted
// NOTE: This uses an internal cache for Sha256Sum() that will not be invalidated
// automatically if you manually change any fields in the NebulaCertificate.
func (ncp *NebulaCAPool) IsBlocklisted(c *NebulaCertificate) bool {
h, err := c.Sha256Sum()
return ncp.isBlocklistedWithCache(c, false)
}
// isBlocklistedWithCache returns true if the fingerprint fails to generate or has been explicitly blocklisted
func (ncp *NebulaCAPool) isBlocklistedWithCache(c *NebulaCertificate, useCache bool) bool {
h, err := c.sha256SumWithCache(useCache)
if err != nil {
return true
}

View File

@@ -2,35 +2,55 @@ package cert
import (
"bytes"
"crypto"
"crypto/ecdh"
"crypto/ecdsa"
"crypto/ed25519"
"crypto/elliptic"
"crypto/rand"
"crypto/sha256"
"encoding/binary"
"encoding/hex"
"encoding/json"
"encoding/pem"
"errors"
"fmt"
"math"
"math/big"
"net"
"sync/atomic"
"time"
"github.com/golang/protobuf/proto"
"golang.org/x/crypto/curve25519"
"golang.org/x/crypto/ed25519"
"google.golang.org/protobuf/proto"
)
const publicKeyLen = 32
const (
CertBanner = "NEBULA CERTIFICATE"
X25519PrivateKeyBanner = "NEBULA X25519 PRIVATE KEY"
X25519PublicKeyBanner = "NEBULA X25519 PUBLIC KEY"
Ed25519PrivateKeyBanner = "NEBULA ED25519 PRIVATE KEY"
Ed25519PublicKeyBanner = "NEBULA ED25519 PUBLIC KEY"
CertBanner = "NEBULA CERTIFICATE"
X25519PrivateKeyBanner = "NEBULA X25519 PRIVATE KEY"
X25519PublicKeyBanner = "NEBULA X25519 PUBLIC KEY"
EncryptedEd25519PrivateKeyBanner = "NEBULA ED25519 ENCRYPTED PRIVATE KEY"
Ed25519PrivateKeyBanner = "NEBULA ED25519 PRIVATE KEY"
Ed25519PublicKeyBanner = "NEBULA ED25519 PUBLIC KEY"
P256PrivateKeyBanner = "NEBULA P256 PRIVATE KEY"
P256PublicKeyBanner = "NEBULA P256 PUBLIC KEY"
EncryptedECDSAP256PrivateKeyBanner = "NEBULA ECDSA P256 ENCRYPTED PRIVATE KEY"
ECDSAP256PrivateKeyBanner = "NEBULA ECDSA P256 PRIVATE KEY"
)
type NebulaCertificate struct {
Details NebulaCertificateDetails
Signature []byte
// the cached hex string of the calculated sha256sum
// for VerifyWithCache
sha256sum atomic.Pointer[string]
// the cached public key bytes if they were verified as the signer
// for VerifyWithCache
signatureVerified atomic.Pointer[[]byte]
}
type NebulaCertificateDetails struct {
@@ -46,10 +66,25 @@ type NebulaCertificateDetails struct {
// Map of groups for faster lookup
InvertedGroups map[string]struct{}
Curve Curve
}
type NebulaEncryptedData struct {
EncryptionMetadata NebulaEncryptionMetadata
Ciphertext []byte
}
type NebulaEncryptionMetadata struct {
EncryptionAlgorithm string
Argon2Parameters Argon2Parameters
}
type m map[string]interface{}
// Returned if we try to unmarshal an encrypted private key without a passphrase
var ErrPrivateKeyEncrypted = errors.New("private key must be decrypted")
// UnmarshalNebulaCertificate will unmarshal a protobuf byte representation of a nebula cert
func UnmarshalNebulaCertificate(b []byte) (*NebulaCertificate, error) {
if len(b) == 0 {
@@ -84,6 +119,7 @@ func UnmarshalNebulaCertificate(b []byte) (*NebulaCertificate, error) {
PublicKey: make([]byte, len(rc.Details.PublicKey)),
IsCA: rc.Details.IsCA,
InvertedGroups: make(map[string]struct{}),
Curve: rc.Details.Curve,
},
Signature: make([]byte, len(rc.Signature)),
}
@@ -134,6 +170,28 @@ func UnmarshalNebulaCertificateFromPEM(b []byte) (*NebulaCertificate, []byte, er
return nc, r, err
}
func MarshalPrivateKey(curve Curve, b []byte) []byte {
switch curve {
case Curve_CURVE25519:
return pem.EncodeToMemory(&pem.Block{Type: X25519PrivateKeyBanner, Bytes: b})
case Curve_P256:
return pem.EncodeToMemory(&pem.Block{Type: P256PrivateKeyBanner, Bytes: b})
default:
return nil
}
}
func MarshalSigningPrivateKey(curve Curve, b []byte) []byte {
switch curve {
case Curve_CURVE25519:
return pem.EncodeToMemory(&pem.Block{Type: Ed25519PrivateKeyBanner, Bytes: b})
case Curve_P256:
return pem.EncodeToMemory(&pem.Block{Type: ECDSAP256PrivateKeyBanner, Bytes: b})
default:
return nil
}
}
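
A short sketch contrasting the two marshal helpers above: MarshalPrivateKey emits the X25519/P256 banners used for host keys, while MarshalSigningPrivateKey emits the Ed25519/ECDSA banners used for signing keys. The zero-filled byte slices are placeholders, not real key material.

package main

import (
	"fmt"

	"github.com/slackhq/nebula/cert"
)

func main() {
	hostKey := make([]byte, 32)    // placeholder 32-byte X25519 scalar
	signingKey := make([]byte, 64) // placeholder 64-byte Ed25519 private key

	// Both helpers return nil for an unknown curve; each picks the banner
	// matching the curve passed in.
	fmt.Print(string(cert.MarshalPrivateKey(cert.Curve_CURVE25519, hostKey)))
	fmt.Print(string(cert.MarshalSigningPrivateKey(cert.Curve_CURVE25519, signingKey)))
}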
// MarshalX25519PrivateKey is a simple helper to PEM encode an X25519 private key
func MarshalX25519PrivateKey(b []byte) []byte {
return pem.EncodeToMemory(&pem.Block{Type: X25519PrivateKeyBanner, Bytes: b})
@@ -144,6 +202,90 @@ func MarshalEd25519PrivateKey(key ed25519.PrivateKey) []byte {
return pem.EncodeToMemory(&pem.Block{Type: Ed25519PrivateKeyBanner, Bytes: key})
}
func UnmarshalPrivateKey(b []byte) ([]byte, []byte, Curve, error) {
k, r := pem.Decode(b)
if k == nil {
return nil, r, 0, fmt.Errorf("input did not contain a valid PEM encoded block")
}
var expectedLen int
var curve Curve
switch k.Type {
case X25519PrivateKeyBanner:
expectedLen = 32
curve = Curve_CURVE25519
case P256PrivateKeyBanner:
expectedLen = 32
curve = Curve_P256
default:
return nil, r, 0, fmt.Errorf("bytes did not contain a proper nebula private key banner")
}
if len(k.Bytes) != expectedLen {
return nil, r, 0, fmt.Errorf("key was not %d bytes, is invalid %s private key", expectedLen, curve)
}
return k.Bytes, r, curve, nil
}
func UnmarshalSigningPrivateKey(b []byte) ([]byte, []byte, Curve, error) {
k, r := pem.Decode(b)
if k == nil {
return nil, r, 0, fmt.Errorf("input did not contain a valid PEM encoded block")
}
var curve Curve
switch k.Type {
case EncryptedEd25519PrivateKeyBanner:
return nil, nil, Curve_CURVE25519, ErrPrivateKeyEncrypted
case EncryptedECDSAP256PrivateKeyBanner:
return nil, nil, Curve_P256, ErrPrivateKeyEncrypted
case Ed25519PrivateKeyBanner:
curve = Curve_CURVE25519
if len(k.Bytes) != ed25519.PrivateKeySize {
return nil, r, 0, fmt.Errorf("key was not %d bytes, is invalid Ed25519 private key", ed25519.PrivateKeySize)
}
case ECDSAP256PrivateKeyBanner:
curve = Curve_P256
if len(k.Bytes) != 32 {
return nil, r, 0, fmt.Errorf("key was not 32 bytes, is invalid ECDSA P256 private key")
}
default:
return nil, r, 0, fmt.Errorf("bytes did not contain a proper nebula Ed25519/ECDSA private key banner")
}
return k.Bytes, r, curve, nil
}
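
A sketch of the ErrPrivateKeyEncrypted flow introduced above: try the unencrypted path first, then fall back to DecryptAndUnmarshalSigningPrivateKey when the banner indicates an encrypted key. The file name and passphrase are hypothetical; a real caller would prompt for the passphrase.

package main

import (
	"errors"
	"log"
	"os"

	"github.com/slackhq/nebula/cert"
)

func main() {
	pemKey, err := os.ReadFile("ca.key") // hypothetical path
	if err != nil {
		log.Fatal(err)
	}
	rawKey, _, curve, err := cert.UnmarshalSigningPrivateKey(pemKey)
	if errors.Is(err, cert.ErrPrivateKeyEncrypted) {
		// The banner indicated an encrypted key; retry with a passphrase.
		passphrase := []byte("example passphrase") // hypothetical; normally prompted for
		curve, rawKey, _, err = cert.DecryptAndUnmarshalSigningPrivateKey(passphrase, pemKey)
	}
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("loaded %s signing key (%d bytes)", curve, len(rawKey))
}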
// EncryptAndMarshalSigningPrivateKey is a simple helper to encrypt and PEM encode a private key
func EncryptAndMarshalSigningPrivateKey(curve Curve, b []byte, passphrase []byte, kdfParams *Argon2Parameters) ([]byte, error) {
ciphertext, err := aes256Encrypt(passphrase, kdfParams, b)
if err != nil {
return nil, err
}
b, err = proto.Marshal(&RawNebulaEncryptedData{
EncryptionMetadata: &RawNebulaEncryptionMetadata{
EncryptionAlgorithm: "AES-256-GCM",
Argon2Parameters: &RawNebulaArgon2Parameters{
Version: kdfParams.version,
Memory: kdfParams.Memory,
Parallelism: uint32(kdfParams.Parallelism),
Iterations: kdfParams.Iterations,
Salt: kdfParams.salt,
},
},
Ciphertext: ciphertext,
})
if err != nil {
return nil, err
}
switch curve {
case Curve_CURVE25519:
return pem.EncodeToMemory(&pem.Block{Type: EncryptedEd25519PrivateKeyBanner, Bytes: b}), nil
case Curve_P256:
return pem.EncodeToMemory(&pem.Block{Type: EncryptedECDSAP256PrivateKeyBanner, Bytes: b}), nil
default:
return nil, fmt.Errorf("invalid curve: %v", curve)
}
}
// UnmarshalX25519PrivateKey will try to pem decode an X25519 private key, returning any other bytes b
// or an error on failure
func UnmarshalX25519PrivateKey(b []byte) ([]byte, []byte, error) {
@@ -168,9 +310,13 @@ func UnmarshalEd25519PrivateKey(b []byte) (ed25519.PrivateKey, []byte, error) {
if k == nil {
return nil, r, fmt.Errorf("input did not contain a valid PEM encoded block")
}
if k.Type != Ed25519PrivateKeyBanner {
if k.Type == EncryptedEd25519PrivateKeyBanner {
return nil, r, ErrPrivateKeyEncrypted
} else if k.Type != Ed25519PrivateKeyBanner {
return nil, r, fmt.Errorf("bytes did not contain a proper nebula Ed25519 private key banner")
}
if len(k.Bytes) != ed25519.PrivateKeySize {
return nil, r, fmt.Errorf("key was not 64 bytes, is invalid ed25519 private key")
}
@@ -178,6 +324,126 @@ func UnmarshalEd25519PrivateKey(b []byte) (ed25519.PrivateKey, []byte, error) {
return k.Bytes, r, nil
}
// UnmarshalNebulaEncryptedData will unmarshal a protobuf byte representation of a nebula cert into its
// protobuf-generated struct.
func UnmarshalNebulaEncryptedData(b []byte) (*NebulaEncryptedData, error) {
if len(b) == 0 {
return nil, fmt.Errorf("nil byte array")
}
var rned RawNebulaEncryptedData
err := proto.Unmarshal(b, &rned)
if err != nil {
return nil, err
}
if rned.EncryptionMetadata == nil {
return nil, fmt.Errorf("encoded EncryptionMetadata was nil")
}
if rned.EncryptionMetadata.Argon2Parameters == nil {
return nil, fmt.Errorf("encoded Argon2Parameters was nil")
}
params, err := unmarshalArgon2Parameters(rned.EncryptionMetadata.Argon2Parameters)
if err != nil {
return nil, err
}
ned := NebulaEncryptedData{
EncryptionMetadata: NebulaEncryptionMetadata{
EncryptionAlgorithm: rned.EncryptionMetadata.EncryptionAlgorithm,
Argon2Parameters: *params,
},
Ciphertext: rned.Ciphertext,
}
return &ned, nil
}
func unmarshalArgon2Parameters(params *RawNebulaArgon2Parameters) (*Argon2Parameters, error) {
if params.Version < math.MinInt32 || params.Version > math.MaxInt32 {
return nil, fmt.Errorf("Argon2Parameters Version must be at least %d and no more than %d", math.MinInt32, math.MaxInt32)
}
if params.Memory <= 0 || params.Memory > math.MaxUint32 {
return nil, fmt.Errorf("Argon2Parameters Memory must be be greater than 0 and no more than %d KiB", uint32(math.MaxUint32))
}
if params.Parallelism <= 0 || params.Parallelism > math.MaxUint8 {
return nil, fmt.Errorf("Argon2Parameters Parallelism must be be greater than 0 and no more than %d", math.MaxUint8)
}
if params.Iterations <= 0 || params.Iterations > math.MaxUint32 {
return nil, fmt.Errorf("-argon-iterations must be be greater than 0 and no more than %d", uint32(math.MaxUint32))
}
return &Argon2Parameters{
version: rune(params.Version),
Memory: uint32(params.Memory),
Parallelism: uint8(params.Parallelism),
Iterations: uint32(params.Iterations),
salt: params.Salt,
}, nil
}
// DecryptAndUnmarshalSigningPrivateKey will try to pem decode and decrypt an Ed25519/ECDSA private key with
// the given passphrase, returning any other bytes b or an error on failure
func DecryptAndUnmarshalSigningPrivateKey(passphrase, b []byte) (Curve, []byte, []byte, error) {
var curve Curve
k, r := pem.Decode(b)
if k == nil {
return curve, nil, r, fmt.Errorf("input did not contain a valid PEM encoded block")
}
switch k.Type {
case EncryptedEd25519PrivateKeyBanner:
curve = Curve_CURVE25519
case EncryptedECDSAP256PrivateKeyBanner:
curve = Curve_P256
default:
return curve, nil, r, fmt.Errorf("bytes did not contain a proper nebula encrypted Ed25519/ECDSA private key banner")
}
ned, err := UnmarshalNebulaEncryptedData(k.Bytes)
if err != nil {
return curve, nil, r, err
}
var bytes []byte
switch ned.EncryptionMetadata.EncryptionAlgorithm {
case "AES-256-GCM":
bytes, err = aes256Decrypt(passphrase, &ned.EncryptionMetadata.Argon2Parameters, ned.Ciphertext)
if err != nil {
return curve, nil, r, err
}
default:
return curve, nil, r, fmt.Errorf("unsupported encryption algorithm: %s", ned.EncryptionMetadata.EncryptionAlgorithm)
}
switch curve {
case Curve_CURVE25519:
if len(bytes) != ed25519.PrivateKeySize {
return curve, nil, r, fmt.Errorf("key was not %d bytes, is invalid ed25519 private key", ed25519.PrivateKeySize)
}
case Curve_P256:
if len(bytes) != 32 {
return curve, nil, r, fmt.Errorf("key was not 32 bytes, is invalid ECDSA P256 private key")
}
}
return curve, bytes, r, nil
}
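
A round-trip sketch of the encrypt/decrypt pair above, using Argon2 parameters as they appear in the package tests further down (64*1024 KiB memory, parallelism 4, 3 iterations via NewArgon2Parameters, an assumption based on that test). The passphrase is a placeholder.

package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"log"

	"github.com/slackhq/nebula/cert"
)

func main() {
	_, priv, err := ed25519.GenerateKey(rand.Reader)
	if err != nil {
		log.Fatal(err)
	}
	passphrase := []byte("example passphrase") // hypothetical
	// Argon2 parameters mirroring the values used in the tests below.
	kdf := cert.NewArgon2Parameters(64*1024, 4, 3)
	pemBytes, err := cert.EncryptAndMarshalSigningPrivateKey(cert.Curve_CURVE25519, priv, passphrase, kdf)
	if err != nil {
		log.Fatal(err)
	}
	// Round trip: the same passphrase recovers the original 64-byte key.
	curve, rawKey, _, err := cert.DecryptAndUnmarshalSigningPrivateKey(passphrase, pemBytes)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("decrypted %s key, %d bytes", curve, len(rawKey))
}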
func MarshalPublicKey(curve Curve, b []byte) []byte {
switch curve {
case Curve_CURVE25519:
return pem.EncodeToMemory(&pem.Block{Type: X25519PublicKeyBanner, Bytes: b})
case Curve_P256:
return pem.EncodeToMemory(&pem.Block{Type: P256PublicKeyBanner, Bytes: b})
default:
return nil
}
}
// MarshalX25519PublicKey is a simple helper to PEM encode an X25519 public key
func MarshalX25519PublicKey(b []byte) []byte {
return pem.EncodeToMemory(&pem.Block{Type: X25519PublicKeyBanner, Bytes: b})
@@ -188,6 +454,30 @@ func MarshalEd25519PublicKey(key ed25519.PublicKey) []byte {
return pem.EncodeToMemory(&pem.Block{Type: Ed25519PublicKeyBanner, Bytes: key})
}
func UnmarshalPublicKey(b []byte) ([]byte, []byte, Curve, error) {
k, r := pem.Decode(b)
if k == nil {
return nil, r, 0, fmt.Errorf("input did not contain a valid PEM encoded block")
}
var expectedLen int
var curve Curve
switch k.Type {
case X25519PublicKeyBanner:
expectedLen = 32
curve = Curve_CURVE25519
case P256PublicKeyBanner:
// Uncompressed
expectedLen = 65
curve = Curve_P256
default:
return nil, r, 0, fmt.Errorf("bytes did not contain a proper nebula public key banner")
}
if len(k.Bytes) != expectedLen {
return nil, r, 0, fmt.Errorf("key was not %d bytes, is invalid %s public key", expectedLen, curve)
}
return k.Bytes, r, curve, nil
}
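
A brief sketch pairing MarshalPublicKey with UnmarshalPublicKey: the unmarshal side reports which banner it found via the Curve return value and enforces the per-curve length (32 bytes for X25519, 65 for an uncompressed P256 point). The zero bytes stand in for a real key, as in the tests below.

package main

import (
	"log"

	"github.com/slackhq/nebula/cert"
)

func main() {
	raw := make([]byte, 32) // placeholder X25519 public key
	pemBytes := cert.MarshalPublicKey(cert.Curve_CURVE25519, raw)
	key, rest, curve, err := cert.UnmarshalPublicKey(pemBytes)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("curve=%s len=%d remaining=%d", curve, len(key), len(rest))
}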
// UnmarshalX25519PublicKey will try to pem decode an X25519 public key, returning any other bytes b
// or an error on failure
func UnmarshalX25519PublicKey(b []byte) ([]byte, []byte, error) {
@@ -223,27 +513,86 @@ func UnmarshalEd25519PublicKey(b []byte) (ed25519.PublicKey, []byte, error) {
}
// Sign signs a nebula cert with the provided private key
func (nc *NebulaCertificate) Sign(key ed25519.PrivateKey) error {
func (nc *NebulaCertificate) Sign(curve Curve, key []byte) error {
if curve != nc.Details.Curve {
return fmt.Errorf("curve in cert and private key supplied don't match")
}
b, err := proto.Marshal(nc.getRawDetails())
if err != nil {
return err
}
sig, err := key.Sign(rand.Reader, b, crypto.Hash(0))
if err != nil {
return err
var sig []byte
switch curve {
case Curve_CURVE25519:
signer := ed25519.PrivateKey(key)
sig = ed25519.Sign(signer, b)
case Curve_P256:
signer := &ecdsa.PrivateKey{
PublicKey: ecdsa.PublicKey{
Curve: elliptic.P256(),
},
// ref: https://github.com/golang/go/blob/go1.19/src/crypto/x509/sec1.go#L95
D: new(big.Int).SetBytes(key),
}
// ref: https://github.com/golang/go/blob/go1.19/src/crypto/x509/sec1.go#L119
signer.X, signer.Y = signer.Curve.ScalarBaseMult(key)
// We need to hash first for ECDSA
// - https://pkg.go.dev/crypto/ecdsa#SignASN1
hashed := sha256.Sum256(b)
sig, err = ecdsa.SignASN1(rand.Reader, signer, hashed[:])
if err != nil {
return err
}
default:
return fmt.Errorf("invalid curve: %s", nc.Details.Curve)
}
nc.Signature = sig
return nil
}
// CheckSignature verifies the signature against the provided public key
func (nc *NebulaCertificate) CheckSignature(key ed25519.PublicKey) bool {
func (nc *NebulaCertificate) CheckSignature(key []byte) bool {
b, err := proto.Marshal(nc.getRawDetails())
if err != nil {
return false
}
return ed25519.Verify(key, b, nc.Signature)
switch nc.Details.Curve {
case Curve_CURVE25519:
return ed25519.Verify(ed25519.PublicKey(key), b, nc.Signature)
case Curve_P256:
x, y := elliptic.Unmarshal(elliptic.P256(), key)
pubKey := &ecdsa.PublicKey{Curve: elliptic.P256(), X: x, Y: y}
hashed := sha256.Sum256(b)
return ecdsa.VerifyASN1(pubKey, hashed[:], nc.Signature)
default:
return false
}
}
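
A sketch of signing and verifying with the new P256 path, mirroring TestNebulaCertificate_SignP256 further down: the private scalar is flattened to 32 bytes and the public key to a 65-byte uncompressed point before being handed to Sign and CheckSignature. The certificate details are illustrative.

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"log"
	"time"

	"github.com/slackhq/nebula/cert"
)

func main() {
	priv, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		log.Fatal(err)
	}
	rawPriv := priv.D.FillBytes(make([]byte, 32))
	pub := elliptic.Marshal(elliptic.P256(), priv.PublicKey.X, priv.PublicKey.Y)

	nc := cert.NebulaCertificate{
		Details: cert.NebulaCertificateDetails{
			Name:      "example",
			NotBefore: time.Now(),
			NotAfter:  time.Now().Add(time.Minute),
			PublicKey: pub,
			Curve:     cert.Curve_P256,
		},
	}
	if err := nc.Sign(cert.Curve_P256, rawPriv); err != nil {
		log.Fatal(err)
	}
	log.Println("signature verifies:", nc.CheckSignature(pub))
}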
// NOTE: This uses an internal cache that will not be invalidated automatically
// if you manually change any fields in the NebulaCertificate.
func (nc *NebulaCertificate) checkSignatureWithCache(key []byte, useCache bool) bool {
if !useCache {
return nc.CheckSignature(key)
}
if v := nc.signatureVerified.Load(); v != nil {
return bytes.Equal(*v, key)
}
verified := nc.CheckSignature(key)
if verified {
keyCopy := make([]byte, len(key))
copy(keyCopy, key)
nc.signatureVerified.Store(&keyCopy)
}
return verified
}
// Expired will return true if the nebula cert is too young or too old compared to the provided time, otherwise false
@@ -253,8 +602,27 @@ func (nc *NebulaCertificate) Expired(t time.Time) bool {
// Verify will ensure a certificate is good in all respects (expiry, group membership, signature, cert blocklist, etc)
func (nc *NebulaCertificate) Verify(t time.Time, ncp *NebulaCAPool) (bool, error) {
if ncp.IsBlocklisted(nc) {
return false, fmt.Errorf("certificate has been blocked")
return nc.verify(t, ncp, false)
}
// VerifyWithCache will ensure a certificate is good in all respects (expiry, group membership, signature, cert blocklist, etc)
//
// NOTE: This uses an internal cache that will not be invalidated automatically
// if you manually change any fields in the NebulaCertificate.
func (nc *NebulaCertificate) VerifyWithCache(t time.Time, ncp *NebulaCAPool) (bool, error) {
return nc.verify(t, ncp, true)
}
// ResetCache resets the cache used by VerifyWithCache.
func (nc *NebulaCertificate) ResetCache() {
nc.sha256sum.Store(nil)
nc.signatureVerified.Store(nil)
}
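
A sketch of how a caller might use the cached verification path added above: VerifyWithCache on a hot path, ResetCache whenever certificate fields are mutated by hand. The surrounding function names are illustrative.

package certusage

import (
	"log"
	"time"

	"github.com/slackhq/nebula/cert"
)

// verifyHot assumes nc and pool were loaded elsewhere; VerifyWithCache
// memoizes the sha256 fingerprint and the signature check, so repeated
// calls avoid re-hashing and re-verifying the same certificate.
func verifyHot(nc *cert.NebulaCertificate, pool *cert.NebulaCAPool) {
	ok, err := nc.VerifyWithCache(time.Now(), pool)
	if err != nil {
		log.Println("certificate rejected:", err)
		return
	}
	log.Println("certificate ok:", ok)
}

// invalidate clears the cache by hand; this is required before the next
// VerifyWithCache call if any Details field was modified after verification.
func invalidate(nc *cert.NebulaCertificate) {
	nc.ResetCache()
}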
// Verify will ensure a certificate is good in all respects (expiry, group membership, signature, cert blocklist, etc)
func (nc *NebulaCertificate) verify(t time.Time, ncp *NebulaCAPool, useCache bool) (bool, error) {
if ncp.isBlocklistedWithCache(nc, useCache) {
return false, ErrBlockListed
}
signer, err := ncp.GetCAForCert(nc)
@@ -263,15 +631,15 @@ func (nc *NebulaCertificate) Verify(t time.Time, ncp *NebulaCAPool) (bool, error
}
if signer.Expired(t) {
return false, fmt.Errorf("root certificate is expired")
return false, ErrRootExpired
}
if nc.Expired(t) {
return false, fmt.Errorf("certificate is expired")
return false, ErrExpired
}
if !nc.CheckSignature(signer.Details.PublicKey) {
return false, fmt.Errorf("certificate signature did not match")
if !nc.checkSignatureWithCache(signer.Details.PublicKey, useCache) {
return false, ErrSignatureMismatch
}
if err := nc.CheckRootConstrains(signer); err != nil {
@@ -324,22 +692,52 @@ func (nc *NebulaCertificate) CheckRootConstrains(signer *NebulaCertificate) erro
}
// VerifyPrivateKey checks that the public key in the Nebula certificate and a supplied private key match
func (nc *NebulaCertificate) VerifyPrivateKey(key []byte) error {
func (nc *NebulaCertificate) VerifyPrivateKey(curve Curve, key []byte) error {
if curve != nc.Details.Curve {
return fmt.Errorf("curve in cert and private key supplied don't match")
}
if nc.Details.IsCA {
// the call to PublicKey below will panic slice bounds out of range otherwise
if len(key) != ed25519.PrivateKeySize {
return fmt.Errorf("key was not 64 bytes, is invalid ed25519 private key")
}
switch curve {
case Curve_CURVE25519:
// the call to PublicKey below will panic slice bounds out of range otherwise
if len(key) != ed25519.PrivateKeySize {
return fmt.Errorf("key was not 64 bytes, is invalid ed25519 private key")
}
if !ed25519.PublicKey(nc.Details.PublicKey).Equal(ed25519.PrivateKey(key).Public()) {
return fmt.Errorf("public key in cert and private key supplied don't match")
if !ed25519.PublicKey(nc.Details.PublicKey).Equal(ed25519.PrivateKey(key).Public()) {
return fmt.Errorf("public key in cert and private key supplied don't match")
}
case Curve_P256:
privkey, err := ecdh.P256().NewPrivateKey(key)
if err != nil {
return fmt.Errorf("cannot parse private key as P256")
}
pub := privkey.PublicKey().Bytes()
if !bytes.Equal(pub, nc.Details.PublicKey) {
return fmt.Errorf("public key in cert and private key supplied don't match")
}
default:
return fmt.Errorf("invalid curve: %s", curve)
}
return nil
}
pub, err := curve25519.X25519(key, curve25519.Basepoint)
if err != nil {
return err
var pub []byte
switch curve {
case Curve_CURVE25519:
var err error
pub, err = curve25519.X25519(key, curve25519.Basepoint)
if err != nil {
return err
}
case Curve_P256:
privkey, err := ecdh.P256().NewPrivateKey(key)
if err != nil {
return err
}
pub = privkey.PublicKey().Bytes()
default:
return fmt.Errorf("invalid curve: %s", curve)
}
if !bytes.Equal(pub, nc.Details.PublicKey) {
return fmt.Errorf("public key in cert and private key supplied don't match")
@@ -393,6 +791,7 @@ func (nc *NebulaCertificate) String() string {
s += fmt.Sprintf("\t\tIs CA: %v\n", nc.Details.IsCA)
s += fmt.Sprintf("\t\tIssuer: %s\n", nc.Details.Issuer)
s += fmt.Sprintf("\t\tPublic key: %x\n", nc.Details.PublicKey)
s += fmt.Sprintf("\t\tCurve: %s\n", nc.Details.Curve)
s += "\t}\n"
fp, err := nc.Sha256Sum()
if err == nil {
@@ -413,6 +812,7 @@ func (nc *NebulaCertificate) getRawDetails() *RawNebulaCertificateDetails {
NotAfter: nc.Details.NotAfter.Unix(),
PublicKey: make([]byte, len(nc.Details.PublicKey)),
IsCA: nc.Details.IsCA,
Curve: nc.Details.Curve,
}
for _, ipNet := range nc.Details.Ips {
@@ -461,6 +861,25 @@ func (nc *NebulaCertificate) Sha256Sum() (string, error) {
return hex.EncodeToString(sum[:]), nil
}
// NOTE: This uses an internal cache that will not be invalidated automatically
// if you manually change any fields in the NebulaCertificate.
func (nc *NebulaCertificate) sha256SumWithCache(useCache bool) (string, error) {
if !useCache {
return nc.Sha256Sum()
}
if s := nc.sha256sum.Load(); s != nil {
return *s, nil
}
s, err := nc.Sha256Sum()
if err != nil {
return s, err
}
nc.sha256sum.Store(&s)
return s, nil
}
func (nc *NebulaCertificate) MarshalJSON() ([]byte, error) {
toString := func(ips []*net.IPNet) []string {
s := []string{}
@@ -482,6 +901,7 @@ func (nc *NebulaCertificate) MarshalJSON() ([]byte, error) {
"publicKey": fmt.Sprintf("%x", nc.Details.PublicKey),
"isCa": nc.Details.IsCA,
"issuer": nc.Details.Issuer,
"curve": nc.Details.Curve.String(),
},
"fingerprint": fp,
"signature": fmt.Sprintf("%x", nc.Signature),

View File

@@ -1,7 +1,7 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.26.0
// protoc v3.14.0
// protoc-gen-go v1.30.0
// protoc v3.21.5
// source: cert.proto
package cert
@@ -20,6 +20,52 @@ const (
_ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
)
type Curve int32
const (
Curve_CURVE25519 Curve = 0
Curve_P256 Curve = 1
)
// Enum value maps for Curve.
var (
Curve_name = map[int32]string{
0: "CURVE25519",
1: "P256",
}
Curve_value = map[string]int32{
"CURVE25519": 0,
"P256": 1,
}
)
func (x Curve) Enum() *Curve {
p := new(Curve)
*p = x
return p
}
func (x Curve) String() string {
return protoimpl.X.EnumStringOf(x.Descriptor(), protoreflect.EnumNumber(x))
}
func (Curve) Descriptor() protoreflect.EnumDescriptor {
return file_cert_proto_enumTypes[0].Descriptor()
}
func (Curve) Type() protoreflect.EnumType {
return &file_cert_proto_enumTypes[0]
}
func (x Curve) Number() protoreflect.EnumNumber {
return protoreflect.EnumNumber(x)
}
// Deprecated: Use Curve.Descriptor instead.
func (Curve) EnumDescriptor() ([]byte, []int) {
return file_cert_proto_rawDescGZIP(), []int{0}
}
type RawNebulaCertificate struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
@@ -91,6 +137,7 @@ type RawNebulaCertificateDetails struct {
IsCA bool `protobuf:"varint,8,opt,name=IsCA,proto3" json:"IsCA,omitempty"`
// sha-256 of the issuer certificate, if this field is blank the cert is self-signed
Issuer []byte `protobuf:"bytes,9,opt,name=Issuer,proto3" json:"Issuer,omitempty"`
Curve Curve `protobuf:"varint,100,opt,name=curve,proto3,enum=cert.Curve" json:"curve,omitempty"`
}
func (x *RawNebulaCertificateDetails) Reset() {
@@ -188,6 +235,202 @@ func (x *RawNebulaCertificateDetails) GetIssuer() []byte {
return nil
}
func (x *RawNebulaCertificateDetails) GetCurve() Curve {
if x != nil {
return x.Curve
}
return Curve_CURVE25519
}
type RawNebulaEncryptedData struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
EncryptionMetadata *RawNebulaEncryptionMetadata `protobuf:"bytes,1,opt,name=EncryptionMetadata,proto3" json:"EncryptionMetadata,omitempty"`
Ciphertext []byte `protobuf:"bytes,2,opt,name=Ciphertext,proto3" json:"Ciphertext,omitempty"`
}
func (x *RawNebulaEncryptedData) Reset() {
*x = RawNebulaEncryptedData{}
if protoimpl.UnsafeEnabled {
mi := &file_cert_proto_msgTypes[2]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *RawNebulaEncryptedData) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*RawNebulaEncryptedData) ProtoMessage() {}
func (x *RawNebulaEncryptedData) ProtoReflect() protoreflect.Message {
mi := &file_cert_proto_msgTypes[2]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use RawNebulaEncryptedData.ProtoReflect.Descriptor instead.
func (*RawNebulaEncryptedData) Descriptor() ([]byte, []int) {
return file_cert_proto_rawDescGZIP(), []int{2}
}
func (x *RawNebulaEncryptedData) GetEncryptionMetadata() *RawNebulaEncryptionMetadata {
if x != nil {
return x.EncryptionMetadata
}
return nil
}
func (x *RawNebulaEncryptedData) GetCiphertext() []byte {
if x != nil {
return x.Ciphertext
}
return nil
}
type RawNebulaEncryptionMetadata struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
EncryptionAlgorithm string `protobuf:"bytes,1,opt,name=EncryptionAlgorithm,proto3" json:"EncryptionAlgorithm,omitempty"`
Argon2Parameters *RawNebulaArgon2Parameters `protobuf:"bytes,2,opt,name=Argon2Parameters,proto3" json:"Argon2Parameters,omitempty"`
}
func (x *RawNebulaEncryptionMetadata) Reset() {
*x = RawNebulaEncryptionMetadata{}
if protoimpl.UnsafeEnabled {
mi := &file_cert_proto_msgTypes[3]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *RawNebulaEncryptionMetadata) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*RawNebulaEncryptionMetadata) ProtoMessage() {}
func (x *RawNebulaEncryptionMetadata) ProtoReflect() protoreflect.Message {
mi := &file_cert_proto_msgTypes[3]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use RawNebulaEncryptionMetadata.ProtoReflect.Descriptor instead.
func (*RawNebulaEncryptionMetadata) Descriptor() ([]byte, []int) {
return file_cert_proto_rawDescGZIP(), []int{3}
}
func (x *RawNebulaEncryptionMetadata) GetEncryptionAlgorithm() string {
if x != nil {
return x.EncryptionAlgorithm
}
return ""
}
func (x *RawNebulaEncryptionMetadata) GetArgon2Parameters() *RawNebulaArgon2Parameters {
if x != nil {
return x.Argon2Parameters
}
return nil
}
type RawNebulaArgon2Parameters struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Version int32 `protobuf:"varint,1,opt,name=version,proto3" json:"version,omitempty"` // rune in Go
Memory uint32 `protobuf:"varint,2,opt,name=memory,proto3" json:"memory,omitempty"`
Parallelism uint32 `protobuf:"varint,4,opt,name=parallelism,proto3" json:"parallelism,omitempty"` // uint8 in Go
Iterations uint32 `protobuf:"varint,3,opt,name=iterations,proto3" json:"iterations,omitempty"`
Salt []byte `protobuf:"bytes,5,opt,name=salt,proto3" json:"salt,omitempty"`
}
func (x *RawNebulaArgon2Parameters) Reset() {
*x = RawNebulaArgon2Parameters{}
if protoimpl.UnsafeEnabled {
mi := &file_cert_proto_msgTypes[4]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *RawNebulaArgon2Parameters) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*RawNebulaArgon2Parameters) ProtoMessage() {}
func (x *RawNebulaArgon2Parameters) ProtoReflect() protoreflect.Message {
mi := &file_cert_proto_msgTypes[4]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use RawNebulaArgon2Parameters.ProtoReflect.Descriptor instead.
func (*RawNebulaArgon2Parameters) Descriptor() ([]byte, []int) {
return file_cert_proto_rawDescGZIP(), []int{4}
}
func (x *RawNebulaArgon2Parameters) GetVersion() int32 {
if x != nil {
return x.Version
}
return 0
}
func (x *RawNebulaArgon2Parameters) GetMemory() uint32 {
if x != nil {
return x.Memory
}
return 0
}
func (x *RawNebulaArgon2Parameters) GetParallelism() uint32 {
if x != nil {
return x.Parallelism
}
return 0
}
func (x *RawNebulaArgon2Parameters) GetIterations() uint32 {
if x != nil {
return x.Iterations
}
return 0
}
func (x *RawNebulaArgon2Parameters) GetSalt() []byte {
if x != nil {
return x.Salt
}
return nil
}
var File_cert_proto protoreflect.FileDescriptor
var file_cert_proto_rawDesc = []byte{
@@ -199,7 +442,7 @@ var file_cert_proto_rawDesc = []byte{
0x69, 0x66, 0x69, 0x63, 0x61, 0x74, 0x65, 0x44, 0x65, 0x74, 0x61, 0x69, 0x6c, 0x73, 0x52, 0x07,
0x44, 0x65, 0x74, 0x61, 0x69, 0x6c, 0x73, 0x12, 0x1c, 0x0a, 0x09, 0x53, 0x69, 0x67, 0x6e, 0x61,
0x74, 0x75, 0x72, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x09, 0x53, 0x69, 0x67, 0x6e,
0x61, 0x74, 0x75, 0x72, 0x65, 0x22, 0xf9, 0x01, 0x0a, 0x1b, 0x52, 0x61, 0x77, 0x4e, 0x65, 0x62,
0x61, 0x74, 0x75, 0x72, 0x65, 0x22, 0x9c, 0x02, 0x0a, 0x1b, 0x52, 0x61, 0x77, 0x4e, 0x65, 0x62,
0x75, 0x6c, 0x61, 0x43, 0x65, 0x72, 0x74, 0x69, 0x66, 0x69, 0x63, 0x61, 0x74, 0x65, 0x44, 0x65,
0x74, 0x61, 0x69, 0x6c, 0x73, 0x12, 0x12, 0x0a, 0x04, 0x4e, 0x61, 0x6d, 0x65, 0x18, 0x01, 0x20,
0x01, 0x28, 0x09, 0x52, 0x04, 0x4e, 0x61, 0x6d, 0x65, 0x12, 0x10, 0x0a, 0x03, 0x49, 0x70, 0x73,
@@ -215,9 +458,43 @@ var file_cert_proto_rawDesc = []byte{
0x69, 0x63, 0x4b, 0x65, 0x79, 0x12, 0x12, 0x0a, 0x04, 0x49, 0x73, 0x43, 0x41, 0x18, 0x08, 0x20,
0x01, 0x28, 0x08, 0x52, 0x04, 0x49, 0x73, 0x43, 0x41, 0x12, 0x16, 0x0a, 0x06, 0x49, 0x73, 0x73,
0x75, 0x65, 0x72, 0x18, 0x09, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x06, 0x49, 0x73, 0x73, 0x75, 0x65,
0x72, 0x42, 0x20, 0x5a, 0x1e, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f,
0x73, 0x6c, 0x61, 0x63, 0x6b, 0x68, 0x71, 0x2f, 0x6e, 0x65, 0x62, 0x75, 0x6c, 0x61, 0x2f, 0x63,
0x65, 0x72, 0x74, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
0x72, 0x12, 0x21, 0x0a, 0x05, 0x63, 0x75, 0x72, 0x76, 0x65, 0x18, 0x64, 0x20, 0x01, 0x28, 0x0e,
0x32, 0x0b, 0x2e, 0x63, 0x65, 0x72, 0x74, 0x2e, 0x43, 0x75, 0x72, 0x76, 0x65, 0x52, 0x05, 0x63,
0x75, 0x72, 0x76, 0x65, 0x22, 0x8b, 0x01, 0x0a, 0x16, 0x52, 0x61, 0x77, 0x4e, 0x65, 0x62, 0x75,
0x6c, 0x61, 0x45, 0x6e, 0x63, 0x72, 0x79, 0x70, 0x74, 0x65, 0x64, 0x44, 0x61, 0x74, 0x61, 0x12,
0x51, 0x0a, 0x12, 0x45, 0x6e, 0x63, 0x72, 0x79, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x4d, 0x65, 0x74,
0x61, 0x64, 0x61, 0x74, 0x61, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x21, 0x2e, 0x63, 0x65,
0x72, 0x74, 0x2e, 0x52, 0x61, 0x77, 0x4e, 0x65, 0x62, 0x75, 0x6c, 0x61, 0x45, 0x6e, 0x63, 0x72,
0x79, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x4d, 0x65, 0x74, 0x61, 0x64, 0x61, 0x74, 0x61, 0x52, 0x12,
0x45, 0x6e, 0x63, 0x72, 0x79, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x4d, 0x65, 0x74, 0x61, 0x64, 0x61,
0x74, 0x61, 0x12, 0x1e, 0x0a, 0x0a, 0x43, 0x69, 0x70, 0x68, 0x65, 0x72, 0x74, 0x65, 0x78, 0x74,
0x18, 0x02, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x0a, 0x43, 0x69, 0x70, 0x68, 0x65, 0x72, 0x74, 0x65,
0x78, 0x74, 0x22, 0x9c, 0x01, 0x0a, 0x1b, 0x52, 0x61, 0x77, 0x4e, 0x65, 0x62, 0x75, 0x6c, 0x61,
0x45, 0x6e, 0x63, 0x72, 0x79, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x4d, 0x65, 0x74, 0x61, 0x64, 0x61,
0x74, 0x61, 0x12, 0x30, 0x0a, 0x13, 0x45, 0x6e, 0x63, 0x72, 0x79, 0x70, 0x74, 0x69, 0x6f, 0x6e,
0x41, 0x6c, 0x67, 0x6f, 0x72, 0x69, 0x74, 0x68, 0x6d, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52,
0x13, 0x45, 0x6e, 0x63, 0x72, 0x79, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x41, 0x6c, 0x67, 0x6f, 0x72,
0x69, 0x74, 0x68, 0x6d, 0x12, 0x4b, 0x0a, 0x10, 0x41, 0x72, 0x67, 0x6f, 0x6e, 0x32, 0x50, 0x61,
0x72, 0x61, 0x6d, 0x65, 0x74, 0x65, 0x72, 0x73, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1f,
0x2e, 0x63, 0x65, 0x72, 0x74, 0x2e, 0x52, 0x61, 0x77, 0x4e, 0x65, 0x62, 0x75, 0x6c, 0x61, 0x41,
0x72, 0x67, 0x6f, 0x6e, 0x32, 0x50, 0x61, 0x72, 0x61, 0x6d, 0x65, 0x74, 0x65, 0x72, 0x73, 0x52,
0x10, 0x41, 0x72, 0x67, 0x6f, 0x6e, 0x32, 0x50, 0x61, 0x72, 0x61, 0x6d, 0x65, 0x74, 0x65, 0x72,
0x73, 0x22, 0xa3, 0x01, 0x0a, 0x19, 0x52, 0x61, 0x77, 0x4e, 0x65, 0x62, 0x75, 0x6c, 0x61, 0x41,
0x72, 0x67, 0x6f, 0x6e, 0x32, 0x50, 0x61, 0x72, 0x61, 0x6d, 0x65, 0x74, 0x65, 0x72, 0x73, 0x12,
0x18, 0x0a, 0x07, 0x76, 0x65, 0x72, 0x73, 0x69, 0x6f, 0x6e, 0x18, 0x01, 0x20, 0x01, 0x28, 0x05,
0x52, 0x07, 0x76, 0x65, 0x72, 0x73, 0x69, 0x6f, 0x6e, 0x12, 0x16, 0x0a, 0x06, 0x6d, 0x65, 0x6d,
0x6f, 0x72, 0x79, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0d, 0x52, 0x06, 0x6d, 0x65, 0x6d, 0x6f, 0x72,
0x79, 0x12, 0x20, 0x0a, 0x0b, 0x70, 0x61, 0x72, 0x61, 0x6c, 0x6c, 0x65, 0x6c, 0x69, 0x73, 0x6d,
0x18, 0x04, 0x20, 0x01, 0x28, 0x0d, 0x52, 0x0b, 0x70, 0x61, 0x72, 0x61, 0x6c, 0x6c, 0x65, 0x6c,
0x69, 0x73, 0x6d, 0x12, 0x1e, 0x0a, 0x0a, 0x69, 0x74, 0x65, 0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e,
0x73, 0x18, 0x03, 0x20, 0x01, 0x28, 0x0d, 0x52, 0x0a, 0x69, 0x74, 0x65, 0x72, 0x61, 0x74, 0x69,
0x6f, 0x6e, 0x73, 0x12, 0x12, 0x0a, 0x04, 0x73, 0x61, 0x6c, 0x74, 0x18, 0x05, 0x20, 0x01, 0x28,
0x0c, 0x52, 0x04, 0x73, 0x61, 0x6c, 0x74, 0x2a, 0x21, 0x0a, 0x05, 0x43, 0x75, 0x72, 0x76, 0x65,
0x12, 0x0e, 0x0a, 0x0a, 0x43, 0x55, 0x52, 0x56, 0x45, 0x32, 0x35, 0x35, 0x31, 0x39, 0x10, 0x00,
0x12, 0x08, 0x0a, 0x04, 0x50, 0x32, 0x35, 0x36, 0x10, 0x01, 0x42, 0x20, 0x5a, 0x1e, 0x67, 0x69,
0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x73, 0x6c, 0x61, 0x63, 0x6b, 0x68, 0x71,
0x2f, 0x6e, 0x65, 0x62, 0x75, 0x6c, 0x61, 0x2f, 0x63, 0x65, 0x72, 0x74, 0x62, 0x06, 0x70, 0x72,
0x6f, 0x74, 0x6f, 0x33,
}
var (
@@ -232,18 +509,26 @@ func file_cert_proto_rawDescGZIP() []byte {
return file_cert_proto_rawDescData
}
var file_cert_proto_msgTypes = make([]protoimpl.MessageInfo, 2)
var file_cert_proto_enumTypes = make([]protoimpl.EnumInfo, 1)
var file_cert_proto_msgTypes = make([]protoimpl.MessageInfo, 5)
var file_cert_proto_goTypes = []interface{}{
(*RawNebulaCertificate)(nil), // 0: cert.RawNebulaCertificate
(*RawNebulaCertificateDetails)(nil), // 1: cert.RawNebulaCertificateDetails
(Curve)(0), // 0: cert.Curve
(*RawNebulaCertificate)(nil), // 1: cert.RawNebulaCertificate
(*RawNebulaCertificateDetails)(nil), // 2: cert.RawNebulaCertificateDetails
(*RawNebulaEncryptedData)(nil), // 3: cert.RawNebulaEncryptedData
(*RawNebulaEncryptionMetadata)(nil), // 4: cert.RawNebulaEncryptionMetadata
(*RawNebulaArgon2Parameters)(nil), // 5: cert.RawNebulaArgon2Parameters
}
var file_cert_proto_depIdxs = []int32{
1, // 0: cert.RawNebulaCertificate.Details:type_name -> cert.RawNebulaCertificateDetails
1, // [1:1] is the sub-list for method output_type
1, // [1:1] is the sub-list for method input_type
1, // [1:1] is the sub-list for extension type_name
1, // [1:1] is the sub-list for extension extendee
0, // [0:1] is the sub-list for field type_name
2, // 0: cert.RawNebulaCertificate.Details:type_name -> cert.RawNebulaCertificateDetails
0, // 1: cert.RawNebulaCertificateDetails.curve:type_name -> cert.Curve
4, // 2: cert.RawNebulaEncryptedData.EncryptionMetadata:type_name -> cert.RawNebulaEncryptionMetadata
5, // 3: cert.RawNebulaEncryptionMetadata.Argon2Parameters:type_name -> cert.RawNebulaArgon2Parameters
4, // [4:4] is the sub-list for method output_type
4, // [4:4] is the sub-list for method input_type
4, // [4:4] is the sub-list for extension type_name
4, // [4:4] is the sub-list for extension extendee
0, // [0:4] is the sub-list for field type_name
}
func init() { file_cert_proto_init() }
@@ -276,19 +561,56 @@ func file_cert_proto_init() {
return nil
}
}
file_cert_proto_msgTypes[2].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*RawNebulaEncryptedData); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_cert_proto_msgTypes[3].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*RawNebulaEncryptionMetadata); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_cert_proto_msgTypes[4].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*RawNebulaArgon2Parameters); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
}
type x struct{}
out := protoimpl.TypeBuilder{
File: protoimpl.DescBuilder{
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: file_cert_proto_rawDesc,
NumEnums: 0,
NumMessages: 2,
NumEnums: 1,
NumMessages: 5,
NumExtensions: 0,
NumServices: 0,
},
GoTypes: file_cert_proto_goTypes,
DependencyIndexes: file_cert_proto_depIdxs,
EnumInfos: file_cert_proto_enumTypes,
MessageInfos: file_cert_proto_msgTypes,
}.Build()
File_cert_proto = out.File

View File

@@ -5,6 +5,11 @@ option go_package = "github.com/slackhq/nebula/cert";
//import "google/protobuf/timestamp.proto";
enum Curve {
CURVE25519 = 0;
P256 = 1;
}
message RawNebulaCertificate {
RawNebulaCertificateDetails Details = 1;
bytes Signature = 2;
@@ -26,4 +31,24 @@ message RawNebulaCertificateDetails {
// sha-256 of the issuer certificate, if this field is blank the cert is self-signed
bytes Issuer = 9;
}
Curve curve = 100;
}
message RawNebulaEncryptedData {
RawNebulaEncryptionMetadata EncryptionMetadata = 1;
bytes Ciphertext = 2;
}
message RawNebulaEncryptionMetadata {
string EncryptionAlgorithm = 1;
RawNebulaArgon2Parameters Argon2Parameters = 2;
}
message RawNebulaArgon2Parameters {
int32 version = 1; // rune in Go
uint32 memory = 2;
uint32 parallelism = 4; // uint8 in Go
uint32 iterations = 3;
bytes salt = 5;
}

View File

@@ -1,6 +1,9 @@
package cert
import (
"crypto/ecdh"
"crypto/ecdsa"
"crypto/elliptic"
"crypto/rand"
"fmt"
"io"
@@ -8,11 +11,11 @@ import (
"testing"
"time"
"github.com/golang/protobuf/proto"
"github.com/slackhq/nebula/util"
"github.com/slackhq/nebula/test"
"github.com/stretchr/testify/assert"
"golang.org/x/crypto/curve25519"
"golang.org/x/crypto/ed25519"
"google.golang.org/protobuf/proto"
)
func TestMarshalingNebulaCertificate(t *testing.T) {
@@ -101,7 +104,49 @@ func TestNebulaCertificate_Sign(t *testing.T) {
pub, priv, err := ed25519.GenerateKey(rand.Reader)
assert.Nil(t, err)
assert.False(t, nc.CheckSignature(pub))
assert.Nil(t, nc.Sign(priv))
assert.Nil(t, nc.Sign(Curve_CURVE25519, priv))
assert.True(t, nc.CheckSignature(pub))
_, err = nc.Marshal()
assert.Nil(t, err)
//t.Log("Cert size:", len(b))
}
func TestNebulaCertificate_SignP256(t *testing.T) {
before := time.Now().Add(time.Second * -60).Round(time.Second)
after := time.Now().Add(time.Second * 60).Round(time.Second)
pubKey := []byte("01234567890abcedfghij1234567890ab1234567890abcedfghij1234567890ab")
nc := NebulaCertificate{
Details: NebulaCertificateDetails{
Name: "testing",
Ips: []*net.IPNet{
{IP: net.ParseIP("10.1.1.1"), Mask: net.IPMask(net.ParseIP("255.255.255.0"))},
{IP: net.ParseIP("10.1.1.2"), Mask: net.IPMask(net.ParseIP("255.255.0.0"))},
{IP: net.ParseIP("10.1.1.3"), Mask: net.IPMask(net.ParseIP("255.0.255.0"))},
},
Subnets: []*net.IPNet{
{IP: net.ParseIP("9.1.1.1"), Mask: net.IPMask(net.ParseIP("255.0.255.0"))},
{IP: net.ParseIP("9.1.1.2"), Mask: net.IPMask(net.ParseIP("255.255.255.0"))},
{IP: net.ParseIP("9.1.1.3"), Mask: net.IPMask(net.ParseIP("255.255.0.0"))},
},
Groups: []string{"test-group1", "test-group2", "test-group3"},
NotBefore: before,
NotAfter: after,
PublicKey: pubKey,
IsCA: false,
Curve: Curve_P256,
Issuer: "1234567890abcedfghij1234567890ab",
},
}
priv, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
pub := elliptic.Marshal(elliptic.P256(), priv.PublicKey.X, priv.PublicKey.Y)
rawPriv := priv.D.FillBytes(make([]byte, 32))
assert.Nil(t, err)
assert.False(t, nc.CheckSignature(pub))
assert.Nil(t, nc.Sign(Curve_P256, rawPriv))
assert.True(t, nc.CheckSignature(pub))
_, err = nc.Marshal()
@@ -153,7 +198,7 @@ func TestNebulaCertificate_MarshalJSON(t *testing.T) {
assert.Nil(t, err)
assert.Equal(
t,
"{\"details\":{\"groups\":[\"test-group1\",\"test-group2\",\"test-group3\"],\"ips\":[\"10.1.1.1/24\",\"10.1.1.2/16\",\"10.1.1.3/ff00ff00\"],\"isCa\":false,\"issuer\":\"1234567890abcedfghij1234567890ab\",\"name\":\"testing\",\"notAfter\":\"0000-11-30T02:00:00Z\",\"notBefore\":\"0000-11-30T01:00:00Z\",\"publicKey\":\"313233343536373839306162636564666768696a313233343536373839306162\",\"subnets\":[\"9.1.1.1/ff00ff00\",\"9.1.1.2/24\",\"9.1.1.3/16\"]},\"fingerprint\":\"26cb1c30ad7872c804c166b5150fa372f437aa3856b04edb4334b4470ec728e4\",\"signature\":\"313233343536373839306162636564666768696a313233343536373839306162\"}",
"{\"details\":{\"curve\":\"CURVE25519\",\"groups\":[\"test-group1\",\"test-group2\",\"test-group3\"],\"ips\":[\"10.1.1.1/24\",\"10.1.1.2/16\",\"10.1.1.3/ff00ff00\"],\"isCa\":false,\"issuer\":\"1234567890abcedfghij1234567890ab\",\"name\":\"testing\",\"notAfter\":\"0000-11-30T02:00:00Z\",\"notBefore\":\"0000-11-30T01:00:00Z\",\"publicKey\":\"313233343536373839306162636564666768696a313233343536373839306162\",\"subnets\":[\"9.1.1.1/ff00ff00\",\"9.1.1.2/24\",\"9.1.1.3/16\"]},\"fingerprint\":\"26cb1c30ad7872c804c166b5150fa372f437aa3856b04edb4334b4470ec728e4\",\"signature\":\"313233343536373839306162636564666768696a313233343536373839306162\"}",
string(b),
)
}
@@ -177,7 +222,7 @@ func TestNebulaCertificate_Verify(t *testing.T) {
v, err := c.Verify(time.Now(), caPool)
assert.False(t, v)
assert.EqualError(t, err, "certificate has been blocked")
assert.EqualError(t, err, "certificate is in the block list")
caPool.ResetCertBlocklist()
v, err = c.Verify(time.Now(), caPool)
@@ -217,6 +262,65 @@ func TestNebulaCertificate_Verify(t *testing.T) {
assert.Nil(t, err)
}
func TestNebulaCertificate_VerifyP256(t *testing.T) {
ca, _, caKey, err := newTestCaCertP256(time.Now(), time.Now().Add(10*time.Minute), []*net.IPNet{}, []*net.IPNet{}, []string{})
assert.Nil(t, err)
c, _, _, err := newTestCert(ca, caKey, time.Now(), time.Now().Add(5*time.Minute), []*net.IPNet{}, []*net.IPNet{}, []string{})
assert.Nil(t, err)
h, err := ca.Sha256Sum()
assert.Nil(t, err)
caPool := NewCAPool()
caPool.CAs[h] = ca
f, err := c.Sha256Sum()
assert.Nil(t, err)
caPool.BlocklistFingerprint(f)
v, err := c.Verify(time.Now(), caPool)
assert.False(t, v)
assert.EqualError(t, err, "certificate is in the block list")
caPool.ResetCertBlocklist()
v, err = c.Verify(time.Now(), caPool)
assert.True(t, v)
assert.Nil(t, err)
v, err = c.Verify(time.Now().Add(time.Hour*1000), caPool)
assert.False(t, v)
assert.EqualError(t, err, "root certificate is expired")
c, _, _, err = newTestCert(ca, caKey, time.Time{}, time.Time{}, []*net.IPNet{}, []*net.IPNet{}, []string{})
assert.Nil(t, err)
v, err = c.Verify(time.Now().Add(time.Minute*6), caPool)
assert.False(t, v)
assert.EqualError(t, err, "certificate is expired")
// Test group assertion
ca, _, caKey, err = newTestCaCertP256(time.Now(), time.Now().Add(10*time.Minute), []*net.IPNet{}, []*net.IPNet{}, []string{"test1", "test2"})
assert.Nil(t, err)
caPem, err := ca.MarshalToPEM()
assert.Nil(t, err)
caPool = NewCAPool()
caPool.AddCACertificate(caPem)
c, _, _, err = newTestCert(ca, caKey, time.Now(), time.Now().Add(5*time.Minute), []*net.IPNet{}, []*net.IPNet{}, []string{"test1", "bad"})
assert.Nil(t, err)
v, err = c.Verify(time.Now(), caPool)
assert.False(t, v)
assert.EqualError(t, err, "certificate contained a group not present on the signing ca: bad")
c, _, _, err = newTestCert(ca, caKey, time.Now(), time.Now().Add(5*time.Minute), []*net.IPNet{}, []*net.IPNet{}, []string{"test1"})
assert.Nil(t, err)
v, err = c.Verify(time.Now(), caPool)
assert.True(t, v)
assert.Nil(t, err)
}
func TestNebulaCertificate_Verify_IPs(t *testing.T) {
_, caIp1, _ := net.ParseCIDR("10.0.0.0/16")
_, caIp2, _ := net.ParseCIDR("192.168.0.0/24")
@@ -378,20 +482,40 @@ func TestNebulaCertificate_Verify_Subnets(t *testing.T) {
func TestNebulaCertificate_VerifyPrivateKey(t *testing.T) {
ca, _, caKey, err := newTestCaCert(time.Time{}, time.Time{}, []*net.IPNet{}, []*net.IPNet{}, []string{})
assert.Nil(t, err)
err = ca.VerifyPrivateKey(caKey)
err = ca.VerifyPrivateKey(Curve_CURVE25519, caKey)
assert.Nil(t, err)
_, _, caKey2, err := newTestCaCert(time.Time{}, time.Time{}, []*net.IPNet{}, []*net.IPNet{}, []string{})
assert.Nil(t, err)
err = ca.VerifyPrivateKey(caKey2)
err = ca.VerifyPrivateKey(Curve_CURVE25519, caKey2)
assert.NotNil(t, err)
c, _, priv, err := newTestCert(ca, caKey, time.Time{}, time.Time{}, []*net.IPNet{}, []*net.IPNet{}, []string{})
err = c.VerifyPrivateKey(priv)
err = c.VerifyPrivateKey(Curve_CURVE25519, priv)
assert.Nil(t, err)
_, priv2 := x25519Keypair()
err = c.VerifyPrivateKey(priv2)
err = c.VerifyPrivateKey(Curve_CURVE25519, priv2)
assert.NotNil(t, err)
}
func TestNebulaCertificate_VerifyPrivateKeyP256(t *testing.T) {
ca, _, caKey, err := newTestCaCertP256(time.Time{}, time.Time{}, []*net.IPNet{}, []*net.IPNet{}, []string{})
assert.Nil(t, err)
err = ca.VerifyPrivateKey(Curve_P256, caKey)
assert.Nil(t, err)
_, _, caKey2, err := newTestCaCertP256(time.Time{}, time.Time{}, []*net.IPNet{}, []*net.IPNet{}, []string{})
assert.Nil(t, err)
err = ca.VerifyPrivateKey(Curve_P256, caKey2)
assert.NotNil(t, err)
c, _, priv, err := newTestCert(ca, caKey, time.Time{}, time.Time{}, []*net.IPNet{}, []*net.IPNet{}, []string{})
err = c.VerifyPrivateKey(Curve_P256, priv)
assert.Nil(t, err)
_, priv2 := p256Keypair()
err = c.VerifyPrivateKey(Curve_P256, priv2)
assert.NotNil(t, err)
}
@@ -429,6 +553,25 @@ BVG+oJpAoqokUBbI4U0N8CSfpUABEkB/Pm5A2xyH/nc8mg/wvGUWG3pZ7nHzaDMf
8/phAUt+FLzqTECzQKisYswKvE3pl9mbEYKbOdIHrxdIp95mo4sF
-----END NEBULA CERTIFICATE-----
`
expired := `
# expired certificate
-----BEGIN NEBULA CERTIFICATE-----
CjkKB2V4cGlyZWQouPmWjQYwufmWjQY6ILCRaoCkJlqHgv5jfDN4lzLHBvDzaQm4
vZxfu144hmgjQAESQG4qlnZi8DncvD/LDZnLgJHOaX1DWCHHEh59epVsC+BNgTie
WH1M9n4O7cFtGlM6sJJOS+rCVVEJ3ABS7+MPdQs=
-----END NEBULA CERTIFICATE-----
`
p256 := `
# p256 certificate
-----BEGIN NEBULA CERTIFICATE-----
CmYKEG5lYnVsYSBQMjU2IHRlc3Qo4s+7mgYw4tXrsAc6QQRkaW2jFmllYvN4+/k2
6tctO9sPT3jOx8ES6M1nIqOhpTmZeabF/4rELDqPV4aH5jfJut798DUXql0FlF8H
76gvQAGgBgESRzBFAiEAib0/te6eMiZOKD8gdDeloMTS0wGuX2t0C7TFdUhAQzgC
IBNWYMep3ysx9zCgknfG5dKtwGTaqF++BWKDYdyl34KX
-----END NEBULA CERTIFICATE-----
`
rootCA := NebulaCertificate{
@@ -443,6 +586,12 @@ BVG+oJpAoqokUBbI4U0N8CSfpUABEkB/Pm5A2xyH/nc8mg/wvGUWG3pZ7nHzaDMf
},
}
rootCAP256 := NebulaCertificate{
Details: NebulaCertificateDetails{
Name: "nebula P256 test",
},
}
p, err := NewCAPoolFromBytes([]byte(noNewLines))
assert.Nil(t, err)
assert.Equal(t, p.CAs[string("c9bfaf7ce8e84b2eeda2e27b469f4b9617bde192efd214b68891ecda6ed49522")].Details.Name, rootCA.Details.Name)
@@ -452,6 +601,24 @@ BVG+oJpAoqokUBbI4U0N8CSfpUABEkB/Pm5A2xyH/nc8mg/wvGUWG3pZ7nHzaDMf
assert.Nil(t, err)
assert.Equal(t, pp.CAs[string("c9bfaf7ce8e84b2eeda2e27b469f4b9617bde192efd214b68891ecda6ed49522")].Details.Name, rootCA.Details.Name)
assert.Equal(t, pp.CAs[string("5c9c3f23e7ee7fe97637cbd3a0a5b854154d1d9aaaf7b566a51f4a88f76b64cd")].Details.Name, rootCA01.Details.Name)
// expired cert, no valid certs
ppp, err := NewCAPoolFromBytes([]byte(expired))
assert.Equal(t, ErrExpired, err)
assert.Equal(t, ppp.CAs[string("152070be6bb19bc9e3bde4c2f0e7d8f4ff5448b4c9856b8eccb314fade0229b0")].Details.Name, "expired")
// expired cert, with valid certs
pppp, err := NewCAPoolFromBytes(append([]byte(expired), noNewLines...))
assert.Equal(t, ErrExpired, err)
assert.Equal(t, pppp.CAs[string("c9bfaf7ce8e84b2eeda2e27b469f4b9617bde192efd214b68891ecda6ed49522")].Details.Name, rootCA.Details.Name)
assert.Equal(t, pppp.CAs[string("5c9c3f23e7ee7fe97637cbd3a0a5b854154d1d9aaaf7b566a51f4a88f76b64cd")].Details.Name, rootCA01.Details.Name)
assert.Equal(t, pppp.CAs[string("152070be6bb19bc9e3bde4c2f0e7d8f4ff5448b4c9856b8eccb314fade0229b0")].Details.Name, "expired")
assert.Equal(t, len(pppp.CAs), 3)
ppppp, err := NewCAPoolFromBytes([]byte(p256))
assert.Nil(t, err)
assert.Equal(t, ppppp.CAs[string("a7938893ec8c4ef769b06d7f425e5e46f7a7f5ffa49c3bcf4a86b608caba9159")].Details.Name, rootCAP256.Details.Name)
assert.Equal(t, len(ppppp.CAs), 1)
}
func appendByteSlices(b ...[]byte) []byte {
@@ -507,11 +674,16 @@ bzBEr00kERQxxTzTsH8cpYEgRoipvmExvg8WP8NdAJEYJosB
assert.EqualError(t, err, "input did not contain a valid PEM encoded block")
}
func TestUnmarshalEd25519PrivateKey(t *testing.T) {
func TestUnmarshalSigningPrivateKey(t *testing.T) {
privKey := []byte(`# A good key
-----BEGIN NEBULA ED25519 PRIVATE KEY-----
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==
-----END NEBULA ED25519 PRIVATE KEY-----
`)
privP256Key := []byte(`# A good key
-----BEGIN NEBULA ECDSA P256 PRIVATE KEY-----
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
-----END NEBULA ECDSA P256 PRIVATE KEY-----
`)
shortKey := []byte(`# A short key
-----BEGIN NEBULA ED25519 PRIVATE KEY-----
@@ -528,39 +700,139 @@ AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==
-END NEBULA ED25519 PRIVATE KEY-----`)
keyBundle := appendByteSlices(privKey, shortKey, invalidBanner, invalidPem)
keyBundle := appendByteSlices(privKey, privP256Key, shortKey, invalidBanner, invalidPem)
// Success test case
k, rest, err := UnmarshalEd25519PrivateKey(keyBundle)
k, rest, curve, err := UnmarshalSigningPrivateKey(keyBundle)
assert.Len(t, k, 64)
assert.Equal(t, rest, appendByteSlices(privP256Key, shortKey, invalidBanner, invalidPem))
assert.Equal(t, Curve_CURVE25519, curve)
assert.Nil(t, err)
// Success test case
k, rest, curve, err = UnmarshalSigningPrivateKey(rest)
assert.Len(t, k, 32)
assert.Equal(t, rest, appendByteSlices(shortKey, invalidBanner, invalidPem))
assert.Equal(t, Curve_P256, curve)
assert.Nil(t, err)
// Fail due to short key
k, rest, err = UnmarshalEd25519PrivateKey(rest)
k, rest, curve, err = UnmarshalSigningPrivateKey(rest)
assert.Nil(t, k)
assert.Equal(t, rest, appendByteSlices(invalidBanner, invalidPem))
assert.EqualError(t, err, "key was not 64 bytes, is invalid ed25519 private key")
assert.EqualError(t, err, "key was not 64 bytes, is invalid Ed25519 private key")
// Fail due to invalid banner
k, rest, err = UnmarshalEd25519PrivateKey(rest)
k, rest, curve, err = UnmarshalSigningPrivateKey(rest)
assert.Nil(t, k)
assert.Equal(t, rest, invalidPem)
assert.EqualError(t, err, "bytes did not contain a proper nebula Ed25519 private key banner")
assert.EqualError(t, err, "bytes did not contain a proper nebula Ed25519/ECDSA private key banner")
// Fail due to invalid PEM format, because
// it's missing the requisite pre-encapsulation boundary.
k, rest, err = UnmarshalEd25519PrivateKey(rest)
k, rest, curve, err = UnmarshalSigningPrivateKey(rest)
assert.Nil(t, k)
assert.Equal(t, rest, invalidPem)
assert.EqualError(t, err, "input did not contain a valid PEM encoded block")
}
func TestUnmarshalX25519PrivateKey(t *testing.T) {
func TestDecryptAndUnmarshalSigningPrivateKey(t *testing.T) {
passphrase := []byte("DO NOT USE THIS KEY")
privKey := []byte(`# A good key
-----BEGIN NEBULA ED25519 ENCRYPTED PRIVATE KEY-----
CjwKC0FFUy0yNTYtR0NNEi0IExCAgIABGAEgBCognnjujd67Vsv99p22wfAjQaDT
oCMW1mdjkU3gACKNW4MSXOWR9Sts4C81yk1RUku2gvGKs3TB9LYoklLsIizSYOLl
+Vs//O1T0I1Xbml2XBAROsb/VSoDln/6LMqR4B6fn6B3GOsLBBqRI8daDl9lRMPB
qrlJ69wer3ZUHFXA
-----END NEBULA ED25519 ENCRYPTED PRIVATE KEY-----
`)
shortKey := []byte(`# A key which, once decrypted, is too short
-----BEGIN NEBULA ED25519 ENCRYPTED PRIVATE KEY-----
CjwKC0FFUy0yNTYtR0NNEi0IExCAgIABGAEgBCoga5h8owMEBWRSMMJKzuUvWce7
k0qlBkQmCxiuLh80MuASW70YcKt8jeEIS2axo2V6zAKA9TSMcCsJW1kDDXEtL/xe
GLF5T7sDl5COp4LU3pGxpV+KoeQ/S3gQCAAcnaOtnJQX+aSDnbO3jCHyP7U9CHbs
rQr3bdH3Oy/WiYU=
-----END NEBULA ED25519 ENCRYPTED PRIVATE KEY-----
`)
invalidBanner := []byte(`# Invalid banner (not encrypted)
-----BEGIN NEBULA ED25519 PRIVATE KEY-----
bWRp2CTVFhW9HD/qCd28ltDgK3w8VXSeaEYczDWos8sMUBqDb9jP3+NYwcS4lURG
XgLvodMXZJuaFPssp+WwtA==
-----END NEBULA ED25519 PRIVATE KEY-----
`)
invalidPem := []byte(`# Not a valid PEM format
-BEGIN NEBULA ED25519 ENCRYPTED PRIVATE KEY-----
CjwKC0FFUy0yNTYtR0NNEi0IExCAgIABGAEgBCognnjujd67Vsv99p22wfAjQaDT
oCMW1mdjkU3gACKNW4MSXOWR9Sts4C81yk1RUku2gvGKs3TB9LYoklLsIizSYOLl
+Vs//O1T0I1Xbml2XBAROsb/VSoDln/6LMqR4B6fn6B3GOsLBBqRI8daDl9lRMPB
qrlJ69wer3ZUHFXA
-END NEBULA ED25519 ENCRYPTED PRIVATE KEY-----
`)
keyBundle := appendByteSlices(privKey, shortKey, invalidBanner, invalidPem)
// Success test case
curve, k, rest, err := DecryptAndUnmarshalSigningPrivateKey(passphrase, keyBundle)
assert.Nil(t, err)
assert.Equal(t, Curve_CURVE25519, curve)
assert.Len(t, k, 64)
assert.Equal(t, rest, appendByteSlices(shortKey, invalidBanner, invalidPem))
// Fail due to short key
curve, k, rest, err = DecryptAndUnmarshalSigningPrivateKey(passphrase, rest)
assert.EqualError(t, err, "key was not 64 bytes, is invalid ed25519 private key")
assert.Nil(t, k)
assert.Equal(t, rest, appendByteSlices(invalidBanner, invalidPem))
// Fail due to invalid banner
curve, k, rest, err = DecryptAndUnmarshalSigningPrivateKey(passphrase, rest)
assert.EqualError(t, err, "bytes did not contain a proper nebula encrypted Ed25519/ECDSA private key banner")
assert.Nil(t, k)
assert.Equal(t, rest, invalidPem)
// Fail due to invalid PEM format, because
// it's missing the requisite pre-encapsulation boundary.
curve, k, rest, err = DecryptAndUnmarshalSigningPrivateKey(passphrase, rest)
assert.EqualError(t, err, "input did not contain a valid PEM encoded block")
assert.Nil(t, k)
assert.Equal(t, rest, invalidPem)
// Fail due to invalid passphrase
curve, k, rest, err = DecryptAndUnmarshalSigningPrivateKey([]byte("invalid passphrase"), privKey)
assert.EqualError(t, err, "invalid passphrase or corrupt private key")
assert.Nil(t, k)
assert.Equal(t, rest, []byte{})
}
func TestEncryptAndMarshalSigningPrivateKey(t *testing.T) {
// Having proved that decryption works correctly above, we can test that the
// encryption function produces a value which can be decrypted
passphrase := []byte("passphrase")
bytes := []byte("AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA")
kdfParams := NewArgon2Parameters(64*1024, 4, 3)
key, err := EncryptAndMarshalSigningPrivateKey(Curve_CURVE25519, bytes, passphrase, kdfParams)
assert.Nil(t, err)
// Verify the "key" can be decrypted successfully
curve, k, rest, err := DecryptAndUnmarshalSigningPrivateKey(passphrase, key)
assert.Len(t, k, 64)
assert.Equal(t, Curve_CURVE25519, curve)
assert.Equal(t, rest, []byte{})
assert.Nil(t, err)
// EncryptAndMarshalSigningPrivateKey does not create any errors itself
}
func TestUnmarshalPrivateKey(t *testing.T) {
privKey := []byte(`# A good key
-----BEGIN NEBULA X25519 PRIVATE KEY-----
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
-----END NEBULA X25519 PRIVATE KEY-----
`)
privP256Key := []byte(`# A good key
-----BEGIN NEBULA P256 PRIVATE KEY-----
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
-----END NEBULA P256 PRIVATE KEY-----
`)
shortKey := []byte(`# A short key
-----BEGIN NEBULA X25519 PRIVATE KEY-----
@@ -577,29 +849,37 @@ AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
-END NEBULA X25519 PRIVATE KEY-----`)
keyBundle := appendByteSlices(privKey, shortKey, invalidBanner, invalidPem)
keyBundle := appendByteSlices(privKey, privP256Key, shortKey, invalidBanner, invalidPem)
// Success test case
k, rest, err := UnmarshalX25519PrivateKey(keyBundle)
k, rest, curve, err := UnmarshalPrivateKey(keyBundle)
assert.Len(t, k, 32)
assert.Equal(t, rest, appendByteSlices(privP256Key, shortKey, invalidBanner, invalidPem))
assert.Equal(t, Curve_CURVE25519, curve)
assert.Nil(t, err)
// Success test case
k, rest, curve, err = UnmarshalPrivateKey(rest)
assert.Len(t, k, 32)
assert.Equal(t, rest, appendByteSlices(shortKey, invalidBanner, invalidPem))
assert.Equal(t, Curve_P256, curve)
assert.Nil(t, err)
// Fail due to short key
k, rest, err = UnmarshalX25519PrivateKey(rest)
k, rest, curve, err = UnmarshalPrivateKey(rest)
assert.Nil(t, k)
assert.Equal(t, rest, appendByteSlices(invalidBanner, invalidPem))
assert.EqualError(t, err, "key was not 32 bytes, is invalid X25519 private key")
assert.EqualError(t, err, "key was not 32 bytes, is invalid CURVE25519 private key")
// Fail due to invalid banner
k, rest, err = UnmarshalX25519PrivateKey(rest)
k, rest, curve, err = UnmarshalPrivateKey(rest)
assert.Nil(t, k)
assert.Equal(t, rest, invalidPem)
assert.EqualError(t, err, "bytes did not contain a proper nebula X25519 private key banner")
assert.EqualError(t, err, "bytes did not contain a proper nebula private key banner")
// Fail due to invalid PEM format, because
// it's missing the requisite pre-encapsulation boundary.
k, rest, err = UnmarshalX25519PrivateKey(rest)
k, rest, curve, err = UnmarshalPrivateKey(rest)
assert.Nil(t, k)
assert.Equal(t, rest, invalidPem)
assert.EqualError(t, err, "input did not contain a valid PEM encoded block")
@@ -659,6 +939,12 @@ func TestUnmarshalX25519PublicKey(t *testing.T) {
-----BEGIN NEBULA X25519 PUBLIC KEY-----
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
-----END NEBULA X25519 PUBLIC KEY-----
`)
pubP256Key := []byte(`# A good key
-----BEGIN NEBULA P256 PUBLIC KEY-----
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAA=
-----END NEBULA P256 PUBLIC KEY-----
`)
shortKey := []byte(`# A short key
-----BEGIN NEBULA X25519 PUBLIC KEY-----
@@ -675,29 +961,37 @@ AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
-END NEBULA X25519 PUBLIC KEY-----`)
keyBundle := appendByteSlices(pubKey, shortKey, invalidBanner, invalidPem)
keyBundle := appendByteSlices(pubKey, pubP256Key, shortKey, invalidBanner, invalidPem)
// Success test case
k, rest, err := UnmarshalX25519PublicKey(keyBundle)
k, rest, curve, err := UnmarshalPublicKey(keyBundle)
assert.Equal(t, len(k), 32)
assert.Nil(t, err)
assert.Equal(t, rest, appendByteSlices(pubP256Key, shortKey, invalidBanner, invalidPem))
assert.Equal(t, Curve_CURVE25519, curve)
// Success test case
k, rest, curve, err = UnmarshalPublicKey(rest)
assert.Equal(t, len(k), 65)
assert.Nil(t, err)
assert.Equal(t, rest, appendByteSlices(shortKey, invalidBanner, invalidPem))
assert.Equal(t, Curve_P256, curve)
// Fail due to short key
k, rest, err = UnmarshalX25519PublicKey(rest)
k, rest, curve, err = UnmarshalPublicKey(rest)
assert.Nil(t, k)
assert.Equal(t, rest, appendByteSlices(invalidBanner, invalidPem))
assert.EqualError(t, err, "key was not 32 bytes, is invalid X25519 public key")
assert.EqualError(t, err, "key was not 32 bytes, is invalid CURVE25519 public key")
// Fail due to invalid banner
k, rest, err = UnmarshalX25519PublicKey(rest)
k, rest, curve, err = UnmarshalPublicKey(rest)
assert.Nil(t, k)
assert.EqualError(t, err, "bytes did not contain a proper nebula X25519 public key banner")
assert.EqualError(t, err, "bytes did not contain a proper nebula public key banner")
assert.Equal(t, rest, invalidPem)
// Fail due to invalid PEM format, because
// it's missing the requisite pre-encapsulation boundary.
k, rest, err = UnmarshalX25519PublicKey(rest)
k, rest, curve, err = UnmarshalPublicKey(rest)
assert.Nil(t, k)
assert.Equal(t, rest, invalidPem)
assert.EqualError(t, err, "input did not contain a valid PEM encoded block")
@@ -752,7 +1046,7 @@ func TestNebulaCertificate_Copy(t *testing.T) {
assert.Nil(t, err)
cc := c.Copy()
util.AssertDeepCopyEqual(t, c, cc)
test.AssertDeepCopyEqual(t, c, cc)
}
func TestUnmarshalNebulaCertificate(t *testing.T) {
@@ -794,13 +1088,56 @@ func newTestCaCert(before, after time.Time, ips, subnets []*net.IPNet, groups []
nc.Details.Groups = groups
}
err = nc.Sign(priv)
err = nc.Sign(Curve_CURVE25519, priv)
if err != nil {
return nil, nil, nil, err
}
return nc, pub, priv, nil
}
func newTestCaCertP256(before, after time.Time, ips, subnets []*net.IPNet, groups []string) (*NebulaCertificate, []byte, []byte, error) {
priv, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
pub := elliptic.Marshal(elliptic.P256(), priv.PublicKey.X, priv.PublicKey.Y)
rawPriv := priv.D.FillBytes(make([]byte, 32))
if before.IsZero() {
before = time.Now().Add(time.Second * -60).Round(time.Second)
}
if after.IsZero() {
after = time.Now().Add(time.Second * 60).Round(time.Second)
}
nc := &NebulaCertificate{
Details: NebulaCertificateDetails{
Name: "test ca",
NotBefore: time.Unix(before.Unix(), 0),
NotAfter: time.Unix(after.Unix(), 0),
PublicKey: pub,
IsCA: true,
Curve: Curve_P256,
InvertedGroups: make(map[string]struct{}),
},
}
if len(ips) > 0 {
nc.Details.Ips = ips
}
if len(subnets) > 0 {
nc.Details.Subnets = subnets
}
if len(groups) > 0 {
nc.Details.Groups = groups
}
err = nc.Sign(Curve_P256, rawPriv)
if err != nil {
return nil, nil, nil, err
}
return nc, pub, rawPriv, nil
}
func newTestCert(ca *NebulaCertificate, key []byte, before, after time.Time, ips, subnets []*net.IPNet, groups []string) (*NebulaCertificate, []byte, []byte, error) {
issuer, err := ca.Sha256Sum()
if err != nil {
@@ -834,7 +1171,16 @@ func newTestCert(ca *NebulaCertificate, key []byte, before, after time.Time, ips
}
}
pub, rawPriv := x25519Keypair()
var pub, rawPriv []byte
switch ca.Details.Curve {
case Curve_CURVE25519:
pub, rawPriv = x25519Keypair()
case Curve_P256:
pub, rawPriv = p256Keypair()
default:
return nil, nil, nil, fmt.Errorf("unknown curve: %v", ca.Details.Curve)
}
nc := &NebulaCertificate{
Details: NebulaCertificateDetails{
@@ -846,12 +1192,13 @@ func newTestCert(ca *NebulaCertificate, key []byte, before, after time.Time, ips
NotAfter: time.Unix(after.Unix(), 0),
PublicKey: pub,
IsCA: false,
Curve: ca.Details.Curve,
Issuer: issuer,
InvertedGroups: make(map[string]struct{}),
},
}
err = nc.Sign(key)
err = nc.Sign(ca.Details.Curve, key)
if err != nil {
return nil, nil, nil, err
}
@@ -872,3 +1219,12 @@ func x25519Keypair() ([]byte, []byte) {
return pubkey, privkey
}
func p256Keypair() ([]byte, []byte) {
privkey, err := ecdh.P256().GenerateKey(rand.Reader)
if err != nil {
panic(err)
}
pubkey := privkey.PublicKey()
return pubkey.Bytes(), privkey.Bytes()
}

143
cert/crypto.go Normal file

@@ -0,0 +1,143 @@
package cert
import (
"crypto/aes"
"crypto/cipher"
"crypto/rand"
"fmt"
"io"
"golang.org/x/crypto/argon2"
)
// KDF factors
type Argon2Parameters struct {
version rune
Memory uint32 // KiB
Parallelism uint8
Iterations uint32
salt []byte
}
// Returns a new Argon2Parameters object with current version set
func NewArgon2Parameters(memory uint32, parallelism uint8, iterations uint32) *Argon2Parameters {
return &Argon2Parameters{
version: argon2.Version,
Memory: memory, // KiB
Parallelism: parallelism,
Iterations: iterations,
}
}
// Encrypts data using AES-256-GCM and the Argon2id key derivation function
func aes256Encrypt(passphrase []byte, kdfParams *Argon2Parameters, data []byte) ([]byte, error) {
key, err := aes256DeriveKey(passphrase, kdfParams)
if err != nil {
return nil, err
}
// this should never happen, but since this dictates how our calls into the
// aes package behave and could be catastrophic, let's sanity check this
if len(key) != 32 {
return nil, fmt.Errorf("invalid AES-256 key length (%d) - cowardly refusing to encrypt", len(key))
}
block, err := aes.NewCipher(key)
if err != nil {
return nil, err
}
gcm, err := cipher.NewGCM(block)
if err != nil {
return nil, err
}
nonce := make([]byte, gcm.NonceSize())
if _, err := io.ReadFull(rand.Reader, nonce); err != nil {
return nil, err
}
ciphertext := gcm.Seal(nil, nonce, data, nil)
blob := joinNonceCiphertext(nonce, ciphertext)
return blob, nil
}
// Decrypts data using AES-256-GCM and the Argon2id key derivation function
// Expects the data to be the GCM nonce followed by the ciphertext
func aes256Decrypt(passphrase []byte, kdfParams *Argon2Parameters, data []byte) ([]byte, error) {
key, err := aes256DeriveKey(passphrase, kdfParams)
if err != nil {
return nil, err
}
block, err := aes.NewCipher(key)
if err != nil {
return nil, err
}
gcm, err := cipher.NewGCM(block)
if err != nil {
return nil, err
}
nonce, ciphertext, err := splitNonceCiphertext(data, gcm.NonceSize())
if err != nil {
return nil, err
}
plaintext, err := gcm.Open(nil, nonce, ciphertext, nil)
if err != nil {
return nil, fmt.Errorf("invalid passphrase or corrupt private key")
}
return plaintext, nil
}
func aes256DeriveKey(passphrase []byte, params *Argon2Parameters) ([]byte, error) {
if params.salt == nil {
params.salt = make([]byte, 32)
if _, err := rand.Read(params.salt); err != nil {
return nil, err
}
}
// keySize of 32 bytes will result in AES-256 encryption
key, err := deriveKey(passphrase, 32, params)
if err != nil {
return nil, err
}
return key, nil
}
// Derives a key from a passphrase using Argon2id
func deriveKey(passphrase []byte, keySize uint32, params *Argon2Parameters) ([]byte, error) {
if params.version != argon2.Version {
return nil, fmt.Errorf("incompatible Argon2 version: %d", params.version)
}
if params.salt == nil {
return nil, fmt.Errorf("salt must be set in argon2Parameters")
} else if len(params.salt) < 16 {
return nil, fmt.Errorf("salt must be at least 128 bits")
}
key := argon2.IDKey(passphrase, params.salt, params.Iterations, params.Memory, params.Parallelism, keySize)
return key, nil
}
// Prepends nonce to ciphertext
func joinNonceCiphertext(nonce []byte, ciphertext []byte) []byte {
return append(nonce, ciphertext...)
}
// Splits nonce from ciphertext
func splitNonceCiphertext(blob []byte, nonceSize int) ([]byte, []byte, error) {
if len(blob) <= nonceSize {
return nil, nil, fmt.Errorf("invalid ciphertext blob - blob shorter than nonce length")
}
return blob[:nonceSize], blob[nonceSize:], nil
}
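The helpers above are unexported, so the following is a minimal sketch (not part of the diff, and assuming it lives inside the cert package) of how they compose: NewArgon2Parameters supplies the KDF factors, aes256Encrypt fills in a random salt and prepends the GCM nonce, and aes256Decrypt reverses the process with the same parameters value.

package cert

import "fmt"

// cryptoRoundTrip is a hypothetical illustration, not code from this change.
func cryptoRoundTrip() error {
	passphrase := []byte("example passphrase")
	params := NewArgon2Parameters(64*1024, 4, 3)

	// aes256Encrypt populates params.salt and prepends the GCM nonce to the ciphertext.
	blob, err := aes256Encrypt(passphrase, params, []byte("secret bytes"))
	if err != nil {
		return err
	}

	// The same params value, now carrying the salt, derives the identical key.
	plain, err := aes256Decrypt(passphrase, params, blob)
	if err != nil {
		return err
	}

	fmt.Printf("recovered %q\n", plain)
	return nil
}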

25
cert/crypto_test.go Normal file

@@ -0,0 +1,25 @@
package cert
import (
"testing"
"github.com/stretchr/testify/assert"
"golang.org/x/crypto/argon2"
)
func TestNewArgon2Parameters(t *testing.T) {
p := NewArgon2Parameters(64*1024, 4, 3)
assert.EqualValues(t, &Argon2Parameters{
version: argon2.Version,
Memory: 64 * 1024,
Parallelism: 4,
Iterations: 3,
}, p)
p = NewArgon2Parameters(2*1024*1024, 2, 1)
assert.EqualValues(t, &Argon2Parameters{
version: argon2.Version,
Memory: 2 * 1024 * 1024,
Parallelism: 2,
Iterations: 1,
}, p)
}

14
cert/errors.go Normal file

@@ -0,0 +1,14 @@
package cert
import (
"errors"
)
var (
ErrRootExpired = errors.New("root certificate is expired")
ErrExpired = errors.New("certificate is expired")
ErrNotCA = errors.New("certificate is not a CA")
ErrNotSelfSigned = errors.New("certificate is not self-signed")
ErrBlockListed = errors.New("certificate is in the block list")
ErrSignatureMismatch = errors.New("certificate signature did not match")
)


@@ -1,10 +0,0 @@
package cidr
import "net"
// Parse is a convenience function that returns only the IPNet
// This function ignores errors since it is primarily a test helper, the result could be nil
func Parse(s string) *net.IPNet {
_, c, _ := net.ParseCIDR(s)
return c
}


@@ -1,145 +0,0 @@
package cidr
import (
"net"
"github.com/slackhq/nebula/iputil"
)
type Node struct {
left *Node
right *Node
parent *Node
value interface{}
}
type Tree4 struct {
root *Node
}
const (
startbit = iputil.VpnIp(0x80000000)
)
func NewTree4() *Tree4 {
tree := new(Tree4)
tree.root = &Node{}
return tree
}
func (tree *Tree4) AddCIDR(cidr *net.IPNet, val interface{}) {
bit := startbit
node := tree.root
next := tree.root
ip := iputil.Ip2VpnIp(cidr.IP)
mask := iputil.Ip2VpnIp(cidr.Mask)
// Find our last ancestor in the tree
for bit&mask != 0 {
if ip&bit != 0 {
next = node.right
} else {
next = node.left
}
if next == nil {
break
}
bit = bit >> 1
node = next
}
// We already have this range so update the value
if next != nil {
node.value = val
return
}
// Build up the rest of the tree we don't already have
for bit&mask != 0 {
next = &Node{}
next.parent = node
if ip&bit != 0 {
node.right = next
} else {
node.left = next
}
bit >>= 1
node = next
}
// Final node marks our cidr, set the value
node.value = val
}
// Finds the first match, which may be the least specific
func (tree *Tree4) Contains(ip iputil.VpnIp) (value interface{}) {
bit := startbit
node := tree.root
for node != nil {
if node.value != nil {
return node.value
}
if ip&bit != 0 {
node = node.right
} else {
node = node.left
}
bit >>= 1
}
return value
}
// Finds the most specific match
func (tree *Tree4) MostSpecificContains(ip iputil.VpnIp) (value interface{}) {
bit := startbit
node := tree.root
for node != nil {
if node.value != nil {
value = node.value
}
if ip&bit != 0 {
node = node.right
} else {
node = node.left
}
bit >>= 1
}
return value
}
// Finds the most specific match
func (tree *Tree4) Match(ip iputil.VpnIp) (value interface{}) {
bit := startbit
node := tree.root
lastNode := node
for node != nil {
lastNode = node
if ip&bit != 0 {
node = node.right
} else {
node = node.left
}
bit >>= 1
}
if bit == 0 && lastNode != nil {
value = lastNode.value
}
return value
}


@@ -1,153 +0,0 @@
package cidr
import (
"net"
"testing"
"github.com/slackhq/nebula/iputil"
"github.com/stretchr/testify/assert"
)
func TestCIDRTree_Contains(t *testing.T) {
tree := NewTree4()
tree.AddCIDR(Parse("1.0.0.0/8"), "1")
tree.AddCIDR(Parse("2.1.0.0/16"), "2")
tree.AddCIDR(Parse("3.1.1.0/24"), "3")
tree.AddCIDR(Parse("4.1.1.0/24"), "4a")
tree.AddCIDR(Parse("4.1.1.1/32"), "4b")
tree.AddCIDR(Parse("4.1.2.1/32"), "4c")
tree.AddCIDR(Parse("254.0.0.0/4"), "5")
tests := []struct {
Result interface{}
IP string
}{
{"1", "1.0.0.0"},
{"1", "1.255.255.255"},
{"2", "2.1.0.0"},
{"2", "2.1.255.255"},
{"3", "3.1.1.0"},
{"3", "3.1.1.255"},
{"4a", "4.1.1.255"},
{"4a", "4.1.1.1"},
{"5", "240.0.0.0"},
{"5", "255.255.255.255"},
{nil, "239.0.0.0"},
{nil, "4.1.2.2"},
}
for _, tt := range tests {
assert.Equal(t, tt.Result, tree.Contains(iputil.Ip2VpnIp(net.ParseIP(tt.IP))))
}
tree = NewTree4()
tree.AddCIDR(Parse("1.1.1.1/0"), "cool")
assert.Equal(t, "cool", tree.Contains(iputil.Ip2VpnIp(net.ParseIP("0.0.0.0"))))
assert.Equal(t, "cool", tree.Contains(iputil.Ip2VpnIp(net.ParseIP("255.255.255.255"))))
}
func TestCIDRTree_MostSpecificContains(t *testing.T) {
tree := NewTree4()
tree.AddCIDR(Parse("1.0.0.0/8"), "1")
tree.AddCIDR(Parse("2.1.0.0/16"), "2")
tree.AddCIDR(Parse("3.1.1.0/24"), "3")
tree.AddCIDR(Parse("4.1.1.0/24"), "4a")
tree.AddCIDR(Parse("4.1.1.0/30"), "4b")
tree.AddCIDR(Parse("4.1.1.1/32"), "4c")
tree.AddCIDR(Parse("254.0.0.0/4"), "5")
tests := []struct {
Result interface{}
IP string
}{
{"1", "1.0.0.0"},
{"1", "1.255.255.255"},
{"2", "2.1.0.0"},
{"2", "2.1.255.255"},
{"3", "3.1.1.0"},
{"3", "3.1.1.255"},
{"4a", "4.1.1.255"},
{"4b", "4.1.1.2"},
{"4c", "4.1.1.1"},
{"5", "240.0.0.0"},
{"5", "255.255.255.255"},
{nil, "239.0.0.0"},
{nil, "4.1.2.2"},
}
for _, tt := range tests {
assert.Equal(t, tt.Result, tree.MostSpecificContains(iputil.Ip2VpnIp(net.ParseIP(tt.IP))))
}
tree = NewTree4()
tree.AddCIDR(Parse("1.1.1.1/0"), "cool")
assert.Equal(t, "cool", tree.MostSpecificContains(iputil.Ip2VpnIp(net.ParseIP("0.0.0.0"))))
assert.Equal(t, "cool", tree.MostSpecificContains(iputil.Ip2VpnIp(net.ParseIP("255.255.255.255"))))
}
func TestCIDRTree_Match(t *testing.T) {
tree := NewTree4()
tree.AddCIDR(Parse("4.1.1.0/32"), "1a")
tree.AddCIDR(Parse("4.1.1.1/32"), "1b")
tests := []struct {
Result interface{}
IP string
}{
{"1a", "4.1.1.0"},
{"1b", "4.1.1.1"},
}
for _, tt := range tests {
assert.Equal(t, tt.Result, tree.Match(iputil.Ip2VpnIp(net.ParseIP(tt.IP))))
}
tree = NewTree4()
tree.AddCIDR(Parse("1.1.1.1/0"), "cool")
assert.Equal(t, "cool", tree.Contains(iputil.Ip2VpnIp(net.ParseIP("0.0.0.0"))))
assert.Equal(t, "cool", tree.Contains(iputil.Ip2VpnIp(net.ParseIP("255.255.255.255"))))
}
func BenchmarkCIDRTree_Contains(b *testing.B) {
tree := NewTree4()
tree.AddCIDR(Parse("1.1.0.0/16"), "1")
tree.AddCIDR(Parse("1.2.1.1/32"), "1")
tree.AddCIDR(Parse("192.2.1.1/32"), "1")
tree.AddCIDR(Parse("172.2.1.1/32"), "1")
ip := iputil.Ip2VpnIp(net.ParseIP("1.2.1.1"))
b.Run("found", func(b *testing.B) {
for i := 0; i < b.N; i++ {
tree.Contains(ip)
}
})
ip = iputil.Ip2VpnIp(net.ParseIP("1.2.1.255"))
b.Run("not found", func(b *testing.B) {
for i := 0; i < b.N; i++ {
tree.Contains(ip)
}
})
}
func BenchmarkCIDRTree_Match(b *testing.B) {
tree := NewTree4()
tree.AddCIDR(Parse("1.1.0.0/16"), "1")
tree.AddCIDR(Parse("1.2.1.1/32"), "1")
tree.AddCIDR(Parse("192.2.1.1/32"), "1")
tree.AddCIDR(Parse("172.2.1.1/32"), "1")
ip := iputil.Ip2VpnIp(net.ParseIP("1.2.1.1"))
b.Run("found", func(b *testing.B) {
for i := 0; i < b.N; i++ {
tree.Match(ip)
}
})
ip = iputil.Ip2VpnIp(net.ParseIP("1.2.1.255"))
b.Run("not found", func(b *testing.B) {
for i := 0; i < b.N; i++ {
tree.Match(ip)
}
})
}


@@ -1,185 +0,0 @@
package cidr
import (
"net"
"github.com/slackhq/nebula/iputil"
)
const startbit6 = uint64(1 << 63)
type Tree6 struct {
root4 *Node
root6 *Node
}
func NewTree6() *Tree6 {
tree := new(Tree6)
tree.root4 = &Node{}
tree.root6 = &Node{}
return tree
}
func (tree *Tree6) AddCIDR(cidr *net.IPNet, val interface{}) {
var node, next *Node
cidrIP, ipv4 := isIPV4(cidr.IP)
if ipv4 {
node = tree.root4
next = tree.root4
} else {
node = tree.root6
next = tree.root6
}
for i := 0; i < len(cidrIP); i += 4 {
ip := iputil.Ip2VpnIp(cidrIP[i : i+4])
mask := iputil.Ip2VpnIp(cidr.Mask[i : i+4])
bit := startbit
// Find our last ancestor in the tree
for bit&mask != 0 {
if ip&bit != 0 {
next = node.right
} else {
next = node.left
}
if next == nil {
break
}
bit = bit >> 1
node = next
}
// Build up the rest of the tree we don't already have
for bit&mask != 0 {
next = &Node{}
next.parent = node
if ip&bit != 0 {
node.right = next
} else {
node.left = next
}
bit >>= 1
node = next
}
}
// Final node marks our cidr, set the value
node.value = val
}
// Finds the most specific match
func (tree *Tree6) MostSpecificContains(ip net.IP) (value interface{}) {
var node *Node
wholeIP, ipv4 := isIPV4(ip)
if ipv4 {
node = tree.root4
} else {
node = tree.root6
}
for i := 0; i < len(wholeIP); i += 4 {
ip := iputil.Ip2VpnIp(wholeIP[i : i+4])
bit := startbit
for node != nil {
if node.value != nil {
value = node.value
}
if bit == 0 {
break
}
if ip&bit != 0 {
node = node.right
} else {
node = node.left
}
bit >>= 1
}
}
return value
}
func (tree *Tree6) MostSpecificContainsIpV4(ip iputil.VpnIp) (value interface{}) {
bit := startbit
node := tree.root4
for node != nil {
if node.value != nil {
value = node.value
}
if ip&bit != 0 {
node = node.right
} else {
node = node.left
}
bit >>= 1
}
return value
}
func (tree *Tree6) MostSpecificContainsIpV6(hi, lo uint64) (value interface{}) {
ip := hi
node := tree.root6
for i := 0; i < 2; i++ {
bit := startbit6
for node != nil {
if node.value != nil {
value = node.value
}
if bit == 0 {
break
}
if ip&bit != 0 {
node = node.right
} else {
node = node.left
}
bit >>= 1
}
ip = lo
}
return value
}
func isIPV4(ip net.IP) (net.IP, bool) {
if len(ip) == net.IPv4len {
return ip, true
}
if len(ip) == net.IPv6len && isZeros(ip[0:10]) && ip[10] == 0xff && ip[11] == 0xff {
return ip[12:16], true
}
return ip, false
}
func isZeros(p net.IP) bool {
for i := 0; i < len(p); i++ {
if p[i] != 0 {
return false
}
}
return true
}


@@ -1,81 +0,0 @@
package cidr
import (
"encoding/binary"
"net"
"testing"
"github.com/stretchr/testify/assert"
)
func TestCIDR6Tree_MostSpecificContains(t *testing.T) {
tree := NewTree6()
tree.AddCIDR(Parse("1.0.0.0/8"), "1")
tree.AddCIDR(Parse("2.1.0.0/16"), "2")
tree.AddCIDR(Parse("3.1.1.0/24"), "3")
tree.AddCIDR(Parse("4.1.1.1/24"), "4a")
tree.AddCIDR(Parse("4.1.1.1/30"), "4b")
tree.AddCIDR(Parse("4.1.1.1/32"), "4c")
tree.AddCIDR(Parse("254.0.0.0/4"), "5")
tree.AddCIDR(Parse("1:2:0:4:5:0:0:0/64"), "6a")
tree.AddCIDR(Parse("1:2:0:4:5:0:0:0/80"), "6b")
tree.AddCIDR(Parse("1:2:0:4:5:0:0:0/96"), "6c")
tests := []struct {
Result interface{}
IP string
}{
{"1", "1.0.0.0"},
{"1", "1.255.255.255"},
{"2", "2.1.0.0"},
{"2", "2.1.255.255"},
{"3", "3.1.1.0"},
{"3", "3.1.1.255"},
{"4a", "4.1.1.255"},
{"4b", "4.1.1.2"},
{"4c", "4.1.1.1"},
{"5", "240.0.0.0"},
{"5", "255.255.255.255"},
{"6a", "1:2:0:4:1:1:1:1"},
{"6b", "1:2:0:4:5:1:1:1"},
{"6c", "1:2:0:4:5:0:0:0"},
{nil, "239.0.0.0"},
{nil, "4.1.2.2"},
}
for _, tt := range tests {
assert.Equal(t, tt.Result, tree.MostSpecificContains(net.ParseIP(tt.IP)))
}
tree = NewTree6()
tree.AddCIDR(Parse("1.1.1.1/0"), "cool")
tree.AddCIDR(Parse("::/0"), "cool6")
assert.Equal(t, "cool", tree.MostSpecificContains(net.ParseIP("0.0.0.0")))
assert.Equal(t, "cool", tree.MostSpecificContains(net.ParseIP("255.255.255.255")))
assert.Equal(t, "cool6", tree.MostSpecificContains(net.ParseIP("::")))
assert.Equal(t, "cool6", tree.MostSpecificContains(net.ParseIP("1:2:3:4:5:6:7:8")))
}
func TestCIDR6Tree_MostSpecificContainsIpV6(t *testing.T) {
tree := NewTree6()
tree.AddCIDR(Parse("1:2:0:4:5:0:0:0/64"), "6a")
tree.AddCIDR(Parse("1:2:0:4:5:0:0:0/80"), "6b")
tree.AddCIDR(Parse("1:2:0:4:5:0:0:0/96"), "6c")
tests := []struct {
Result interface{}
IP string
}{
{"6a", "1:2:0:4:1:1:1:1"},
{"6b", "1:2:0:4:5:1:1:1"},
{"6c", "1:2:0:4:5:0:0:0"},
}
for _, tt := range tests {
ip := net.ParseIP(tt.IP)
hi := binary.BigEndian.Uint64(ip[:8])
lo := binary.BigEndian.Uint64(ip[8:])
assert.Equal(t, tt.Result, tree.MostSpecificContainsIpV6(hi, lo))
}
}


@@ -1,11 +1,13 @@
package main
import (
"crypto/ecdsa"
"crypto/elliptic"
"crypto/rand"
"flag"
"fmt"
"io"
"io/ioutil"
"math"
"net"
"os"
"strings"
@@ -17,15 +19,21 @@ import (
)
type caFlags struct {
set *flag.FlagSet
name *string
duration *time.Duration
outKeyPath *string
outCertPath *string
outQRPath *string
groups *string
ips *string
subnets *string
set *flag.FlagSet
name *string
duration *time.Duration
outKeyPath *string
outCertPath *string
outQRPath *string
groups *string
ips *string
subnets *string
argonMemory *uint
argonIterations *uint
argonParallelism *uint
encryption *bool
curve *string
}
func newCaFlags() *caFlags {
@@ -37,12 +45,31 @@ func newCaFlags() *caFlags {
cf.outCertPath = cf.set.String("out-crt", "ca.crt", "Optional: path to write the certificate to")
cf.outQRPath = cf.set.String("out-qr", "", "Optional: output a qr code image (png) of the certificate")
cf.groups = cf.set.String("groups", "", "Optional: comma separated list of groups. This will limit which groups subordinate certs can use")
cf.ips = cf.set.String("ips", "", "Optional: comma separated list of ip and network in CIDR notation. This will limit which ip addresses and networks subordinate certs can use")
cf.subnets = cf.set.String("subnets", "", "Optional: comma separated list of ip and network in CIDR notation. This will limit which subnet addresses and networks subordinate certs can use")
cf.ips = cf.set.String("ips", "", "Optional: comma separated list of ipv4 address and network in CIDR notation. This will limit which ipv4 addresses and networks subordinate certs can use for ip addresses")
cf.subnets = cf.set.String("subnets", "", "Optional: comma separated list of ipv4 address and network in CIDR notation. This will limit which ipv4 addresses and networks subordinate certs can use in subnets")
cf.argonMemory = cf.set.Uint("argon-memory", 2*1024*1024, "Optional: Argon2 memory parameter (in KiB) used for encrypted private key passphrase")
cf.argonParallelism = cf.set.Uint("argon-parallelism", 4, "Optional: Argon2 parallelism parameter used for encrypted private key passphrase")
cf.argonIterations = cf.set.Uint("argon-iterations", 1, "Optional: Argon2 iterations parameter used for encrypted private key passphrase")
cf.encryption = cf.set.Bool("encrypt", false, "Optional: prompt for passphrase and write out-key in an encrypted format")
cf.curve = cf.set.String("curve", "25519", "EdDSA/ECDSA Curve (25519, P256)")
return &cf
}
func ca(args []string, out io.Writer, errOut io.Writer) error {
func parseArgonParameters(memory uint, parallelism uint, iterations uint) (*cert.Argon2Parameters, error) {
if memory <= 0 || memory > math.MaxUint32 {
return nil, newHelpErrorf("-argon-memory must be be greater than 0 and no more than %d KiB", uint32(math.MaxUint32))
}
if parallelism <= 0 || parallelism > math.MaxUint8 {
return nil, newHelpErrorf("-argon-parallelism must be be greater than 0 and no more than %d", math.MaxUint8)
}
if iterations <= 0 || iterations > math.MaxUint32 {
return nil, newHelpErrorf("-argon-iterations must be be greater than 0 and no more than %d", uint32(math.MaxUint32))
}
return cert.NewArgon2Parameters(uint32(memory), uint8(parallelism), uint32(iterations)), nil
}
func ca(args []string, out io.Writer, errOut io.Writer, pr PasswordReader) error {
cf := newCaFlags()
err := cf.set.Parse(args)
if err != nil {
@@ -58,6 +85,12 @@ func ca(args []string, out io.Writer, errOut io.Writer) error {
if err := mustFlagString("out-crt", cf.outCertPath); err != nil {
return err
}
var kdfParams *cert.Argon2Parameters
if *cf.encryption {
if kdfParams, err = parseArgonParameters(*cf.argonMemory, *cf.argonParallelism, *cf.argonIterations); err != nil {
return err
}
}
if *cf.duration <= 0 {
return &helpError{"-duration must be greater than 0"}
@@ -82,6 +115,9 @@ func ca(args []string, out io.Writer, errOut io.Writer) error {
if err != nil {
return newHelpErrorf("invalid ip definition: %s", err)
}
if ip.To4() == nil {
return newHelpErrorf("invalid ip definition: can only be ipv4, have %s", rs)
}
ipNet.IP = ip
ips = append(ips, ipNet)
@@ -98,14 +134,61 @@ func ca(args []string, out io.Writer, errOut io.Writer) error {
if err != nil {
return newHelpErrorf("invalid subnet definition: %s", err)
}
if s.IP.To4() == nil {
return newHelpErrorf("invalid subnet definition: can only be ipv4, have %s", rs)
}
subnets = append(subnets, s)
}
}
}
pub, rawPriv, err := ed25519.GenerateKey(rand.Reader)
if err != nil {
return fmt.Errorf("error while generating ed25519 keys: %s", err)
var passphrase []byte
if *cf.encryption {
for i := 0; i < 5; i++ {
out.Write([]byte("Enter passphrase: "))
passphrase, err = pr.ReadPassword()
if err == ErrNoTerminal {
return fmt.Errorf("out-key must be encrypted interactively")
} else if err != nil {
return fmt.Errorf("error reading passphrase: %s", err)
}
if len(passphrase) > 0 {
break
}
}
if len(passphrase) == 0 {
return fmt.Errorf("no passphrase specified, remove -encrypt flag to write out-key in plaintext")
}
}
var curve cert.Curve
var pub, rawPriv []byte
switch *cf.curve {
case "25519", "X25519", "Curve25519", "CURVE25519":
curve = cert.Curve_CURVE25519
pub, rawPriv, err = ed25519.GenerateKey(rand.Reader)
if err != nil {
return fmt.Errorf("error while generating ed25519 keys: %s", err)
}
case "P256":
var key *ecdsa.PrivateKey
curve = cert.Curve_P256
key, err = ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
if err != nil {
return fmt.Errorf("error while generating ecdsa keys: %s", err)
}
// ecdh.PrivateKey lets us get at the encoded bytes, even though
// we aren't using ECDH here.
eKey, err := key.ECDH()
if err != nil {
return fmt.Errorf("error while converting ecdsa key: %s", err)
}
rawPriv = eKey.Bytes()
pub = eKey.PublicKey().Bytes()
}
nc := cert.NebulaCertificate{
@@ -118,6 +201,7 @@ func ca(args []string, out io.Writer, errOut io.Writer) error {
NotAfter: time.Now().Add(*cf.duration),
PublicKey: pub,
IsCA: true,
Curve: curve,
},
}
@@ -129,22 +213,32 @@ func ca(args []string, out io.Writer, errOut io.Writer) error {
return fmt.Errorf("refusing to overwrite existing CA cert: %s", *cf.outCertPath)
}
err = nc.Sign(rawPriv)
err = nc.Sign(curve, rawPriv)
if err != nil {
return fmt.Errorf("error while signing: %s", err)
}
err = ioutil.WriteFile(*cf.outKeyPath, cert.MarshalEd25519PrivateKey(rawPriv), 0600)
var b []byte
if *cf.encryption {
b, err = cert.EncryptAndMarshalSigningPrivateKey(curve, rawPriv, passphrase, kdfParams)
if err != nil {
return fmt.Errorf("error while encrypting out-key: %s", err)
}
} else {
b = cert.MarshalSigningPrivateKey(curve, rawPriv)
}
err = os.WriteFile(*cf.outKeyPath, b, 0600)
if err != nil {
return fmt.Errorf("error while writing out-key: %s", err)
}
b, err := nc.MarshalToPEM()
b, err = nc.MarshalToPEM()
if err != nil {
return fmt.Errorf("error while marshalling certificate: %s", err)
}
err = ioutil.WriteFile(*cf.outCertPath, b, 0600)
err = os.WriteFile(*cf.outCertPath, b, 0600)
if err != nil {
return fmt.Errorf("error while writing out-crt: %s", err)
}
@@ -155,7 +249,7 @@ func ca(args []string, out io.Writer, errOut io.Writer) error {
return fmt.Errorf("error while generating qr code: %s", err)
}
err = ioutil.WriteFile(*cf.outQRPath, b, 0600)
err = os.WriteFile(*cf.outQRPath, b, 0600)
if err != nil {
return fmt.Errorf("error while writing out-qr: %s", err)
}


@@ -5,8 +5,10 @@ package main
import (
"bytes"
"io/ioutil"
"encoding/pem"
"errors"
"os"
"strings"
"testing"
"time"
@@ -26,12 +28,22 @@ func Test_caHelp(t *testing.T) {
assert.Equal(
t,
"Usage of "+os.Args[0]+" ca <flags>: create a self signed certificate authority\n"+
" -argon-iterations uint\n"+
" \tOptional: Argon2 iterations parameter used for encrypted private key passphrase (default 1)\n"+
" -argon-memory uint\n"+
" \tOptional: Argon2 memory parameter (in KiB) used for encrypted private key passphrase (default 2097152)\n"+
" -argon-parallelism uint\n"+
" \tOptional: Argon2 parallelism parameter used for encrypted private key passphrase (default 4)\n"+
" -curve string\n"+
" \tEdDSA/ECDSA Curve (25519, P256) (default \"25519\")\n"+
" -duration duration\n"+
" \tOptional: amount of time the certificate should be valid for. Valid time units are seconds: \"s\", minutes: \"m\", hours: \"h\" (default 8760h0m0s)\n"+
" -encrypt\n"+
" \tOptional: prompt for passphrase and write out-key in an encrypted format\n"+
" -groups string\n"+
" \tOptional: comma separated list of groups. This will limit which groups subordinate certs can use\n"+
" -ips string\n"+
" \tOptional: comma separated list of ip and network in CIDR notation. This will limit which ip addresses and networks subordinate certs can use\n"+
" \tOptional: comma separated list of ipv4 address and network in CIDR notation. This will limit which ipv4 addresses and networks subordinate certs can use for ip addresses\n"+
" -name string\n"+
" \tRequired: name of the certificate authority\n"+
" -out-crt string\n"+
@@ -41,7 +53,7 @@ func Test_caHelp(t *testing.T) {
" -out-qr string\n"+
" \tOptional: output a qr code image (png) of the certificate\n"+
" -subnets string\n"+
" \tOptional: comma separated list of ip and network in CIDR notation. This will limit which subnet addresses and networks subordinate certs can use\n",
" \tOptional: comma separated list of ipv4 address and network in CIDR notation. This will limit which ipv4 addresses and networks subordinate certs can use in subnets\n",
ob.String(),
)
}
@@ -50,8 +62,38 @@ func Test_ca(t *testing.T) {
ob := &bytes.Buffer{}
eb := &bytes.Buffer{}
nopw := &StubPasswordReader{
password: []byte(""),
err: nil,
}
errpw := &StubPasswordReader{
password: []byte(""),
err: errors.New("stub error"),
}
passphrase := []byte("DO NOT USE THIS KEY")
testpw := &StubPasswordReader{
password: passphrase,
err: nil,
}
pwPromptOb := "Enter passphrase: "
// required args
assertHelpError(t, ca([]string{"-out-key", "nope", "-out-crt", "nope", "duration", "100m"}, ob, eb), "-name is required")
assertHelpError(t, ca(
[]string{"-out-key", "nope", "-out-crt", "nope", "duration", "100m"}, ob, eb, nopw,
), "-name is required")
assert.Equal(t, "", ob.String())
assert.Equal(t, "", eb.String())
// ipv4 only ips
assertHelpError(t, ca([]string{"-name", "ipv6", "-ips", "100::100/100"}, ob, eb, nopw), "invalid ip definition: can only be ipv4, have 100::100/100")
assert.Equal(t, "", ob.String())
assert.Equal(t, "", eb.String())
// ipv4 only subnets
assertHelpError(t, ca([]string{"-name", "ipv6", "-subnets", "100::100/100"}, ob, eb, nopw), "invalid subnet definition: can only be ipv4, have 100::100/100")
assert.Equal(t, "", ob.String())
assert.Equal(t, "", eb.String())
@@ -59,12 +101,12 @@ func Test_ca(t *testing.T) {
ob.Reset()
eb.Reset()
args := []string{"-name", "test", "-duration", "100m", "-out-crt", "/do/not/write/pleasecrt", "-out-key", "/do/not/write/pleasekey"}
assert.EqualError(t, ca(args, ob, eb), "error while writing out-key: open /do/not/write/pleasekey: "+NoSuchDirError)
assert.EqualError(t, ca(args, ob, eb, nopw), "error while writing out-key: open /do/not/write/pleasekey: "+NoSuchDirError)
assert.Equal(t, "", ob.String())
assert.Equal(t, "", eb.String())
// create temp key file
keyF, err := ioutil.TempFile("", "test.key")
keyF, err := os.CreateTemp("", "test.key")
assert.Nil(t, err)
os.Remove(keyF.Name())
@@ -72,12 +114,12 @@ func Test_ca(t *testing.T) {
ob.Reset()
eb.Reset()
args = []string{"-name", "test", "-duration", "100m", "-out-crt", "/do/not/write/pleasecrt", "-out-key", keyF.Name()}
assert.EqualError(t, ca(args, ob, eb), "error while writing out-crt: open /do/not/write/pleasecrt: "+NoSuchDirError)
assert.EqualError(t, ca(args, ob, eb, nopw), "error while writing out-crt: open /do/not/write/pleasecrt: "+NoSuchDirError)
assert.Equal(t, "", ob.String())
assert.Equal(t, "", eb.String())
// create temp cert file
crtF, err := ioutil.TempFile("", "test.crt")
crtF, err := os.CreateTemp("", "test.crt")
assert.Nil(t, err)
os.Remove(crtF.Name())
os.Remove(keyF.Name())
@@ -86,18 +128,18 @@ func Test_ca(t *testing.T) {
ob.Reset()
eb.Reset()
args = []string{"-name", "test", "-duration", "100m", "-groups", "1,, 2 , ,,,3,4,5", "-out-crt", crtF.Name(), "-out-key", keyF.Name()}
assert.Nil(t, ca(args, ob, eb))
assert.Nil(t, ca(args, ob, eb, nopw))
assert.Equal(t, "", ob.String())
assert.Equal(t, "", eb.String())
// read cert and key files
rb, _ := ioutil.ReadFile(keyF.Name())
rb, _ := os.ReadFile(keyF.Name())
lKey, b, err := cert.UnmarshalEd25519PrivateKey(rb)
assert.Len(t, b, 0)
assert.Nil(t, err)
assert.Len(t, lKey, 64)
rb, _ = ioutil.ReadFile(crtF.Name())
rb, _ = os.ReadFile(crtF.Name())
lCrt, b, err := cert.UnmarshalNebulaCertificateFromPEM(rb)
assert.Len(t, b, 0)
assert.Nil(t, err)
@@ -112,19 +154,67 @@ func Test_ca(t *testing.T) {
assert.Equal(t, "", lCrt.Details.Issuer)
assert.True(t, lCrt.CheckSignature(lCrt.Details.PublicKey))
// test encrypted key
os.Remove(keyF.Name())
os.Remove(crtF.Name())
ob.Reset()
eb.Reset()
args = []string{"-encrypt", "-name", "test", "-duration", "100m", "-groups", "1,2,3,4,5", "-out-crt", crtF.Name(), "-out-key", keyF.Name()}
assert.Nil(t, ca(args, ob, eb, testpw))
assert.Equal(t, pwPromptOb, ob.String())
assert.Equal(t, "", eb.String())
// read encrypted key file and verify default params
rb, _ = os.ReadFile(keyF.Name())
k, _ := pem.Decode(rb)
ned, err := cert.UnmarshalNebulaEncryptedData(k.Bytes)
assert.Nil(t, err)
// we won't know the salt in advance, so just check the Argon2 parameters
assert.Equal(t, uint32(2*1024*1024), ned.EncryptionMetadata.Argon2Parameters.Memory)
assert.Equal(t, uint8(4), ned.EncryptionMetadata.Argon2Parameters.Parallelism)
assert.Equal(t, uint32(1), ned.EncryptionMetadata.Argon2Parameters.Iterations)
// verify the key is valid and decrypt-able
var curve cert.Curve
curve, lKey, b, err = cert.DecryptAndUnmarshalSigningPrivateKey(passphrase, rb)
assert.Equal(t, cert.Curve_CURVE25519, curve)
assert.Nil(t, err)
assert.Len(t, b, 0)
assert.Len(t, lKey, 64)
// test when reading password results in an error
os.Remove(keyF.Name())
os.Remove(crtF.Name())
ob.Reset()
eb.Reset()
args = []string{"-encrypt", "-name", "test", "-duration", "100m", "-groups", "1,2,3,4,5", "-out-crt", crtF.Name(), "-out-key", keyF.Name()}
assert.Error(t, ca(args, ob, eb, errpw))
assert.Equal(t, pwPromptOb, ob.String())
assert.Equal(t, "", eb.String())
// test when user fails to enter a password
os.Remove(keyF.Name())
os.Remove(crtF.Name())
ob.Reset()
eb.Reset()
args = []string{"-encrypt", "-name", "test", "-duration", "100m", "-groups", "1,2,3,4,5", "-out-crt", crtF.Name(), "-out-key", keyF.Name()}
assert.EqualError(t, ca(args, ob, eb, nopw), "no passphrase specified, remove -encrypt flag to write out-key in plaintext")
assert.Equal(t, strings.Repeat(pwPromptOb, 5), ob.String()) // prompts 5 times before giving up
assert.Equal(t, "", eb.String())
// create valid cert/key for overwrite tests
os.Remove(keyF.Name())
os.Remove(crtF.Name())
ob.Reset()
eb.Reset()
args = []string{"-name", "test", "-duration", "100m", "-groups", "1,, 2 , ,,,3,4,5", "-out-crt", crtF.Name(), "-out-key", keyF.Name()}
assert.Nil(t, ca(args, ob, eb))
assert.Nil(t, ca(args, ob, eb, nopw))
// test that we won't overwrite existing certificate file
ob.Reset()
eb.Reset()
args = []string{"-name", "test", "-duration", "100m", "-groups", "1,, 2 , ,,,3,4,5", "-out-crt", crtF.Name(), "-out-key", keyF.Name()}
assert.EqualError(t, ca(args, ob, eb), "refusing to overwrite existing CA key: "+keyF.Name())
assert.EqualError(t, ca(args, ob, eb, nopw), "refusing to overwrite existing CA key: "+keyF.Name())
assert.Equal(t, "", ob.String())
assert.Equal(t, "", eb.String())
@@ -133,7 +223,7 @@ func Test_ca(t *testing.T) {
ob.Reset()
eb.Reset()
args = []string{"-name", "test", "-duration", "100m", "-groups", "1,, 2 , ,,,3,4,5", "-out-crt", crtF.Name(), "-out-key", keyF.Name()}
assert.EqualError(t, ca(args, ob, eb), "refusing to overwrite existing CA cert: "+crtF.Name())
assert.EqualError(t, ca(args, ob, eb, nopw), "refusing to overwrite existing CA cert: "+crtF.Name())
assert.Equal(t, "", ob.String())
assert.Equal(t, "", eb.String())
os.Remove(keyF.Name())


@@ -4,7 +4,6 @@ import (
"flag"
"fmt"
"io"
"io/ioutil"
"os"
"github.com/slackhq/nebula/cert"
@@ -14,6 +13,8 @@ type keygenFlags struct {
set *flag.FlagSet
outKeyPath *string
outPubPath *string
curve *string
}
func newKeygenFlags() *keygenFlags {
@@ -21,6 +22,7 @@ func newKeygenFlags() *keygenFlags {
cf.set.Usage = func() {}
cf.outPubPath = cf.set.String("out-pub", "", "Required: path to write the public key to")
cf.outKeyPath = cf.set.String("out-key", "", "Required: path to write the private key to")
cf.curve = cf.set.String("curve", "25519", "ECDH Curve (25519, P256)")
return &cf
}
@@ -38,14 +40,25 @@ func keygen(args []string, out io.Writer, errOut io.Writer) error {
return err
}
pub, rawPriv := x25519Keypair()
var pub, rawPriv []byte
var curve cert.Curve
switch *cf.curve {
case "25519", "X25519", "Curve25519", "CURVE25519":
pub, rawPriv = x25519Keypair()
curve = cert.Curve_CURVE25519
case "P256":
pub, rawPriv = p256Keypair()
curve = cert.Curve_P256
default:
return fmt.Errorf("invalid curve: %s", *cf.curve)
}
err = ioutil.WriteFile(*cf.outKeyPath, cert.MarshalX25519PrivateKey(rawPriv), 0600)
err = os.WriteFile(*cf.outKeyPath, cert.MarshalPrivateKey(curve, rawPriv), 0600)
if err != nil {
return fmt.Errorf("error while writing out-key: %s", err)
}
err = ioutil.WriteFile(*cf.outPubPath, cert.MarshalX25519PublicKey(pub), 0600)
err = os.WriteFile(*cf.outPubPath, cert.MarshalPublicKey(curve, pub), 0600)
if err != nil {
return fmt.Errorf("error while writing out-pub: %s", err)
}


@@ -2,7 +2,6 @@ package main
import (
"bytes"
"io/ioutil"
"os"
"testing"
@@ -22,6 +21,8 @@ func Test_keygenHelp(t *testing.T) {
assert.Equal(
t,
"Usage of "+os.Args[0]+" keygen <flags>: create a public/private key pair. the public key can be passed to `nebula-cert sign`\n"+
" -curve string\n"+
" \tECDH Curve (25519, P256) (default \"25519\")\n"+
" -out-key string\n"+
" \tRequired: path to write the private key to\n"+
" -out-pub string\n"+
@@ -52,7 +53,7 @@ func Test_keygen(t *testing.T) {
assert.Equal(t, "", eb.String())
// create temp key file
keyF, err := ioutil.TempFile("", "test.key")
keyF, err := os.CreateTemp("", "test.key")
assert.Nil(t, err)
defer os.Remove(keyF.Name())
@@ -65,7 +66,7 @@ func Test_keygen(t *testing.T) {
assert.Equal(t, "", eb.String())
// create temp pub file
pubF, err := ioutil.TempFile("", "test.pub")
pubF, err := os.CreateTemp("", "test.pub")
assert.Nil(t, err)
defer os.Remove(pubF.Name())
@@ -78,13 +79,13 @@ func Test_keygen(t *testing.T) {
assert.Equal(t, "", eb.String())
// read cert and key files
rb, _ := ioutil.ReadFile(keyF.Name())
rb, _ := os.ReadFile(keyF.Name())
lKey, b, err := cert.UnmarshalX25519PrivateKey(rb)
assert.Len(t, b, 0)
assert.Nil(t, err)
assert.Len(t, lKey, 32)
rb, _ = ioutil.ReadFile(pubF.Name())
rb, _ = os.ReadFile(pubF.Name())
lPub, b, err := cert.UnmarshalX25519PublicKey(rb)
assert.Len(t, b, 0)
assert.Nil(t, err)


@@ -62,11 +62,11 @@ func main() {
switch args[0] {
case "ca":
err = ca(args[1:], os.Stdout, os.Stderr)
err = ca(args[1:], os.Stdout, os.Stderr, StdinPasswordReader{})
case "keygen":
err = keygen(args[1:], os.Stdout, os.Stderr)
case "sign":
err = signCert(args[1:], os.Stdout, os.Stderr)
err = signCert(args[1:], os.Stdout, os.Stderr, StdinPasswordReader{})
case "print":
err = printCert(args[1:], os.Stdout, os.Stderr)
case "verify":
@@ -127,6 +127,8 @@ func help(err string, out io.Writer) {
fmt.Fprintln(out, " "+signSummary())
fmt.Fprintln(out, " "+printSummary())
fmt.Fprintln(out, " "+verifySummary())
fmt.Fprintln(out, "")
fmt.Fprintf(out, " To see usage for a given mode, use %s <mode> -h\n", os.Args[0])
}
func mustFlagString(name string, val *string) error {


@@ -22,7 +22,9 @@ func Test_help(t *testing.T) {
" " + keygenSummary() + "\n" +
" " + signSummary() + "\n" +
" " + printSummary() + "\n" +
" " + verifySummary() + "\n"
" " + verifySummary() + "\n" +
"\n" +
" To see usage for a given mode, use " + os.Args[0] + " <mode> -h\n"
ob := &bytes.Buffer{}


@@ -0,0 +1,28 @@
package main
import (
"errors"
"fmt"
"os"
"golang.org/x/term"
)
var ErrNoTerminal = errors.New("cannot read password from nonexistent terminal")
type PasswordReader interface {
ReadPassword() ([]byte, error)
}
type StdinPasswordReader struct{}
func (pr StdinPasswordReader) ReadPassword() ([]byte, error) {
if !term.IsTerminal(int(os.Stdin.Fd())) {
return nil, ErrNoTerminal
}
password, err := term.ReadPassword(int(os.Stdin.Fd()))
fmt.Println()
return password, err
}
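StdinPasswordReader is the only non-test implementation added here; the PasswordReader interface is what lets ca and signCert run without a terminal. As a hypothetical sketch (the type below is not in the diff), any line-oriented source can satisfy the same interface:

package main

import (
	"bufio"
	"io"
	"strings"
)

// linePasswordReader is a hypothetical PasswordReader that reads one line
// from an arbitrary io.Reader, e.g. to feed a passphrase from a pipe.
type linePasswordReader struct {
	r io.Reader
}

func (pr linePasswordReader) ReadPassword() ([]byte, error) {
	line, err := bufio.NewReader(pr.r).ReadString('\n')
	if err != nil && err != io.EOF {
		return nil, err
	}
	return []byte(strings.TrimRight(line, "\r\n")), nil
}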


@@ -0,0 +1,10 @@
package main
type StubPasswordReader struct {
password []byte
err error
}
func (pr *StubPasswordReader) ReadPassword() ([]byte, error) {
return pr.password, pr.err
}


@@ -5,7 +5,6 @@ import (
"flag"
"fmt"
"io"
"io/ioutil"
"os"
"strings"
@@ -41,7 +40,7 @@ func printCert(args []string, out io.Writer, errOut io.Writer) error {
return err
}
rawCert, err := ioutil.ReadFile(*pf.path)
rawCert, err := os.ReadFile(*pf.path)
if err != nil {
return fmt.Errorf("unable to read cert; %s", err)
}
@@ -87,7 +86,7 @@ func printCert(args []string, out io.Writer, errOut io.Writer) error {
return fmt.Errorf("error while generating qr code: %s", err)
}
err = ioutil.WriteFile(*pf.outQRPath, b, 0600)
err = os.WriteFile(*pf.outQRPath, b, 0600)
if err != nil {
return fmt.Errorf("error while writing out-qr: %s", err)
}


@@ -2,7 +2,6 @@ package main
import (
"bytes"
"io/ioutil"
"os"
"testing"
"time"
@@ -54,7 +53,7 @@ func Test_printCert(t *testing.T) {
// invalid cert at path
ob.Reset()
eb.Reset()
tf, err := ioutil.TempFile("", "print-cert")
tf, err := os.CreateTemp("", "print-cert")
assert.Nil(t, err)
defer os.Remove(tf.Name())
@@ -87,7 +86,7 @@ func Test_printCert(t *testing.T) {
assert.Nil(t, err)
assert.Equal(
t,
"NebulaCertificate {\n\tDetails {\n\t\tName: test\n\t\tIps: []\n\t\tSubnets: []\n\t\tGroups: [\n\t\t\t\"hi\"\n\t\t]\n\t\tNot before: 0001-01-01 00:00:00 +0000 UTC\n\t\tNot After: 0001-01-01 00:00:00 +0000 UTC\n\t\tIs CA: false\n\t\tIssuer: \n\t\tPublic key: 0102030405060708090001020304050607080900010203040506070809000102\n\t}\n\tFingerprint: cc3492c0e9c48f17547f5987ea807462ebb3451e622590a10bb3763c344c82bd\n\tSignature: 0102030405060708090001020304050607080900010203040506070809000102\n}\nNebulaCertificate {\n\tDetails {\n\t\tName: test\n\t\tIps: []\n\t\tSubnets: []\n\t\tGroups: [\n\t\t\t\"hi\"\n\t\t]\n\t\tNot before: 0001-01-01 00:00:00 +0000 UTC\n\t\tNot After: 0001-01-01 00:00:00 +0000 UTC\n\t\tIs CA: false\n\t\tIssuer: \n\t\tPublic key: 0102030405060708090001020304050607080900010203040506070809000102\n\t}\n\tFingerprint: cc3492c0e9c48f17547f5987ea807462ebb3451e622590a10bb3763c344c82bd\n\tSignature: 0102030405060708090001020304050607080900010203040506070809000102\n}\nNebulaCertificate {\n\tDetails {\n\t\tName: test\n\t\tIps: []\n\t\tSubnets: []\n\t\tGroups: [\n\t\t\t\"hi\"\n\t\t]\n\t\tNot before: 0001-01-01 00:00:00 +0000 UTC\n\t\tNot After: 0001-01-01 00:00:00 +0000 UTC\n\t\tIs CA: false\n\t\tIssuer: \n\t\tPublic key: 0102030405060708090001020304050607080900010203040506070809000102\n\t}\n\tFingerprint: cc3492c0e9c48f17547f5987ea807462ebb3451e622590a10bb3763c344c82bd\n\tSignature: 0102030405060708090001020304050607080900010203040506070809000102\n}\n",
"NebulaCertificate {\n\tDetails {\n\t\tName: test\n\t\tIps: []\n\t\tSubnets: []\n\t\tGroups: [\n\t\t\t\"hi\"\n\t\t]\n\t\tNot before: 0001-01-01 00:00:00 +0000 UTC\n\t\tNot After: 0001-01-01 00:00:00 +0000 UTC\n\t\tIs CA: false\n\t\tIssuer: \n\t\tPublic key: 0102030405060708090001020304050607080900010203040506070809000102\n\t\tCurve: CURVE25519\n\t}\n\tFingerprint: cc3492c0e9c48f17547f5987ea807462ebb3451e622590a10bb3763c344c82bd\n\tSignature: 0102030405060708090001020304050607080900010203040506070809000102\n}\nNebulaCertificate {\n\tDetails {\n\t\tName: test\n\t\tIps: []\n\t\tSubnets: []\n\t\tGroups: [\n\t\t\t\"hi\"\n\t\t]\n\t\tNot before: 0001-01-01 00:00:00 +0000 UTC\n\t\tNot After: 0001-01-01 00:00:00 +0000 UTC\n\t\tIs CA: false\n\t\tIssuer: \n\t\tPublic key: 0102030405060708090001020304050607080900010203040506070809000102\n\t\tCurve: CURVE25519\n\t}\n\tFingerprint: cc3492c0e9c48f17547f5987ea807462ebb3451e622590a10bb3763c344c82bd\n\tSignature: 0102030405060708090001020304050607080900010203040506070809000102\n}\nNebulaCertificate {\n\tDetails {\n\t\tName: test\n\t\tIps: []\n\t\tSubnets: []\n\t\tGroups: [\n\t\t\t\"hi\"\n\t\t]\n\t\tNot before: 0001-01-01 00:00:00 +0000 UTC\n\t\tNot After: 0001-01-01 00:00:00 +0000 UTC\n\t\tIs CA: false\n\t\tIssuer: \n\t\tPublic key: 0102030405060708090001020304050607080900010203040506070809000102\n\t\tCurve: CURVE25519\n\t}\n\tFingerprint: cc3492c0e9c48f17547f5987ea807462ebb3451e622590a10bb3763c344c82bd\n\tSignature: 0102030405060708090001020304050607080900010203040506070809000102\n}\n",
ob.String(),
)
assert.Equal(t, "", eb.String())
@@ -115,7 +114,7 @@ func Test_printCert(t *testing.T) {
assert.Nil(t, err)
assert.Equal(
t,
"{\"details\":{\"groups\":[\"hi\"],\"ips\":[],\"isCa\":false,\"issuer\":\"\",\"name\":\"test\",\"notAfter\":\"0001-01-01T00:00:00Z\",\"notBefore\":\"0001-01-01T00:00:00Z\",\"publicKey\":\"0102030405060708090001020304050607080900010203040506070809000102\",\"subnets\":[]},\"fingerprint\":\"cc3492c0e9c48f17547f5987ea807462ebb3451e622590a10bb3763c344c82bd\",\"signature\":\"0102030405060708090001020304050607080900010203040506070809000102\"}\n{\"details\":{\"groups\":[\"hi\"],\"ips\":[],\"isCa\":false,\"issuer\":\"\",\"name\":\"test\",\"notAfter\":\"0001-01-01T00:00:00Z\",\"notBefore\":\"0001-01-01T00:00:00Z\",\"publicKey\":\"0102030405060708090001020304050607080900010203040506070809000102\",\"subnets\":[]},\"fingerprint\":\"cc3492c0e9c48f17547f5987ea807462ebb3451e622590a10bb3763c344c82bd\",\"signature\":\"0102030405060708090001020304050607080900010203040506070809000102\"}\n{\"details\":{\"groups\":[\"hi\"],\"ips\":[],\"isCa\":false,\"issuer\":\"\",\"name\":\"test\",\"notAfter\":\"0001-01-01T00:00:00Z\",\"notBefore\":\"0001-01-01T00:00:00Z\",\"publicKey\":\"0102030405060708090001020304050607080900010203040506070809000102\",\"subnets\":[]},\"fingerprint\":\"cc3492c0e9c48f17547f5987ea807462ebb3451e622590a10bb3763c344c82bd\",\"signature\":\"0102030405060708090001020304050607080900010203040506070809000102\"}\n",
"{\"details\":{\"curve\":\"CURVE25519\",\"groups\":[\"hi\"],\"ips\":[],\"isCa\":false,\"issuer\":\"\",\"name\":\"test\",\"notAfter\":\"0001-01-01T00:00:00Z\",\"notBefore\":\"0001-01-01T00:00:00Z\",\"publicKey\":\"0102030405060708090001020304050607080900010203040506070809000102\",\"subnets\":[]},\"fingerprint\":\"cc3492c0e9c48f17547f5987ea807462ebb3451e622590a10bb3763c344c82bd\",\"signature\":\"0102030405060708090001020304050607080900010203040506070809000102\"}\n{\"details\":{\"curve\":\"CURVE25519\",\"groups\":[\"hi\"],\"ips\":[],\"isCa\":false,\"issuer\":\"\",\"name\":\"test\",\"notAfter\":\"0001-01-01T00:00:00Z\",\"notBefore\":\"0001-01-01T00:00:00Z\",\"publicKey\":\"0102030405060708090001020304050607080900010203040506070809000102\",\"subnets\":[]},\"fingerprint\":\"cc3492c0e9c48f17547f5987ea807462ebb3451e622590a10bb3763c344c82bd\",\"signature\":\"0102030405060708090001020304050607080900010203040506070809000102\"}\n{\"details\":{\"curve\":\"CURVE25519\",\"groups\":[\"hi\"],\"ips\":[],\"isCa\":false,\"issuer\":\"\",\"name\":\"test\",\"notAfter\":\"0001-01-01T00:00:00Z\",\"notBefore\":\"0001-01-01T00:00:00Z\",\"publicKey\":\"0102030405060708090001020304050607080900010203040506070809000102\",\"subnets\":[]},\"fingerprint\":\"cc3492c0e9c48f17547f5987ea807462ebb3451e622590a10bb3763c344c82bd\",\"signature\":\"0102030405060708090001020304050607080900010203040506070809000102\"}\n",
ob.String(),
)
assert.Equal(t, "", eb.String())


@@ -1,11 +1,11 @@
package main
import (
"crypto/ecdh"
"crypto/rand"
"flag"
"fmt"
"io"
"io/ioutil"
"net"
"os"
"strings"
@@ -37,19 +37,19 @@ func newSignFlags() *signFlags {
sf.caKeyPath = sf.set.String("ca-key", "ca.key", "Optional: path to the signing CA key")
sf.caCertPath = sf.set.String("ca-crt", "ca.crt", "Optional: path to the signing CA cert")
sf.name = sf.set.String("name", "", "Required: name of the cert, usually a hostname")
sf.ip = sf.set.String("ip", "", "Required: ip and network in CIDR notation to assign the cert")
sf.ip = sf.set.String("ip", "", "Required: ipv4 address and network in CIDR notation to assign the cert")
sf.duration = sf.set.Duration("duration", 0, "Optional: how long the cert should be valid for. The default is 1 second before the signing cert expires. Valid time units are seconds: \"s\", minutes: \"m\", hours: \"h\"")
sf.inPubPath = sf.set.String("in-pub", "", "Optional (if out-key not set): path to read a previously generated public key")
sf.outKeyPath = sf.set.String("out-key", "", "Optional (if in-pub not set): path to write the private key to")
sf.outCertPath = sf.set.String("out-crt", "", "Optional: path to write the certificate to")
sf.outQRPath = sf.set.String("out-qr", "", "Optional: output a qr code image (png) of the certificate")
sf.groups = sf.set.String("groups", "", "Optional: comma separated list of groups")
sf.subnets = sf.set.String("subnets", "", "Optional: comma separated list of subnet this cert can serve for")
sf.subnets = sf.set.String("subnets", "", "Optional: comma separated list of ipv4 address and network in CIDR notation. Subnets this cert can serve for")
return &sf
}
func signCert(args []string, out io.Writer, errOut io.Writer) error {
func signCert(args []string, out io.Writer, errOut io.Writer, pr PasswordReader) error {
sf := newSignFlags()
err := sf.set.Parse(args)
if err != nil {
@@ -72,17 +72,46 @@ func signCert(args []string, out io.Writer, errOut io.Writer) error {
return newHelpErrorf("cannot set both -in-pub and -out-key")
}
rawCAKey, err := ioutil.ReadFile(*sf.caKeyPath)
rawCAKey, err := os.ReadFile(*sf.caKeyPath)
if err != nil {
return fmt.Errorf("error while reading ca-key: %s", err)
}
caKey, _, err := cert.UnmarshalEd25519PrivateKey(rawCAKey)
if err != nil {
var curve cert.Curve
var caKey []byte
// naively attempt to decode the private key as though it is not encrypted
caKey, _, curve, err = cert.UnmarshalSigningPrivateKey(rawCAKey)
if err == cert.ErrPrivateKeyEncrypted {
// ask for a passphrase until we get one
var passphrase []byte
for i := 0; i < 5; i++ {
out.Write([]byte("Enter passphrase: "))
passphrase, err = pr.ReadPassword()
if err == ErrNoTerminal {
return fmt.Errorf("ca-key is encrypted and must be decrypted interactively")
} else if err != nil {
return fmt.Errorf("error reading password: %s", err)
}
if len(passphrase) > 0 {
break
}
}
if len(passphrase) == 0 {
return fmt.Errorf("cannot open encrypted ca-key without passphrase")
}
curve, caKey, _, err = cert.DecryptAndUnmarshalSigningPrivateKey(passphrase, rawCAKey)
if err != nil {
return fmt.Errorf("error while parsing encrypted ca-key: %s", err)
}
} else if err != nil {
return fmt.Errorf("error while parsing ca-key: %s", err)
}
rawCACert, err := ioutil.ReadFile(*sf.caCertPath)
rawCACert, err := os.ReadFile(*sf.caCertPath)
if err != nil {
return fmt.Errorf("error while reading ca-crt: %s", err)
}
@@ -92,7 +121,7 @@ func signCert(args []string, out io.Writer, errOut io.Writer) error {
return fmt.Errorf("error while parsing ca-crt: %s", err)
}
if err := caCert.VerifyPrivateKey(caKey); err != nil {
if err := caCert.VerifyPrivateKey(curve, caKey); err != nil {
return fmt.Errorf("refusing to sign, root certificate does not match private key")
}
@@ -114,6 +143,9 @@ func signCert(args []string, out io.Writer, errOut io.Writer) error {
if err != nil {
return newHelpErrorf("invalid ip definition: %s", err)
}
if ip.To4() == nil {
return newHelpErrorf("invalid ip definition: can only be ipv4, have %s", *sf.ip)
}
ipNet.IP = ip
groups := []string{}
@@ -135,6 +167,9 @@ func signCert(args []string, out io.Writer, errOut io.Writer) error {
if err != nil {
return newHelpErrorf("invalid subnet definition: %s", err)
}
if s.IP.To4() == nil {
return newHelpErrorf("invalid subnet definition: can only be ipv4, have %s", rs)
}
subnets = append(subnets, s)
}
}
@@ -142,16 +177,20 @@ func signCert(args []string, out io.Writer, errOut io.Writer) error {
var pub, rawPriv []byte
if *sf.inPubPath != "" {
rawPub, err := ioutil.ReadFile(*sf.inPubPath)
rawPub, err := os.ReadFile(*sf.inPubPath)
if err != nil {
return fmt.Errorf("error while reading in-pub: %s", err)
}
pub, _, err = cert.UnmarshalX25519PublicKey(rawPub)
var pubCurve cert.Curve
pub, _, pubCurve, err = cert.UnmarshalPublicKey(rawPub)
if err != nil {
return fmt.Errorf("error while parsing in-pub: %s", err)
}
if pubCurve != curve {
return fmt.Errorf("curve of in-pub does not match ca")
}
} else {
pub, rawPriv = x25519Keypair()
pub, rawPriv = newKeypair(curve)
}
nc := cert.NebulaCertificate{
@@ -165,6 +204,7 @@ func signCert(args []string, out io.Writer, errOut io.Writer) error {
PublicKey: pub,
IsCA: false,
Issuer: issuer,
Curve: curve,
},
}
@@ -184,7 +224,7 @@ func signCert(args []string, out io.Writer, errOut io.Writer) error {
return fmt.Errorf("refusing to overwrite existing cert: %s", *sf.outCertPath)
}
err = nc.Sign(caKey)
err = nc.Sign(curve, caKey)
if err != nil {
return fmt.Errorf("error while signing: %s", err)
}
@@ -194,7 +234,7 @@ func signCert(args []string, out io.Writer, errOut io.Writer) error {
return fmt.Errorf("refusing to overwrite existing key: %s", *sf.outKeyPath)
}
err = ioutil.WriteFile(*sf.outKeyPath, cert.MarshalX25519PrivateKey(rawPriv), 0600)
err = os.WriteFile(*sf.outKeyPath, cert.MarshalPrivateKey(curve, rawPriv), 0600)
if err != nil {
return fmt.Errorf("error while writing out-key: %s", err)
}
@@ -205,7 +245,7 @@ func signCert(args []string, out io.Writer, errOut io.Writer) error {
return fmt.Errorf("error while marshalling certificate: %s", err)
}
err = ioutil.WriteFile(*sf.outCertPath, b, 0600)
err = os.WriteFile(*sf.outCertPath, b, 0600)
if err != nil {
return fmt.Errorf("error while writing out-crt: %s", err)
}
@@ -216,7 +256,7 @@ func signCert(args []string, out io.Writer, errOut io.Writer) error {
return fmt.Errorf("error while generating qr code: %s", err)
}
err = ioutil.WriteFile(*sf.outQRPath, b, 0600)
err = os.WriteFile(*sf.outQRPath, b, 0600)
if err != nil {
return fmt.Errorf("error while writing out-qr: %s", err)
}
@@ -225,6 +265,17 @@ func signCert(args []string, out io.Writer, errOut io.Writer) error {
return nil
}
func newKeypair(curve cert.Curve) ([]byte, []byte) {
switch curve {
case cert.Curve_CURVE25519:
return x25519Keypair()
case cert.Curve_P256:
return p256Keypair()
default:
return nil, nil
}
}
func x25519Keypair() ([]byte, []byte) {
privkey := make([]byte, 32)
if _, err := io.ReadFull(rand.Reader, privkey); err != nil {
@@ -239,6 +290,15 @@ func x25519Keypair() ([]byte, []byte) {
return pubkey, privkey
}
func p256Keypair() ([]byte, []byte) {
privkey, err := ecdh.P256().GenerateKey(rand.Reader)
if err != nil {
panic(err)
}
pubkey := privkey.PublicKey()
return pubkey.Bytes(), privkey.Bytes()
}
func signSummary() string {
return "sign <flags>: create and sign a certificate"
}
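As context for the raw byte encodings chosen in p256Keypair, here is a hypothetical sanity check (not part of the diff): keys in this format round-trip cleanly through crypto/ecdh, so two peers holding them can derive the same shared secret.

package main

import (
	"bytes"
	"crypto/ecdh"
	"fmt"
)

// p256SharedSecretDemo is a hypothetical check, not code from this change.
func p256SharedSecretDemo() {
	aPub, aPriv := p256Keypair()
	bPub, bPriv := p256Keypair()

	// Reconstruct keys from the raw encodings nebula-cert writes to disk.
	aKey, _ := ecdh.P256().NewPrivateKey(aPriv)
	bKey, _ := ecdh.P256().NewPrivateKey(bPriv)
	aRemote, _ := ecdh.P256().NewPublicKey(aPub)
	bRemote, _ := ecdh.P256().NewPublicKey(bPub)

	s1, _ := aKey.ECDH(bRemote)
	s2, _ := bKey.ECDH(aRemote)
	fmt.Println(bytes.Equal(s1, s2)) // true: both sides derive the same secret
}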


@@ -6,7 +6,7 @@ package main
import (
"bytes"
"crypto/rand"
"io/ioutil"
"errors"
"os"
"testing"
"time"
@@ -39,7 +39,7 @@ func Test_signHelp(t *testing.T) {
" -in-pub string\n"+
" \tOptional (if out-key not set): path to read a previously generated public key\n"+
" -ip string\n"+
" \tRequired: ip and network in CIDR notation to assign the cert\n"+
" \tRequired: ipv4 address and network in CIDR notation to assign the cert\n"+
" -name string\n"+
" \tRequired: name of the cert, usually a hostname\n"+
" -out-crt string\n"+
@@ -49,7 +49,7 @@ func Test_signHelp(t *testing.T) {
" -out-qr string\n"+
" \tOptional: output a qr code image (png) of the certificate\n"+
" -subnets string\n"+
" \tOptional: comma separated list of subnet this cert can serve for\n",
" \tOptional: comma separated list of ipv4 address and network in CIDR notation. Subnets this cert can serve for\n",
ob.String(),
)
}
@@ -58,18 +58,39 @@ func Test_signCert(t *testing.T) {
ob := &bytes.Buffer{}
eb := &bytes.Buffer{}
// required args
nopw := &StubPasswordReader{
password: []byte(""),
err: nil,
}
assertHelpError(t, signCert([]string{"-ca-crt", "./nope", "-ca-key", "./nope", "-ip", "1.1.1.1/24", "-out-key", "nope", "-out-crt", "nope"}, ob, eb), "-name is required")
errpw := &StubPasswordReader{
password: []byte(""),
err: errors.New("stub error"),
}
passphrase := []byte("DO NOT USE THIS KEY")
testpw := &StubPasswordReader{
password: passphrase,
err: nil,
}
// required args
assertHelpError(t, signCert(
[]string{"-ca-crt", "./nope", "-ca-key", "./nope", "-ip", "1.1.1.1/24", "-out-key", "nope", "-out-crt", "nope"}, ob, eb, nopw,
), "-name is required")
assert.Empty(t, ob.String())
assert.Empty(t, eb.String())
assertHelpError(t, signCert([]string{"-ca-crt", "./nope", "-ca-key", "./nope", "-name", "test", "-out-key", "nope", "-out-crt", "nope"}, ob, eb), "-ip is required")
assertHelpError(t, signCert(
[]string{"-ca-crt", "./nope", "-ca-key", "./nope", "-name", "test", "-out-key", "nope", "-out-crt", "nope"}, ob, eb, nopw,
), "-ip is required")
assert.Empty(t, ob.String())
assert.Empty(t, eb.String())
// cannot set -in-pub and -out-key
assertHelpError(t, signCert([]string{"-ca-crt", "./nope", "-ca-key", "./nope", "-name", "test", "-in-pub", "nope", "-ip", "1.1.1.1/24", "-out-crt", "nope", "-out-key", "nope"}, ob, eb), "cannot set both -in-pub and -out-key")
assertHelpError(t, signCert(
[]string{"-ca-crt", "./nope", "-ca-key", "./nope", "-name", "test", "-in-pub", "nope", "-ip", "1.1.1.1/24", "-out-crt", "nope", "-out-key", "nope"}, ob, eb, nopw,
), "cannot set both -in-pub and -out-key")
assert.Empty(t, ob.String())
assert.Empty(t, eb.String())
@@ -77,17 +98,17 @@ func Test_signCert(t *testing.T) {
ob.Reset()
eb.Reset()
args := []string{"-ca-crt", "./nope", "-ca-key", "./nope", "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", "nope", "-out-key", "nope", "-duration", "100m"}
assert.EqualError(t, signCert(args, ob, eb), "error while reading ca-key: open ./nope: "+NoSuchFileError)
assert.EqualError(t, signCert(args, ob, eb, nopw), "error while reading ca-key: open ./nope: "+NoSuchFileError)
// failed to unmarshal key
ob.Reset()
eb.Reset()
caKeyF, err := ioutil.TempFile("", "sign-cert.key")
caKeyF, err := os.CreateTemp("", "sign-cert.key")
assert.Nil(t, err)
defer os.Remove(caKeyF.Name())
args = []string{"-ca-crt", "./nope", "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", "nope", "-out-key", "nope", "-duration", "100m"}
assert.EqualError(t, signCert(args, ob, eb), "error while parsing ca-key: input did not contain a valid PEM encoded block")
assert.EqualError(t, signCert(args, ob, eb, nopw), "error while parsing ca-key: input did not contain a valid PEM encoded block")
assert.Empty(t, ob.String())
assert.Empty(t, eb.String())
@@ -99,19 +120,19 @@ func Test_signCert(t *testing.T) {
// failed to read cert
args = []string{"-ca-crt", "./nope", "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", "nope", "-out-key", "nope", "-duration", "100m"}
assert.EqualError(t, signCert(args, ob, eb), "error while reading ca-crt: open ./nope: "+NoSuchFileError)
assert.EqualError(t, signCert(args, ob, eb, nopw), "error while reading ca-crt: open ./nope: "+NoSuchFileError)
assert.Empty(t, ob.String())
assert.Empty(t, eb.String())
// failed to unmarshal cert
ob.Reset()
eb.Reset()
caCrtF, err := ioutil.TempFile("", "sign-cert.crt")
caCrtF, err := os.CreateTemp("", "sign-cert.crt")
assert.Nil(t, err)
defer os.Remove(caCrtF.Name())
args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", "nope", "-out-key", "nope", "-duration", "100m"}
assert.EqualError(t, signCert(args, ob, eb), "error while parsing ca-crt: input did not contain a valid PEM encoded block")
assert.EqualError(t, signCert(args, ob, eb, nopw), "error while parsing ca-crt: input did not contain a valid PEM encoded block")
assert.Empty(t, ob.String())
assert.Empty(t, eb.String())
@@ -130,19 +151,19 @@ func Test_signCert(t *testing.T) {
// failed to read pub
args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", "nope", "-in-pub", "./nope", "-duration", "100m"}
assert.EqualError(t, signCert(args, ob, eb), "error while reading in-pub: open ./nope: "+NoSuchFileError)
assert.EqualError(t, signCert(args, ob, eb, nopw), "error while reading in-pub: open ./nope: "+NoSuchFileError)
assert.Empty(t, ob.String())
assert.Empty(t, eb.String())
// failed to unmarshal pub
ob.Reset()
eb.Reset()
inPubF, err := ioutil.TempFile("", "in.pub")
inPubF, err := os.CreateTemp("", "in.pub")
assert.Nil(t, err)
defer os.Remove(inPubF.Name())
args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", "nope", "-in-pub", inPubF.Name(), "-duration", "100m"}
assert.EqualError(t, signCert(args, ob, eb), "error while parsing in-pub: input did not contain a valid PEM encoded block")
assert.EqualError(t, signCert(args, ob, eb, nopw), "error while parsing in-pub: input did not contain a valid PEM encoded block")
assert.Empty(t, ob.String())
assert.Empty(t, eb.String())
@@ -156,7 +177,14 @@ func Test_signCert(t *testing.T) {
ob.Reset()
eb.Reset()
args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "a1.1.1.1/24", "-out-crt", "nope", "-out-key", "nope", "-duration", "100m"}
assertHelpError(t, signCert(args, ob, eb), "invalid ip definition: invalid CIDR address: a1.1.1.1/24")
assertHelpError(t, signCert(args, ob, eb, nopw), "invalid ip definition: invalid CIDR address: a1.1.1.1/24")
assert.Empty(t, ob.String())
assert.Empty(t, eb.String())
ob.Reset()
eb.Reset()
args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "100::100/100", "-out-crt", "nope", "-out-key", "nope", "-duration", "100m"}
assertHelpError(t, signCert(args, ob, eb, nopw), "invalid ip definition: can only be ipv4, have 100::100/100")
assert.Empty(t, ob.String())
assert.Empty(t, eb.String())
@@ -164,13 +192,20 @@ func Test_signCert(t *testing.T) {
ob.Reset()
eb.Reset()
args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", "nope", "-out-key", "nope", "-duration", "100m", "-subnets", "a"}
assertHelpError(t, signCert(args, ob, eb), "invalid subnet definition: invalid CIDR address: a")
assertHelpError(t, signCert(args, ob, eb, nopw), "invalid subnet definition: invalid CIDR address: a")
assert.Empty(t, ob.String())
assert.Empty(t, eb.String())
ob.Reset()
eb.Reset()
args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", "nope", "-out-key", "nope", "-duration", "100m", "-subnets", "100::100/100"}
assertHelpError(t, signCert(args, ob, eb, nopw), "invalid subnet definition: can only be ipv4, have 100::100/100")
assert.Empty(t, ob.String())
assert.Empty(t, eb.String())
// mismatched ca key
_, caPriv2, _ := ed25519.GenerateKey(rand.Reader)
caKeyF2, err := ioutil.TempFile("", "sign-cert-2.key")
caKeyF2, err := os.CreateTemp("", "sign-cert-2.key")
assert.Nil(t, err)
defer os.Remove(caKeyF2.Name())
caKeyF2.Write(cert.MarshalEd25519PrivateKey(caPriv2))
@@ -178,7 +213,7 @@ func Test_signCert(t *testing.T) {
ob.Reset()
eb.Reset()
args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF2.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", "nope", "-out-key", "nope", "-duration", "100m", "-subnets", "a"}
assert.EqualError(t, signCert(args, ob, eb), "refusing to sign, root certificate does not match private key")
assert.EqualError(t, signCert(args, ob, eb, nopw), "refusing to sign, root certificate does not match private key")
assert.Empty(t, ob.String())
assert.Empty(t, eb.String())
@@ -186,12 +221,12 @@ func Test_signCert(t *testing.T) {
ob.Reset()
eb.Reset()
args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", "/do/not/write/pleasecrt", "-out-key", "/do/not/write/pleasekey", "-duration", "100m", "-subnets", "10.1.1.1/32"}
assert.EqualError(t, signCert(args, ob, eb), "error while writing out-key: open /do/not/write/pleasekey: "+NoSuchDirError)
assert.EqualError(t, signCert(args, ob, eb, nopw), "error while writing out-key: open /do/not/write/pleasekey: "+NoSuchDirError)
assert.Empty(t, ob.String())
assert.Empty(t, eb.String())
// create temp key file
keyF, err := ioutil.TempFile("", "test.key")
keyF, err := os.CreateTemp("", "test.key")
assert.Nil(t, err)
os.Remove(keyF.Name())
@@ -199,13 +234,13 @@ func Test_signCert(t *testing.T) {
ob.Reset()
eb.Reset()
args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", "/do/not/write/pleasecrt", "-out-key", keyF.Name(), "-duration", "100m", "-subnets", "10.1.1.1/32"}
assert.EqualError(t, signCert(args, ob, eb), "error while writing out-crt: open /do/not/write/pleasecrt: "+NoSuchDirError)
assert.EqualError(t, signCert(args, ob, eb, nopw), "error while writing out-crt: open /do/not/write/pleasecrt: "+NoSuchDirError)
assert.Empty(t, ob.String())
assert.Empty(t, eb.String())
os.Remove(keyF.Name())
// create temp cert file
crtF, err := ioutil.TempFile("", "test.crt")
crtF, err := os.CreateTemp("", "test.crt")
assert.Nil(t, err)
os.Remove(crtF.Name())
@@ -213,18 +248,18 @@ func Test_signCert(t *testing.T) {
ob.Reset()
eb.Reset()
args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", crtF.Name(), "-out-key", keyF.Name(), "-duration", "100m", "-subnets", "10.1.1.1/32, , 10.2.2.2/32 , , ,, 10.5.5.5/32", "-groups", "1,, 2 , ,,,3,4,5"}
assert.Nil(t, signCert(args, ob, eb))
assert.Nil(t, signCert(args, ob, eb, nopw))
assert.Empty(t, ob.String())
assert.Empty(t, eb.String())
// read cert and key files
rb, _ := ioutil.ReadFile(keyF.Name())
rb, _ := os.ReadFile(keyF.Name())
lKey, b, err := cert.UnmarshalX25519PrivateKey(rb)
assert.Len(t, b, 0)
assert.Nil(t, err)
assert.Len(t, lKey, 32)
rb, _ = ioutil.ReadFile(crtF.Name())
rb, _ = os.ReadFile(crtF.Name())
lCrt, b, err := cert.UnmarshalNebulaCertificateFromPEM(rb)
assert.Len(t, b, 0)
assert.Nil(t, err)
@@ -255,12 +290,12 @@ func Test_signCert(t *testing.T) {
ob.Reset()
eb.Reset()
args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", crtF.Name(), "-in-pub", inPubF.Name(), "-duration", "100m", "-groups", "1"}
assert.Nil(t, signCert(args, ob, eb))
assert.Nil(t, signCert(args, ob, eb, nopw))
assert.Empty(t, ob.String())
assert.Empty(t, eb.String())
// read cert file and check pub key matches in-pub
rb, _ = ioutil.ReadFile(crtF.Name())
rb, _ = os.ReadFile(crtF.Name())
lCrt, b, err = cert.UnmarshalNebulaCertificateFromPEM(rb)
assert.Len(t, b, 0)
assert.Nil(t, err)
@@ -270,7 +305,7 @@ func Test_signCert(t *testing.T) {
ob.Reset()
eb.Reset()
args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", crtF.Name(), "-out-key", keyF.Name(), "-duration", "1000m", "-subnets", "10.1.1.1/32, , 10.2.2.2/32 , , ,, 10.5.5.5/32", "-groups", "1,, 2 , ,,,3,4,5"}
assert.EqualError(t, signCert(args, ob, eb), "refusing to sign, root certificate constraints violated: certificate expires after signing certificate")
assert.EqualError(t, signCert(args, ob, eb, nopw), "refusing to sign, root certificate constraints violated: certificate expires after signing certificate")
assert.Empty(t, ob.String())
assert.Empty(t, eb.String())
@@ -278,14 +313,14 @@ func Test_signCert(t *testing.T) {
os.Remove(keyF.Name())
os.Remove(crtF.Name())
args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", crtF.Name(), "-out-key", keyF.Name(), "-duration", "100m", "-subnets", "10.1.1.1/32, , 10.2.2.2/32 , , ,, 10.5.5.5/32", "-groups", "1,, 2 , ,,,3,4,5"}
assert.Nil(t, signCert(args, ob, eb))
assert.Nil(t, signCert(args, ob, eb, nopw))
// test that we won't overwrite existing key file
os.Remove(crtF.Name())
ob.Reset()
eb.Reset()
args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", crtF.Name(), "-out-key", keyF.Name(), "-duration", "100m", "-subnets", "10.1.1.1/32, , 10.2.2.2/32 , , ,, 10.5.5.5/32", "-groups", "1,, 2 , ,,,3,4,5"}
assert.EqualError(t, signCert(args, ob, eb), "refusing to overwrite existing key: "+keyF.Name())
assert.EqualError(t, signCert(args, ob, eb, nopw), "refusing to overwrite existing key: "+keyF.Name())
assert.Empty(t, ob.String())
assert.Empty(t, eb.String())
@@ -293,14 +328,83 @@ func Test_signCert(t *testing.T) {
os.Remove(keyF.Name())
os.Remove(crtF.Name())
args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", crtF.Name(), "-out-key", keyF.Name(), "-duration", "100m", "-subnets", "10.1.1.1/32, , 10.2.2.2/32 , , ,, 10.5.5.5/32", "-groups", "1,, 2 , ,,,3,4,5"}
assert.Nil(t, signCert(args, ob, eb))
assert.Nil(t, signCert(args, ob, eb, nopw))
// test that we won't overwrite existing certificate file
os.Remove(keyF.Name())
ob.Reset()
eb.Reset()
args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", crtF.Name(), "-out-key", keyF.Name(), "-duration", "100m", "-subnets", "10.1.1.1/32, , 10.2.2.2/32 , , ,, 10.5.5.5/32", "-groups", "1,, 2 , ,,,3,4,5"}
assert.EqualError(t, signCert(args, ob, eb), "refusing to overwrite existing cert: "+crtF.Name())
assert.EqualError(t, signCert(args, ob, eb, nopw), "refusing to overwrite existing cert: "+crtF.Name())
assert.Empty(t, ob.String())
assert.Empty(t, eb.String())
// create valid cert/key using encrypted CA key
os.Remove(caKeyF.Name())
os.Remove(caCrtF.Name())
os.Remove(keyF.Name())
os.Remove(crtF.Name())
ob.Reset()
eb.Reset()
caKeyF, err = os.CreateTemp("", "sign-cert.key")
assert.Nil(t, err)
defer os.Remove(caKeyF.Name())
caCrtF, err = os.CreateTemp("", "sign-cert.crt")
assert.Nil(t, err)
defer os.Remove(caCrtF.Name())
// generate the encrypted key
caPub, caPriv, _ = ed25519.GenerateKey(rand.Reader)
kdfParams := cert.NewArgon2Parameters(64*1024, 4, 3)
b, _ = cert.EncryptAndMarshalSigningPrivateKey(cert.Curve_CURVE25519, caPriv, passphrase, kdfParams)
caKeyF.Write(b)
ca = cert.NebulaCertificate{
Details: cert.NebulaCertificateDetails{
Name: "ca",
NotBefore: time.Now(),
NotAfter: time.Now().Add(time.Minute * 200),
PublicKey: caPub,
IsCA: true,
},
}
b, _ = ca.MarshalToPEM()
caCrtF.Write(b)
// test with the proper password
args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", crtF.Name(), "-out-key", keyF.Name(), "-duration", "100m", "-subnets", "10.1.1.1/32, , 10.2.2.2/32 , , ,, 10.5.5.5/32", "-groups", "1,, 2 , ,,,3,4,5"}
assert.Nil(t, signCert(args, ob, eb, testpw))
assert.Equal(t, "Enter passphrase: ", ob.String())
assert.Empty(t, eb.String())
// test with the wrong password
ob.Reset()
eb.Reset()
testpw.password = []byte("invalid password")
args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", crtF.Name(), "-out-key", keyF.Name(), "-duration", "100m", "-subnets", "10.1.1.1/32, , 10.2.2.2/32 , , ,, 10.5.5.5/32", "-groups", "1,, 2 , ,,,3,4,5"}
assert.Error(t, signCert(args, ob, eb, testpw))
assert.Equal(t, "Enter passphrase: ", ob.String())
assert.Empty(t, eb.String())
// test with the user not entering a password
ob.Reset()
eb.Reset()
args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", crtF.Name(), "-out-key", keyF.Name(), "-duration", "100m", "-subnets", "10.1.1.1/32, , 10.2.2.2/32 , , ,, 10.5.5.5/32", "-groups", "1,, 2 , ,,,3,4,5"}
assert.Error(t, signCert(args, ob, eb, nopw))
// normally the user hitting enter on the prompt would add newlines between these
assert.Equal(t, "Enter passphrase: Enter passphrase: Enter passphrase: Enter passphrase: Enter passphrase: ", ob.String())
assert.Empty(t, eb.String())
// test an error condition
ob.Reset()
eb.Reset()
args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", crtF.Name(), "-out-key", keyF.Name(), "-duration", "100m", "-subnets", "10.1.1.1/32, , 10.2.2.2/32 , , ,, 10.5.5.5/32", "-groups", "1,, 2 , ,,,3,4,5"}
assert.Error(t, signCert(args, ob, eb, errpw))
assert.Equal(t, "Enter passphrase: ", ob.String())
assert.Empty(t, eb.String())
}
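The encrypted-CA setup exercised by the test above can be sketched standalone as below. It assumes only the cert package calls that appear in this diff (NewArgon2Parameters, EncryptAndMarshalSigningPrivateKey, Sign, MarshalToPEM); the passphrase, certificate name, and output paths are illustrative.

package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"os"
	"time"

	"github.com/slackhq/nebula/cert"
)

func main() {
	passphrase := []byte("example passphrase")

	caPub, caPriv, err := ed25519.GenerateKey(rand.Reader)
	if err != nil {
		panic(err)
	}

	// KDF parameters mirroring the test above; see cert.NewArgon2Parameters for their meaning.
	kdfParams := cert.NewArgon2Parameters(64*1024, 4, 3)

	// Wrap the signing key with the passphrase so it is stored encrypted on disk.
	keyPEM, err := cert.EncryptAndMarshalSigningPrivateKey(cert.Curve_CURVE25519, caPriv, passphrase, kdfParams)
	if err != nil {
		panic(err)
	}

	// Self-signed CA certificate, matching the shape used in the test.
	ca := cert.NebulaCertificate{
		Details: cert.NebulaCertificateDetails{
			Name:      "example-ca",
			NotBefore: time.Now(),
			NotAfter:  time.Now().Add(200 * time.Minute),
			PublicKey: caPub,
			IsCA:      true,
		},
	}
	if err := ca.Sign(cert.Curve_CURVE25519, caPriv); err != nil {
		panic(err)
	}
	crtPEM, err := ca.MarshalToPEM()
	if err != nil {
		panic(err)
	}

	if err := os.WriteFile("ca.key", keyPEM, 0600); err != nil {
		panic(err)
	}
	if err := os.WriteFile("ca.crt", crtPEM, 0600); err != nil {
		panic(err)
	}
}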


@@ -4,7 +4,6 @@ import (
"flag"
"fmt"
"io"
"io/ioutil"
"os"
"strings"
"time"
@@ -40,7 +39,7 @@ func verify(args []string, out io.Writer, errOut io.Writer) error {
return err
}
rawCACert, err := ioutil.ReadFile(*vf.caPath)
rawCACert, err := os.ReadFile(*vf.caPath)
if err != nil {
return fmt.Errorf("error while reading ca: %s", err)
}
@@ -57,7 +56,7 @@ func verify(args []string, out io.Writer, errOut io.Writer) error {
}
}
rawCert, err := ioutil.ReadFile(*vf.certPath)
rawCert, err := os.ReadFile(*vf.certPath)
if err != nil {
return fmt.Errorf("unable to read crt; %s", err)
}


@@ -3,7 +3,6 @@ package main
import (
"bytes"
"crypto/rand"
"io/ioutil"
"os"
"testing"
"time"
@@ -56,7 +55,7 @@ func Test_verify(t *testing.T) {
// invalid ca at path
ob.Reset()
eb.Reset()
caFile, err := ioutil.TempFile("", "verify-ca")
caFile, err := os.CreateTemp("", "verify-ca")
assert.Nil(t, err)
defer os.Remove(caFile.Name())
@@ -72,12 +71,12 @@ func Test_verify(t *testing.T) {
Details: cert.NebulaCertificateDetails{
Name: "test-ca",
NotBefore: time.Now().Add(time.Hour * -1),
NotAfter: time.Now().Add(time.Hour),
NotAfter: time.Now().Add(time.Hour * 2),
PublicKey: caPub,
IsCA: true,
},
}
ca.Sign(caPriv)
ca.Sign(cert.Curve_CURVE25519, caPriv)
b, _ := ca.MarshalToPEM()
caFile.Truncate(0)
caFile.Seek(0, 0)
@@ -92,7 +91,7 @@ func Test_verify(t *testing.T) {
// invalid crt at path
ob.Reset()
eb.Reset()
certFile, err := ioutil.TempFile("", "verify-cert")
certFile, err := os.CreateTemp("", "verify-cert")
assert.Nil(t, err)
defer os.Remove(certFile.Name())
@@ -117,7 +116,7 @@ func Test_verify(t *testing.T) {
},
}
crt.Sign(badPriv)
crt.Sign(cert.Curve_CURVE25519, badPriv)
b, _ = crt.MarshalToPEM()
certFile.Truncate(0)
certFile.Seek(0, 0)
@@ -129,7 +128,7 @@ func Test_verify(t *testing.T) {
assert.EqualError(t, err, "certificate signature did not match")
// verified cert at path
crt.Sign(caPriv)
crt.Sign(cert.Curve_CURVE25519, caPriv)
b, _ = crt.MarshalToPEM()
certFile.Truncate(0)
certFile.Seek(0, 0)


@@ -8,11 +8,12 @@ import (
"github.com/sirupsen/logrus"
"github.com/slackhq/nebula"
"github.com/slackhq/nebula/config"
"github.com/slackhq/nebula/util"
)
// A version string that can be set with
//
// -ldflags "-X main.Build=SOMEVERSION"
// -ldflags "-X main.Build=SOMEVERSION"
//
// at compile-time.
var Build string
@@ -58,13 +59,8 @@ func main() {
}
ctrl, err := nebula.Main(c, *configTest, Build, l, nil)
switch v := err.(type) {
case nebula.ContextualError:
v.Log(l)
os.Exit(1)
case error:
l.WithError(err).Error("Failed to start")
if err != nil {
util.LogWithContextIfNeeded("Failed to start", err, l)
os.Exit(1)
}


@@ -49,6 +49,14 @@ func (p *program) Stop(s service.Service) error {
return nil
}
func fileExists(filename string) bool {
_, err := os.Stat(filename)
if os.IsNotExist(err) {
return false
}
return true
}
func doService(configPath *string, configTest *bool, build string, serviceFlag *string) {
if *configPath == "" {
ex, err := os.Executable()
@@ -56,6 +64,9 @@ func doService(configPath *string, configTest *bool, build string, serviceFlag *
panic(err)
}
*configPath = filepath.Dir(ex) + "/config.yaml"
if !fileExists(*configPath) {
*configPath = filepath.Dir(ex) + "/config.yml"
}
}
svcConfig := &service.Config{


@@ -8,11 +8,12 @@ import (
"github.com/sirupsen/logrus"
"github.com/slackhq/nebula"
"github.com/slackhq/nebula/config"
"github.com/slackhq/nebula/util"
)
// A version string that can be set with
//
// -ldflags "-X main.Build=SOMEVERSION"
// -ldflags "-X main.Build=SOMEVERSION"
//
// at compile-time.
var Build string
@@ -52,18 +53,14 @@ func main() {
}
ctrl, err := nebula.Main(c, *configTest, Build, l, nil)
switch v := err.(type) {
case nebula.ContextualError:
v.Log(l)
os.Exit(1)
case error:
l.WithError(err).Error("Failed to start")
if err != nil {
util.LogWithContextIfNeeded("Failed to start", err, l)
os.Exit(1)
}
if !*configTest {
ctrl.Start()
notifyReady(l)
ctrl.ShutdownBlock()
}


@@ -0,0 +1,42 @@
package main
import (
"net"
"os"
"time"
"github.com/sirupsen/logrus"
)
// SdNotifyReady tells systemd the service is ready and dependent services can now be started
// https://www.freedesktop.org/software/systemd/man/sd_notify.html
// https://www.freedesktop.org/software/systemd/man/systemd.service.html
const SdNotifyReady = "READY=1"
func notifyReady(l *logrus.Logger) {
sockName := os.Getenv("NOTIFY_SOCKET")
if sockName == "" {
l.Debugln("NOTIFY_SOCKET systemd env var not set, not sending ready signal")
return
}
conn, err := net.DialTimeout("unixgram", sockName, time.Second)
if err != nil {
l.WithError(err).Error("failed to connect to systemd notification socket")
return
}
defer conn.Close()
err = conn.SetWriteDeadline(time.Now().Add(time.Second))
if err != nil {
l.WithError(err).Error("failed to set the write deadline for the systemd notification socket")
return
}
if _, err = conn.Write([]byte(SdNotifyReady)); err != nil {
l.WithError(err).Error("failed to signal the systemd notification socket")
return
}
l.Debugln("notified systemd the service is ready")
}
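The sd_notify exchange above can be observed without systemd by standing up a throwaway unixgram socket. The sketch below is a minimal approximation of both sides of the handshake; the socket path is illustrative and this only works on Unix-like systems.

package main

import (
	"fmt"
	"net"
	"os"
	"path/filepath"
	"time"
)

func main() {
	// Stand-in for systemd: a throwaway unixgram socket we can read from.
	sockPath := filepath.Join(os.TempDir(), "notify-test.sock")
	os.Remove(sockPath) // clean up any leftover socket file
	defer os.Remove(sockPath)

	server, err := net.ListenUnixgram("unixgram", &net.UnixAddr{Name: sockPath, Net: "unixgram"})
	if err != nil {
		panic(err)
	}
	defer server.Close()

	if err := os.Setenv("NOTIFY_SOCKET", sockPath); err != nil {
		panic(err)
	}

	// Client side, mirroring notifyReady: dial the socket named by NOTIFY_SOCKET,
	// set a write deadline, and send READY=1.
	conn, err := net.DialTimeout("unixgram", os.Getenv("NOTIFY_SOCKET"), time.Second)
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	if err := conn.SetWriteDeadline(time.Now().Add(time.Second)); err != nil {
		panic(err)
	}
	if _, err := conn.Write([]byte("READY=1")); err != nil {
		panic(err)
	}

	// Read back what the "systemd" side received.
	if err := server.SetReadDeadline(time.Now().Add(time.Second)); err != nil {
		panic(err)
	}
	buf := make([]byte, 64)
	n, _, err := server.ReadFromUnix(buf)
	if err != nil {
		panic(err)
	}
	fmt.Printf("received: %s\n", string(buf[:n]))
}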


@@ -0,0 +1,10 @@
//go:build !linux
// +build !linux
package main
import "github.com/sirupsen/logrus"
func notifyReady(_ *logrus.Logger) {
// No init service to notify
}


@@ -4,17 +4,18 @@ import (
"context"
"errors"
"fmt"
"io/ioutil"
"math"
"os"
"os/signal"
"path/filepath"
"sort"
"strconv"
"strings"
"sync"
"syscall"
"time"
"github.com/imdario/mergo"
"dario.cat/mergo"
"github.com/sirupsen/logrus"
"gopkg.in/yaml.v2"
)
@@ -26,6 +27,7 @@ type C struct {
oldSettings map[interface{}]interface{}
callbacks []func(*C)
l *logrus.Logger
reloadLock sync.Mutex
}
func NewC(l *logrus.Logger) *C {
@@ -74,6 +76,11 @@ func (c *C) RegisterReloadCallback(f func(*C)) {
c.callbacks = append(c.callbacks, f)
}
// InitialLoad returns true if this is the first load of the config, and ReloadConfig has not been called yet.
func (c *C) InitialLoad() bool {
return c.oldSettings == nil
}
// HasChanged checks if the underlying structure of the provided key has changed after a config reload. The value of
// k in both the old and new settings will be serialized, and the result of the string comparison is returned.
// If k is an empty string the entire config is tested.
@@ -114,6 +121,10 @@ func (c *C) HasChanged(k string) bool {
// CatchHUP will listen for the HUP signal in a go routine and reload all configs found in the
// original path provided to Load. The old settings are shallow copied for change detection after the reload.
func (c *C) CatchHUP(ctx context.Context) {
if c.path == "" {
return
}
ch := make(chan os.Signal, 1)
signal.Notify(ch, syscall.SIGHUP)
@@ -133,6 +144,9 @@ func (c *C) CatchHUP(ctx context.Context) {
}
func (c *C) ReloadConfig() {
c.reloadLock.Lock()
defer c.reloadLock.Unlock()
c.oldSettings = make(map[interface{}]interface{})
for k, v := range c.Settings {
c.oldSettings[k] = v
@@ -149,6 +163,27 @@ func (c *C) ReloadConfig() {
}
}
func (c *C) ReloadConfigString(raw string) error {
c.reloadLock.Lock()
defer c.reloadLock.Unlock()
c.oldSettings = make(map[interface{}]interface{})
for k, v := range c.Settings {
c.oldSettings[k] = v
}
err := c.LoadString(raw)
if err != nil {
return err
}
for _, v := range c.callbacks {
v(c)
}
return nil
}
// GetString will get the string for k or return the default d if not found or invalid
func (c *C) GetString(k, d string) string {
r := c.Get(k)
@@ -205,6 +240,15 @@ func (c *C) GetInt(k string, d int) int {
return v
}
// GetUint32 will get the uint32 for k or return the default d if not found or invalid
func (c *C) GetUint32(k string, d uint32) uint32 {
r := c.GetInt(k, int(d))
if uint64(r) > uint64(math.MaxUint32) {
return d
}
return uint32(r)
}
// GetBool will get the bool for k or return the default d if not found or invalid
func (c *C) GetBool(k string, d bool) bool {
r := strings.ToLower(c.GetString(k, fmt.Sprintf("%v", d)))
@@ -317,7 +361,7 @@ func (c *C) parse() error {
var m map[interface{}]interface{}
for _, path := range c.files {
b, err := ioutil.ReadFile(path)
b, err := os.ReadFile(path)
if err != nil {
return err
}
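A brief sketch of how a caller might exercise the config additions above (InitialLoad, GetUint32, ReloadConfigString and the reload lock). It assumes the exported config.C methods shown in this diff; the YAML keys and values are illustrative.

package main

import (
	"fmt"

	"github.com/sirupsen/logrus"
	"github.com/slackhq/nebula/config"
)

func main() {
	l := logrus.New()
	c := config.NewC(l)

	c.RegisterReloadCallback(func(c *config.C) {
		fmt.Println("reloaded, listen.port changed:", c.HasChanged("listen.port"))
	})

	if err := c.LoadString("listen:\n  port: 4242\n"); err != nil {
		panic(err)
	}
	fmt.Println("initial load:", c.InitialLoad())        // true until a reload has happened
	fmt.Println("port:", c.GetUint32("listen.port", 0))  // values that overflow uint32 fall back to the default

	// ReloadConfigString swaps in new settings under reloadLock and fires the callbacks.
	if err := c.ReloadConfigString("listen:\n  port: 9999\n"); err != nil {
		panic(err)
	}
	fmt.Println("port after reload:", c.GetUint32("listen.port", 0))
}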


@@ -1,22 +1,24 @@
package config
import (
"io/ioutil"
"os"
"path/filepath"
"testing"
"time"
"github.com/slackhq/nebula/util"
"dario.cat/mergo"
"github.com/slackhq/nebula/test"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"gopkg.in/yaml.v2"
)
func TestConfig_Load(t *testing.T) {
l := util.NewTestLogger()
dir, err := ioutil.TempDir("", "config-test")
l := test.NewLogger()
dir, err := os.MkdirTemp("", "config-test")
// invalid yaml
c := NewC(l)
ioutil.WriteFile(filepath.Join(dir, "01.yaml"), []byte(" invalid yaml"), 0644)
os.WriteFile(filepath.Join(dir, "01.yaml"), []byte(" invalid yaml"), 0644)
assert.EqualError(t, c.Load(dir), "yaml: unmarshal errors:\n line 1: cannot unmarshal !!str `invalid...` into map[interface {}]interface {}")
// simple multi config merge
@@ -26,8 +28,8 @@ func TestConfig_Load(t *testing.T) {
assert.Nil(t, err)
ioutil.WriteFile(filepath.Join(dir, "01.yaml"), []byte("outer:\n inner: hi"), 0644)
ioutil.WriteFile(filepath.Join(dir, "02.yml"), []byte("outer:\n inner: override\nnew: hi"), 0644)
os.WriteFile(filepath.Join(dir, "01.yaml"), []byte("outer:\n inner: hi"), 0644)
os.WriteFile(filepath.Join(dir, "02.yml"), []byte("outer:\n inner: override\nnew: hi"), 0644)
assert.Nil(t, c.Load(dir))
expected := map[interface{}]interface{}{
"outer": map[interface{}]interface{}{
@@ -42,7 +44,7 @@ func TestConfig_Load(t *testing.T) {
}
func TestConfig_Get(t *testing.T) {
l := util.NewTestLogger()
l := test.NewLogger()
// test simple type
c := NewC(l)
c.Settings["firewall"] = map[interface{}]interface{}{"outbound": "hi"}
@@ -58,14 +60,14 @@ func TestConfig_Get(t *testing.T) {
}
func TestConfig_GetStringSlice(t *testing.T) {
l := util.NewTestLogger()
l := test.NewLogger()
c := NewC(l)
c.Settings["slice"] = []interface{}{"one", "two"}
assert.Equal(t, []string{"one", "two"}, c.GetStringSlice("slice", []string{}))
}
func TestConfig_GetBool(t *testing.T) {
l := util.NewTestLogger()
l := test.NewLogger()
c := NewC(l)
c.Settings["bool"] = true
assert.Equal(t, true, c.GetBool("bool", false))
@@ -93,7 +95,7 @@ func TestConfig_GetBool(t *testing.T) {
}
func TestConfig_HasChanged(t *testing.T) {
l := util.NewTestLogger()
l := test.NewLogger()
// No reload has occurred, return false
c := NewC(l)
c.Settings["test"] = "hi"
@@ -115,11 +117,11 @@ func TestConfig_HasChanged(t *testing.T) {
}
func TestConfig_ReloadConfig(t *testing.T) {
l := util.NewTestLogger()
l := test.NewLogger()
done := make(chan bool, 1)
dir, err := ioutil.TempDir("", "config-test")
dir, err := os.MkdirTemp("", "config-test")
assert.Nil(t, err)
ioutil.WriteFile(filepath.Join(dir, "01.yaml"), []byte("outer:\n inner: hi"), 0644)
os.WriteFile(filepath.Join(dir, "01.yaml"), []byte("outer:\n inner: hi"), 0644)
c := NewC(l)
assert.Nil(t, c.Load(dir))
@@ -128,7 +130,7 @@ func TestConfig_ReloadConfig(t *testing.T) {
assert.False(t, c.HasChanged("outer"))
assert.False(t, c.HasChanged(""))
ioutil.WriteFile(filepath.Join(dir, "01.yaml"), []byte("outer:\n inner: ho"), 0644)
os.WriteFile(filepath.Join(dir, "01.yaml"), []byte("outer:\n inner: ho"), 0644)
c.RegisterReloadCallback(func(c *C) {
done <- true
@@ -147,3 +149,77 @@ func TestConfig_ReloadConfig(t *testing.T) {
}
}
// Ensure mergo merges are done the way we expect.
// This is needed to test for potential regressions, like:
// - https://github.com/imdario/mergo/issues/187
func TestConfig_MergoMerge(t *testing.T) {
configs := [][]byte{
[]byte(`
listen:
port: 1234
`),
[]byte(`
firewall:
inbound:
- port: 443
proto: tcp
groups:
- server
- port: 443
proto: tcp
groups:
- webapp
`),
[]byte(`
listen:
host: 0.0.0.0
port: 4242
firewall:
outbound:
- port: any
proto: any
host: any
inbound:
- port: any
proto: icmp
host: any
`),
}
var m map[any]any
// merge the same way config.parse() merges
for _, b := range configs {
var nm map[any]any
err := yaml.Unmarshal(b, &nm)
require.NoError(t, err)
// We need to use WithAppendSlice so that firewall rules in separate
// files are appended together
err = mergo.Merge(&nm, m, mergo.WithAppendSlice)
m = nm
require.NoError(t, err)
}
t.Logf("Merged Config: %#v", m)
mYaml, err := yaml.Marshal(m)
require.NoError(t, err)
t.Logf("Merged Config as YAML:\n%s", mYaml)
// If a bug is present, some items might be replaced instead of merged like we expect
expected := map[any]any{
"firewall": map[any]any{
"inbound": []any{
map[any]any{"host": "any", "port": "any", "proto": "icmp"},
map[any]any{"groups": []any{"server"}, "port": 443, "proto": "tcp"},
map[any]any{"groups": []any{"webapp"}, "port": 443, "proto": "tcp"}},
"outbound": []any{
map[any]any{"host": "any", "port": "any", "proto": "any"}}},
"listen": map[any]any{
"host": "0.0.0.0",
"port": 4242,
},
}
assert.Equal(t, expected, m)
}


@@ -1,150 +1,155 @@
package nebula
import (
"bytes"
"context"
"encoding/binary"
"net/netip"
"sync"
"time"
"github.com/rcrowley/go-metrics"
"github.com/sirupsen/logrus"
"github.com/slackhq/nebula/cert"
"github.com/slackhq/nebula/header"
"github.com/slackhq/nebula/iputil"
)
// TODO: incount and outcount are intended as a shortcut to locking the mutexes for every single packet
// and something like every 10 packets we could lock, send 10, then unlock for a moment
type trafficDecision int
const (
doNothing trafficDecision = 0
deleteTunnel trafficDecision = 1 // delete the hostinfo on our side, do not notify the remote
closeTunnel trafficDecision = 2 // delete the hostinfo and notify the remote
swapPrimary trafficDecision = 3
migrateRelays trafficDecision = 4
tryRehandshake trafficDecision = 5
sendTestPacket trafficDecision = 6
)
type connectionManager struct {
hostMap *HostMap
in map[iputil.VpnIp]struct{}
inLock *sync.RWMutex
inCount int
out map[iputil.VpnIp]struct{}
outLock *sync.RWMutex
outCount int
TrafficTimer *SystemTimerWheel
intf *Interface
in map[uint32]struct{}
inLock *sync.RWMutex
pendingDeletion map[iputil.VpnIp]int
pendingDeletionLock *sync.RWMutex
pendingDeletionTimer *SystemTimerWheel
out map[uint32]struct{}
outLock *sync.RWMutex
checkInterval int
pendingDeletionInterval int
// relayUsed holds which relay local indexes are in use
relayUsed map[uint32]struct{}
relayUsedLock *sync.RWMutex
hostMap *HostMap
trafficTimer *LockingTimerWheel[uint32]
intf *Interface
pendingDeletion map[uint32]struct{}
punchy *Punchy
checkInterval time.Duration
pendingDeletionInterval time.Duration
metricsTxPunchy metrics.Counter
l *logrus.Logger
// I wanted to call one matLock
}
func newConnectionManager(ctx context.Context, l *logrus.Logger, intf *Interface, checkInterval, pendingDeletionInterval int) *connectionManager {
func newConnectionManager(ctx context.Context, l *logrus.Logger, intf *Interface, checkInterval, pendingDeletionInterval time.Duration, punchy *Punchy) *connectionManager {
var max time.Duration
if checkInterval < pendingDeletionInterval {
max = pendingDeletionInterval
} else {
max = checkInterval
}
nc := &connectionManager{
hostMap: intf.hostMap,
in: make(map[iputil.VpnIp]struct{}),
in: make(map[uint32]struct{}),
inLock: &sync.RWMutex{},
inCount: 0,
out: make(map[iputil.VpnIp]struct{}),
out: make(map[uint32]struct{}),
outLock: &sync.RWMutex{},
outCount: 0,
TrafficTimer: NewSystemTimerWheel(time.Millisecond*500, time.Second*60),
relayUsed: make(map[uint32]struct{}),
relayUsedLock: &sync.RWMutex{},
trafficTimer: NewLockingTimerWheel[uint32](time.Millisecond*500, max),
intf: intf,
pendingDeletion: make(map[iputil.VpnIp]int),
pendingDeletionLock: &sync.RWMutex{},
pendingDeletionTimer: NewSystemTimerWheel(time.Millisecond*500, time.Second*60),
pendingDeletion: make(map[uint32]struct{}),
checkInterval: checkInterval,
pendingDeletionInterval: pendingDeletionInterval,
punchy: punchy,
metricsTxPunchy: metrics.GetOrRegisterCounter("messages.tx.punchy", nil),
l: l,
}
nc.Start(ctx)
return nc
}
func (n *connectionManager) In(ip iputil.VpnIp) {
func (n *connectionManager) In(localIndex uint32) {
n.inLock.RLock()
// If this already exists, return
if _, ok := n.in[ip]; ok {
if _, ok := n.in[localIndex]; ok {
n.inLock.RUnlock()
return
}
n.inLock.RUnlock()
n.inLock.Lock()
n.in[ip] = struct{}{}
n.in[localIndex] = struct{}{}
n.inLock.Unlock()
}
func (n *connectionManager) Out(ip iputil.VpnIp) {
func (n *connectionManager) Out(localIndex uint32) {
n.outLock.RLock()
// If this already exists, return
if _, ok := n.out[ip]; ok {
if _, ok := n.out[localIndex]; ok {
n.outLock.RUnlock()
return
}
n.outLock.RUnlock()
n.outLock.Lock()
// double check since we dropped the lock temporarily
if _, ok := n.out[ip]; ok {
n.out[localIndex] = struct{}{}
n.outLock.Unlock()
}
func (n *connectionManager) RelayUsed(localIndex uint32) {
n.relayUsedLock.RLock()
// If this already exists, return
if _, ok := n.relayUsed[localIndex]; ok {
n.relayUsedLock.RUnlock()
return
}
n.relayUsedLock.RUnlock()
n.relayUsedLock.Lock()
n.relayUsed[localIndex] = struct{}{}
n.relayUsedLock.Unlock()
}
// getAndResetTrafficCheck returns whether there was any inbound or outbound traffic within the last tick and
// resets the state for this local index
func (n *connectionManager) getAndResetTrafficCheck(localIndex uint32) (bool, bool) {
n.inLock.Lock()
n.outLock.Lock()
_, in := n.in[localIndex]
_, out := n.out[localIndex]
delete(n.in, localIndex)
delete(n.out, localIndex)
n.inLock.Unlock()
n.outLock.Unlock()
return in, out
}
func (n *connectionManager) AddTrafficWatch(localIndex uint32) {
// Use a write lock directly because it should be incredibly rare that we are ever already tracking this index
n.outLock.Lock()
if _, ok := n.out[localIndex]; ok {
n.outLock.Unlock()
return
}
n.out[ip] = struct{}{}
n.AddTrafficWatch(ip, n.checkInterval)
n.out[localIndex] = struct{}{}
n.trafficTimer.Add(localIndex, n.checkInterval)
n.outLock.Unlock()
}
func (n *connectionManager) CheckIn(vpnIp iputil.VpnIp) bool {
n.inLock.RLock()
if _, ok := n.in[vpnIp]; ok {
n.inLock.RUnlock()
return true
}
n.inLock.RUnlock()
return false
}
func (n *connectionManager) ClearIP(ip iputil.VpnIp) {
n.inLock.Lock()
n.outLock.Lock()
delete(n.in, ip)
delete(n.out, ip)
n.inLock.Unlock()
n.outLock.Unlock()
}
func (n *connectionManager) ClearPendingDeletion(ip iputil.VpnIp) {
n.pendingDeletionLock.Lock()
delete(n.pendingDeletion, ip)
n.pendingDeletionLock.Unlock()
}
func (n *connectionManager) AddPendingDeletion(ip iputil.VpnIp) {
n.pendingDeletionLock.Lock()
if _, ok := n.pendingDeletion[ip]; ok {
n.pendingDeletion[ip] += 1
} else {
n.pendingDeletion[ip] = 0
}
n.pendingDeletionTimer.Add(ip, time.Second*time.Duration(n.pendingDeletionInterval))
n.pendingDeletionLock.Unlock()
}
func (n *connectionManager) checkPendingDeletion(ip iputil.VpnIp) bool {
n.pendingDeletionLock.RLock()
if _, ok := n.pendingDeletion[ip]; ok {
n.pendingDeletionLock.RUnlock()
return true
}
n.pendingDeletionLock.RUnlock()
return false
}
func (n *connectionManager) AddTrafficWatch(vpnIp iputil.VpnIp, seconds int) {
n.TrafficTimer.Add(vpnIp, time.Second*time.Duration(seconds))
}
func (n *connectionManager) Start(ctx context.Context) {
go n.Run(ctx)
}
func (n *connectionManager) Run(ctx context.Context) {
//TODO: this tick should be based on the min wheel tick? Check firewall
clockSource := time.NewTicker(500 * time.Millisecond)
defer clockSource.Stop()
@@ -156,160 +161,326 @@ func (n *connectionManager) Run(ctx context.Context) {
select {
case <-ctx.Done():
return
case now := <-clockSource.C:
n.HandleMonitorTick(now, p, nb, out)
n.HandleDeletionTick(now)
n.trafficTimer.Advance(now)
for {
localIndex, has := n.trafficTimer.Purge()
if !has {
break
}
n.doTrafficCheck(localIndex, p, nb, out, now)
}
}
}
}
func (n *connectionManager) HandleMonitorTick(now time.Time, p, nb, out []byte) {
n.TrafficTimer.advance(now)
for {
ep := n.TrafficTimer.Purge()
if ep == nil {
break
func (n *connectionManager) doTrafficCheck(localIndex uint32, p, nb, out []byte, now time.Time) {
decision, hostinfo, primary := n.makeTrafficDecision(localIndex, now)
switch decision {
case deleteTunnel:
if n.hostMap.DeleteHostInfo(hostinfo) {
// Only clearing the lighthouse cache if this is the last hostinfo for this vpn ip in the hostmap
n.intf.lightHouse.DeleteVpnIp(hostinfo.vpnIp)
}
vpnIp := ep.(iputil.VpnIp)
case closeTunnel:
n.intf.sendCloseTunnel(hostinfo)
n.intf.closeTunnel(hostinfo)
// Check for traffic coming back in from this host.
traf := n.CheckIn(vpnIp)
case swapPrimary:
n.swapPrimary(hostinfo, primary)
hostinfo, err := n.hostMap.QueryVpnIp(vpnIp)
if err != nil {
n.l.Debugf("Not found in hostmap: %s", vpnIp)
case migrateRelays:
n.migrateRelayUsed(hostinfo, primary)
if !n.intf.disconnectInvalid {
n.ClearIP(vpnIp)
n.ClearPendingDeletion(vpnIp)
case tryRehandshake:
n.tryRehandshake(hostinfo)
case sendTestPacket:
n.intf.SendMessageToHostInfo(header.Test, header.TestRequest, hostinfo, p, nb, out)
}
n.resetRelayTrafficCheck(hostinfo)
}
func (n *connectionManager) resetRelayTrafficCheck(hostinfo *HostInfo) {
if hostinfo != nil {
n.relayUsedLock.Lock()
defer n.relayUsedLock.Unlock()
// No need to migrate any relays, delete usage info now.
for _, idx := range hostinfo.relayState.CopyRelayForIdxs() {
delete(n.relayUsed, idx)
}
}
}
func (n *connectionManager) migrateRelayUsed(oldhostinfo, newhostinfo *HostInfo) {
relayFor := oldhostinfo.relayState.CopyAllRelayFor()
for _, r := range relayFor {
existing, ok := newhostinfo.relayState.QueryRelayForByIp(r.PeerIp)
var index uint32
var relayFrom netip.Addr
var relayTo netip.Addr
switch {
case ok && existing.State == Established:
// This relay already exists in newhostinfo, so do nothing.
continue
case ok && existing.State == Requested:
// The relay exists in a Requested state; re-send the request
index = existing.LocalIndex
switch r.Type {
case TerminalType:
relayFrom = n.intf.myVpnNet.Addr()
relayTo = existing.PeerIp
case ForwardingType:
relayFrom = existing.PeerIp
relayTo = newhostinfo.vpnIp
default:
// should never happen
}
case !ok:
n.relayUsedLock.RLock()
if _, relayUsed := n.relayUsed[r.LocalIndex]; !relayUsed {
// The relay hasn't been used; don't migrate it.
n.relayUsedLock.RUnlock()
continue
}
}
if n.handleInvalidCertificate(now, vpnIp, hostinfo) {
continue
}
// If we saw an incoming packets from this ip and peer's certificate is not
// expired, just ignore.
if traf {
if n.l.Level >= logrus.DebugLevel {
n.l.WithField("vpnIp", vpnIp).
WithField("tunnelCheck", m{"state": "alive", "method": "passive"}).
Debug("Tunnel status")
n.relayUsedLock.RUnlock()
// The relay doesn't exist at all; create some relay state and send the request.
var err error
index, err = AddRelay(n.l, newhostinfo, n.hostMap, r.PeerIp, nil, r.Type, Requested)
if err != nil {
n.l.WithError(err).Error("failed to migrate relay to new hostinfo")
continue
}
switch r.Type {
case TerminalType:
relayFrom = n.intf.myVpnNet.Addr()
relayTo = r.PeerIp
case ForwardingType:
relayFrom = r.PeerIp
relayTo = newhostinfo.vpnIp
default:
// should never happen
}
n.ClearIP(vpnIp)
n.ClearPendingDeletion(vpnIp)
continue
}
hostinfo.logger(n.l).
WithField("tunnelCheck", m{"state": "testing", "method": "active"}).
Debug("Tunnel status")
if hostinfo != nil && hostinfo.ConnectionState != nil {
// Send a test packet to trigger an authenticated tunnel test, this should suss out any lingering tunnel issues
n.intf.SendMessageToVpnIp(header.Test, header.TestRequest, vpnIp, p, nb, out)
//TODO: IPV6-WORK
relayFromB := relayFrom.As4()
relayToB := relayTo.As4()
// Send a CreateRelayRequest to the peer.
req := NebulaControl{
Type: NebulaControl_CreateRelayRequest,
InitiatorRelayIndex: index,
RelayFromIp: binary.BigEndian.Uint32(relayFromB[:]),
RelayToIp: binary.BigEndian.Uint32(relayToB[:]),
}
msg, err := req.Marshal()
if err != nil {
n.l.WithError(err).Error("failed to marshal Control message to migrate relay")
} else {
hostinfo.logger(n.l).Debugf("Hostinfo sadness: %s", vpnIp)
n.intf.SendMessageToHostInfo(header.Control, 0, newhostinfo, msg, make([]byte, 12), make([]byte, mtu))
n.l.WithFields(logrus.Fields{
"relayFrom": req.RelayFromIp,
"relayTo": req.RelayToIp,
"initiatorRelayIndex": req.InitiatorRelayIndex,
"responderRelayIndex": req.ResponderRelayIndex,
"vpnIp": newhostinfo.vpnIp}).
Info("send CreateRelayRequest")
}
n.AddPendingDeletion(vpnIp)
}
}
func (n *connectionManager) HandleDeletionTick(now time.Time) {
n.pendingDeletionTimer.advance(now)
for {
ep := n.pendingDeletionTimer.Purge()
if ep == nil {
break
}
func (n *connectionManager) makeTrafficDecision(localIndex uint32, now time.Time) (trafficDecision, *HostInfo, *HostInfo) {
n.hostMap.RLock()
defer n.hostMap.RUnlock()
vpnIp := ep.(iputil.VpnIp)
hostinfo := n.hostMap.Indexes[localIndex]
if hostinfo == nil {
n.l.WithField("localIndex", localIndex).Debugf("Not found in hostmap")
delete(n.pendingDeletion, localIndex)
return doNothing, nil, nil
}
hostinfo, err := n.hostMap.QueryVpnIp(vpnIp)
if err != nil {
n.l.Debugf("Not found in hostmap: %s", vpnIp)
if n.isInvalidCertificate(now, hostinfo) {
delete(n.pendingDeletion, hostinfo.localIndexId)
return closeTunnel, hostinfo, nil
}
if !n.intf.disconnectInvalid {
n.ClearIP(vpnIp)
n.ClearPendingDeletion(vpnIp)
continue
}
}
primary := n.hostMap.Hosts[hostinfo.vpnIp]
mainHostInfo := true
if primary != nil && primary != hostinfo {
mainHostInfo = false
}
if n.handleInvalidCertificate(now, vpnIp, hostinfo) {
continue
}
// Check for traffic on this hostinfo
inTraffic, outTraffic := n.getAndResetTrafficCheck(localIndex)
// If we saw an incoming packets from this ip and peer's certificate is not
// expired, just ignore.
traf := n.CheckIn(vpnIp)
if traf {
n.l.WithField("vpnIp", vpnIp).
WithField("tunnelCheck", m{"state": "alive", "method": "active"}).
Debug("Tunnel status")
n.ClearIP(vpnIp)
n.ClearPendingDeletion(vpnIp)
continue
}
// If it comes around on deletion wheel and hasn't resolved itself, delete
if n.checkPendingDeletion(vpnIp) {
cn := ""
if hostinfo.ConnectionState != nil && hostinfo.ConnectionState.peerCert != nil {
cn = hostinfo.ConnectionState.peerCert.Details.Name
}
// A hostinfo is determined alive if there is incoming traffic
if inTraffic {
decision := doNothing
if n.l.Level >= logrus.DebugLevel {
hostinfo.logger(n.l).
WithField("tunnelCheck", m{"state": "dead", "method": "active"}).
WithField("certName", cn).
Info("Tunnel status")
WithField("tunnelCheck", m{"state": "alive", "method": "passive"}).
Debug("Tunnel status")
}
delete(n.pendingDeletion, hostinfo.localIndexId)
if mainHostInfo {
decision = tryRehandshake
n.ClearIP(vpnIp)
n.ClearPendingDeletion(vpnIp)
// TODO: This is only here to let tests work. Should do proper mocking
if n.intf.lightHouse != nil {
n.intf.lightHouse.DeleteVpnIp(vpnIp)
}
n.hostMap.DeleteHostInfo(hostinfo)
} else {
n.ClearIP(vpnIp)
n.ClearPendingDeletion(vpnIp)
if n.shouldSwapPrimary(hostinfo, primary) {
decision = swapPrimary
} else {
// migrate the relays to the primary, if in use.
decision = migrateRelays
}
}
n.trafficTimer.Add(hostinfo.localIndexId, n.checkInterval)
if !outTraffic {
// Send a punch packet to keep the NAT state alive
n.sendPunch(hostinfo)
}
return decision, hostinfo, primary
}
if _, ok := n.pendingDeletion[hostinfo.localIndexId]; ok {
// We have already sent a test packet and nothing was returned; this hostinfo is dead
hostinfo.logger(n.l).
WithField("tunnelCheck", m{"state": "dead", "method": "active"}).
Info("Tunnel status")
delete(n.pendingDeletion, hostinfo.localIndexId)
return deleteTunnel, hostinfo, nil
}
decision := doNothing
if hostinfo != nil && hostinfo.ConnectionState != nil && mainHostInfo {
if !outTraffic {
// If we aren't sending or receiving traffic then it's an unused tunnel and we don't need to test the tunnel.
// Just maintain NAT state if configured to do so.
n.sendPunch(hostinfo)
n.trafficTimer.Add(hostinfo.localIndexId, n.checkInterval)
return doNothing, nil, nil
}
if n.punchy.GetTargetEverything() {
// This is similar to the old punchy behavior with a slight optimization.
// We aren't receiving traffic but we are sending it, punch on all known
// ips in case we need to re-prime NAT state
n.sendPunch(hostinfo)
}
if n.l.Level >= logrus.DebugLevel {
hostinfo.logger(n.l).
WithField("tunnelCheck", m{"state": "testing", "method": "active"}).
Debug("Tunnel status")
}
// Send a test packet to trigger an authenticated tunnel test, this should suss out any lingering tunnel issues
decision = sendTestPacket
} else {
if n.l.Level >= logrus.DebugLevel {
hostinfo.logger(n.l).Debugf("Hostinfo sadness")
}
}
n.pendingDeletion[hostinfo.localIndexId] = struct{}{}
n.trafficTimer.Add(hostinfo.localIndexId, n.pendingDeletionInterval)
return decision, hostinfo, nil
}
// handleInvalidCertificates will destroy a tunnel if pki.disconnect_invalid is true and the certificate is no longer valid
func (n *connectionManager) handleInvalidCertificate(now time.Time, vpnIp iputil.VpnIp, hostinfo *HostInfo) bool {
if !n.intf.disconnectInvalid {
func (n *connectionManager) shouldSwapPrimary(current, primary *HostInfo) bool {
// The primary tunnel is the most recent handshake to complete locally and should work entirely fine.
// If we are here then we have multiple tunnels for a host pair and neither side believes the same tunnel is primary.
// Let's sort this out.
if current.vpnIp.Compare(n.intf.myVpnNet.Addr()) < 0 {
// Only one side should flip primary because if both flip then we may never resolve to a single tunnel.
// vpn ip is static across all tunnels for this host pair so lets use that to determine who is flipping.
// The remotes vpn ip is lower than mine. I will not flip.
return false
}
certState := n.intf.pki.GetCertState()
return bytes.Equal(current.ConnectionState.myCert.Signature, certState.Certificate.Signature)
}
func (n *connectionManager) swapPrimary(current, primary *HostInfo) {
n.hostMap.Lock()
// Make sure the primary is still the same after the write lock. This avoids a race with a rehandshake.
if n.hostMap.Hosts[current.vpnIp] == primary {
n.hostMap.unlockedMakePrimary(current)
}
n.hostMap.Unlock()
}
// isInvalidCertificate will check if we should destroy a tunnel if pki.disconnect_invalid is true and
// the certificate is no longer valid. Block listed certificates will skip the pki.disconnect_invalid
// check and return true.
func (n *connectionManager) isInvalidCertificate(now time.Time, hostinfo *HostInfo) bool {
remoteCert := hostinfo.GetCert()
if remoteCert == nil {
return false
}
valid, err := remoteCert.Verify(now, n.intf.caPool)
valid, err := remoteCert.VerifyWithCache(now, n.intf.pki.GetCAPool())
if valid {
return false
}
if !n.intf.disconnectInvalid.Load() && err != cert.ErrBlockListed {
// Block listed certificates should always be disconnected
return false
}
fingerprint, _ := remoteCert.Sha256Sum()
n.l.WithField("vpnIp", vpnIp).WithError(err).
WithField("certName", remoteCert.Details.Name).
hostinfo.logger(n.l).WithError(err).
WithField("fingerprint", fingerprint).
Info("Remote certificate is no longer valid, tearing down the tunnel")
// Inform the remote and close the tunnel locally
n.intf.sendCloseTunnel(hostinfo)
n.intf.closeTunnel(hostinfo, false)
n.ClearIP(vpnIp)
n.ClearPendingDeletion(vpnIp)
return true
}
func (n *connectionManager) sendPunch(hostinfo *HostInfo) {
if !n.punchy.GetPunch() {
// Punching is disabled
return
}
if n.punchy.GetTargetEverything() {
hostinfo.remotes.ForEach(n.hostMap.GetPreferredRanges(), func(addr netip.AddrPort, preferred bool) {
n.metricsTxPunchy.Inc(1)
n.intf.outside.WriteTo([]byte{1}, addr)
})
} else if hostinfo.remote.IsValid() {
n.metricsTxPunchy.Inc(1)
n.intf.outside.WriteTo([]byte{1}, hostinfo.remote)
}
}
func (n *connectionManager) tryRehandshake(hostinfo *HostInfo) {
certState := n.intf.pki.GetCertState()
if bytes.Equal(hostinfo.ConnectionState.myCert.Signature, certState.Certificate.Signature) {
return
}
n.l.WithField("vpnIp", hostinfo.vpnIp).
WithField("reason", "local certificate is not current").
Info("Re-handshaking with remote")
n.intf.handshakeManager.StartHandshake(hostinfo.vpnIp, nil)
}


@@ -5,158 +5,199 @@ import (
"crypto/ed25519"
"crypto/rand"
"net"
"net/netip"
"testing"
"time"
"github.com/flynn/noise"
"github.com/slackhq/nebula/cert"
"github.com/slackhq/nebula/iputil"
"github.com/slackhq/nebula/config"
"github.com/slackhq/nebula/test"
"github.com/slackhq/nebula/udp"
"github.com/slackhq/nebula/util"
"github.com/stretchr/testify/assert"
)
var vpnIp iputil.VpnIp
func newTestLighthouse() *LightHouse {
lh := &LightHouse{
l: test.NewLogger(),
addrMap: map[netip.Addr]*RemoteList{},
queryChan: make(chan netip.Addr, 10),
}
lighthouses := map[netip.Addr]struct{}{}
staticList := map[netip.Addr]struct{}{}
lh.lighthouses.Store(&lighthouses)
lh.staticList.Store(&staticList)
return lh
}
func Test_NewConnectionManagerTest(t *testing.T) {
l := util.NewTestLogger()
l := test.NewLogger()
//_, tuncidr, _ := net.ParseCIDR("1.1.1.1/24")
_, vpncidr, _ := net.ParseCIDR("172.1.1.1/24")
_, localrange, _ := net.ParseCIDR("10.1.1.1/24")
vpnIp = iputil.Ip2VpnIp(net.ParseIP("172.1.1.2"))
preferredRanges := []*net.IPNet{localrange}
vpncidr := netip.MustParsePrefix("172.1.1.1/24")
localrange := netip.MustParsePrefix("10.1.1.1/24")
vpnIp := netip.MustParseAddr("172.1.1.2")
preferredRanges := []netip.Prefix{localrange}
// Very incomplete mock objects
hostMap := NewHostMap(l, "test", vpncidr, preferredRanges)
hostMap := newHostMap(l, vpncidr)
hostMap.preferredRanges.Store(&preferredRanges)
cs := &CertState{
rawCertificate: []byte{},
privateKey: []byte{},
certificate: &cert.NebulaCertificate{},
rawCertificateNoKey: []byte{},
RawCertificate: []byte{},
PrivateKey: []byte{},
Certificate: &cert.NebulaCertificate{},
RawCertificateNoKey: []byte{},
}
lh := NewLightHouse(l, false, &net.IPNet{IP: net.IP{0, 0, 0, 0}, Mask: net.IPMask{0, 0, 0, 0}}, []iputil.VpnIp{}, 1000, 0, &udp.Conn{}, false, 1, false)
lh := newTestLighthouse()
ifce := &Interface{
hostMap: hostMap,
inside: &Tun{},
outside: &udp.Conn{},
certState: cs,
inside: &test.NoopTun{},
outside: &udp.NoopConn{},
firewall: &Firewall{},
lightHouse: lh,
handshakeManager: NewHandshakeManager(l, vpncidr, preferredRanges, hostMap, lh, &udp.Conn{}, defaultHandshakeConfig),
pki: &PKI{},
handshakeManager: NewHandshakeManager(l, hostMap, lh, &udp.NoopConn{}, defaultHandshakeConfig),
l: l,
}
now := time.Now()
ifce.pki.cs.Store(cs)
// Create manager
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
nc := newConnectionManager(ctx, l, ifce, 5, 10)
punchy := NewPunchyFromConfig(l, config.NewC(l))
nc := newConnectionManager(ctx, l, ifce, 5, 10, punchy)
p := []byte("")
nb := make([]byte, 12, 12)
out := make([]byte, mtu)
nc.HandleMonitorTick(now, p, nb, out)
// Add an ip we have established a connection w/ to hostmap
hostinfo := nc.hostMap.AddVpnIp(vpnIp)
hostinfo.ConnectionState = &ConnectionState{
certState: cs,
H: &noise.HandshakeState{},
hostinfo := &HostInfo{
vpnIp: vpnIp,
localIndexId: 1099,
remoteIndexId: 9901,
}
hostinfo.ConnectionState = &ConnectionState{
myCert: &cert.NebulaCertificate{},
H: &noise.HandshakeState{},
}
nc.hostMap.unlockedAddHostInfo(hostinfo, ifce)
// We saw traffic out to vpnIp
nc.Out(vpnIp)
assert.NotContains(t, nc.pendingDeletion, vpnIp)
assert.Contains(t, nc.hostMap.Hosts, vpnIp)
// Move ahead 5s. Nothing should happen
next_tick := now.Add(5 * time.Second)
nc.HandleMonitorTick(next_tick, p, nb, out)
nc.HandleDeletionTick(next_tick)
// Move ahead 6s. We haven't heard back
next_tick = now.Add(6 * time.Second)
nc.HandleMonitorTick(next_tick, p, nb, out)
nc.HandleDeletionTick(next_tick)
// This host should now be up for deletion
assert.Contains(t, nc.pendingDeletion, vpnIp)
assert.Contains(t, nc.hostMap.Hosts, vpnIp)
// Move ahead some more
next_tick = now.Add(45 * time.Second)
nc.HandleMonitorTick(next_tick, p, nb, out)
nc.HandleDeletionTick(next_tick)
// The host should be evicted
assert.NotContains(t, nc.pendingDeletion, vpnIp)
assert.NotContains(t, nc.hostMap.Hosts, vpnIp)
nc.Out(hostinfo.localIndexId)
nc.In(hostinfo.localIndexId)
assert.NotContains(t, nc.pendingDeletion, hostinfo.localIndexId)
assert.Contains(t, nc.hostMap.Hosts, hostinfo.vpnIp)
assert.Contains(t, nc.hostMap.Indexes, hostinfo.localIndexId)
assert.Contains(t, nc.out, hostinfo.localIndexId)
// Do a traffic check tick, should not be pending deletion but should not have any in/out packets recorded
nc.doTrafficCheck(hostinfo.localIndexId, p, nb, out, time.Now())
assert.NotContains(t, nc.pendingDeletion, hostinfo.localIndexId)
assert.NotContains(t, nc.out, hostinfo.localIndexId)
assert.NotContains(t, nc.in, hostinfo.localIndexId)
// Do another traffic check tick, this host should be pending deletion now
nc.Out(hostinfo.localIndexId)
nc.doTrafficCheck(hostinfo.localIndexId, p, nb, out, time.Now())
assert.Contains(t, nc.pendingDeletion, hostinfo.localIndexId)
assert.NotContains(t, nc.out, hostinfo.localIndexId)
assert.NotContains(t, nc.in, hostinfo.localIndexId)
assert.Contains(t, nc.hostMap.Indexes, hostinfo.localIndexId)
assert.Contains(t, nc.hostMap.Hosts, hostinfo.vpnIp)
// Do a final traffic check tick, the host should now be removed
nc.doTrafficCheck(hostinfo.localIndexId, p, nb, out, time.Now())
assert.NotContains(t, nc.pendingDeletion, hostinfo.localIndexId)
assert.NotContains(t, nc.hostMap.Hosts, hostinfo.vpnIp)
assert.NotContains(t, nc.hostMap.Indexes, hostinfo.localIndexId)
}
func Test_NewConnectionManagerTest2(t *testing.T) {
l := util.NewTestLogger()
l := test.NewLogger()
//_, tuncidr, _ := net.ParseCIDR("1.1.1.1/24")
_, vpncidr, _ := net.ParseCIDR("172.1.1.1/24")
_, localrange, _ := net.ParseCIDR("10.1.1.1/24")
preferredRanges := []*net.IPNet{localrange}
vpncidr := netip.MustParsePrefix("172.1.1.1/24")
localrange := netip.MustParsePrefix("10.1.1.1/24")
vpnIp := netip.MustParseAddr("172.1.1.2")
preferredRanges := []netip.Prefix{localrange}
// Very incomplete mock objects
hostMap := NewHostMap(l, "test", vpncidr, preferredRanges)
hostMap := newHostMap(l, vpncidr)
hostMap.preferredRanges.Store(&preferredRanges)
cs := &CertState{
rawCertificate: []byte{},
privateKey: []byte{},
certificate: &cert.NebulaCertificate{},
rawCertificateNoKey: []byte{},
RawCertificate: []byte{},
PrivateKey: []byte{},
Certificate: &cert.NebulaCertificate{},
RawCertificateNoKey: []byte{},
}
lh := NewLightHouse(l, false, &net.IPNet{IP: net.IP{0, 0, 0, 0}, Mask: net.IPMask{0, 0, 0, 0}}, []iputil.VpnIp{}, 1000, 0, &udp.Conn{}, false, 1, false)
lh := newTestLighthouse()
ifce := &Interface{
hostMap: hostMap,
inside: &Tun{},
outside: &udp.Conn{},
certState: cs,
inside: &test.NoopTun{},
outside: &udp.NoopConn{},
firewall: &Firewall{},
lightHouse: lh,
handshakeManager: NewHandshakeManager(l, vpncidr, preferredRanges, hostMap, lh, &udp.Conn{}, defaultHandshakeConfig),
pki: &PKI{},
handshakeManager: NewHandshakeManager(l, hostMap, lh, &udp.NoopConn{}, defaultHandshakeConfig),
l: l,
}
now := time.Now()
ifce.pki.cs.Store(cs)
// Create manager
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
nc := newConnectionManager(ctx, l, ifce, 5, 10)
punchy := NewPunchyFromConfig(l, config.NewC(l))
nc := newConnectionManager(ctx, l, ifce, 5, 10, punchy)
p := []byte("")
nb := make([]byte, 12, 12)
out := make([]byte, mtu)
nc.HandleMonitorTick(now, p, nb, out)
// Add an ip we have established a connection w/ to hostmap
hostinfo := nc.hostMap.AddVpnIp(vpnIp)
hostinfo.ConnectionState = &ConnectionState{
certState: cs,
H: &noise.HandshakeState{},
hostinfo := &HostInfo{
vpnIp: vpnIp,
localIndexId: 1099,
remoteIndexId: 9901,
}
hostinfo.ConnectionState = &ConnectionState{
myCert: &cert.NebulaCertificate{},
H: &noise.HandshakeState{},
}
nc.hostMap.unlockedAddHostInfo(hostinfo, ifce)
// We saw traffic out to vpnIp
nc.Out(vpnIp)
assert.NotContains(t, nc.pendingDeletion, vpnIp)
assert.Contains(t, nc.hostMap.Hosts, vpnIp)
// Move ahead 5s. Nothing should happen
next_tick := now.Add(5 * time.Second)
nc.HandleMonitorTick(next_tick, p, nb, out)
nc.HandleDeletionTick(next_tick)
// Move ahead 6s. We haven't heard back
next_tick = now.Add(6 * time.Second)
nc.HandleMonitorTick(next_tick, p, nb, out)
nc.HandleDeletionTick(next_tick)
// This host should now be up for deletion
assert.Contains(t, nc.pendingDeletion, vpnIp)
assert.Contains(t, nc.hostMap.Hosts, vpnIp)
// We heard back this time
nc.In(vpnIp)
// Move ahead some more
next_tick = now.Add(45 * time.Second)
nc.HandleMonitorTick(next_tick, p, nb, out)
nc.HandleDeletionTick(next_tick)
// The host should be evicted
assert.NotContains(t, nc.pendingDeletion, vpnIp)
assert.Contains(t, nc.hostMap.Hosts, vpnIp)
nc.Out(hostinfo.localIndexId)
nc.In(hostinfo.localIndexId)
assert.NotContains(t, nc.pendingDeletion, hostinfo.vpnIp)
assert.Contains(t, nc.hostMap.Hosts, hostinfo.vpnIp)
assert.Contains(t, nc.hostMap.Indexes, hostinfo.localIndexId)
// Do a traffic check tick, should not be pending deletion but should not have any in/out packets recorded
nc.doTrafficCheck(hostinfo.localIndexId, p, nb, out, time.Now())
assert.NotContains(t, nc.pendingDeletion, hostinfo.localIndexId)
assert.NotContains(t, nc.out, hostinfo.localIndexId)
assert.NotContains(t, nc.in, hostinfo.localIndexId)
// Do another traffic check tick, this host should be pending deletion now
nc.Out(hostinfo.localIndexId)
nc.doTrafficCheck(hostinfo.localIndexId, p, nb, out, time.Now())
assert.Contains(t, nc.pendingDeletion, hostinfo.localIndexId)
assert.NotContains(t, nc.out, hostinfo.localIndexId)
assert.NotContains(t, nc.in, hostinfo.localIndexId)
assert.Contains(t, nc.hostMap.Indexes, hostinfo.localIndexId)
assert.Contains(t, nc.hostMap.Hosts, hostinfo.vpnIp)
// We saw traffic, should no longer be pending deletion
nc.In(hostinfo.localIndexId)
nc.doTrafficCheck(hostinfo.localIndexId, p, nb, out, time.Now())
assert.NotContains(t, nc.pendingDeletion, hostinfo.localIndexId)
assert.NotContains(t, nc.out, hostinfo.localIndexId)
assert.NotContains(t, nc.in, hostinfo.localIndexId)
assert.Contains(t, nc.hostMap.Indexes, hostinfo.localIndexId)
assert.Contains(t, nc.hostMap.Hosts, hostinfo.vpnIp)
}
// Check if we can disconnect the peer.
@@ -164,15 +205,17 @@ func Test_NewConnectionManagerTest2(t *testing.T) {
// Disconnect only if disconnectInvalid: true is set.
func Test_NewConnectionManagerTest_DisconnectInvalid(t *testing.T) {
now := time.Now()
l := util.NewTestLogger()
l := test.NewLogger()
ipNet := net.IPNet{
IP: net.IPv4(172, 1, 1, 2),
Mask: net.IPMask{255, 255, 255, 0},
}
_, vpncidr, _ := net.ParseCIDR("172.1.1.1/24")
_, localrange, _ := net.ParseCIDR("10.1.1.1/24")
preferredRanges := []*net.IPNet{localrange}
hostMap := NewHostMap(l, "test", vpncidr, preferredRanges)
vpncidr := netip.MustParsePrefix("172.1.1.1/24")
localrange := netip.MustParsePrefix("10.1.1.1/24")
vpnIp := netip.MustParseAddr("172.1.1.2")
preferredRanges := []netip.Prefix{localrange}
hostMap := newHostMap(l, vpncidr)
hostMap.preferredRanges.Store(&preferredRanges)
// Generate keys for CA and peer's cert.
pubCA, privCA, _ := ed25519.GenerateKey(rand.Reader)
@@ -185,7 +228,8 @@ func Test_NewConnectionManagerTest_DisconnectInvalid(t *testing.T) {
PublicKey: pubCA,
},
}
caCert.Sign(privCA)
assert.NoError(t, caCert.Sign(cert.Curve_CURVE25519, privCA))
ncp := &cert.NebulaCAPool{
CAs: cert.NewCAPool().CAs,
}
@@ -204,52 +248,58 @@ func Test_NewConnectionManagerTest_DisconnectInvalid(t *testing.T) {
Issuer: "ca",
},
}
peerCert.Sign(privCA)
assert.NoError(t, peerCert.Sign(cert.Curve_CURVE25519, privCA))
cs := &CertState{
rawCertificate: []byte{},
privateKey: []byte{},
certificate: &cert.NebulaCertificate{},
rawCertificateNoKey: []byte{},
RawCertificate: []byte{},
PrivateKey: []byte{},
Certificate: &cert.NebulaCertificate{},
RawCertificateNoKey: []byte{},
}
lh := NewLightHouse(l, false, &net.IPNet{IP: net.IP{0, 0, 0, 0}, Mask: net.IPMask{0, 0, 0, 0}}, []iputil.VpnIp{}, 1000, 0, &udp.Conn{}, false, 1, false)
lh := newTestLighthouse()
ifce := &Interface{
hostMap: hostMap,
inside: &Tun{},
outside: &udp.Conn{},
certState: cs,
firewall: &Firewall{},
lightHouse: lh,
handshakeManager: NewHandshakeManager(l, vpncidr, preferredRanges, hostMap, lh, &udp.Conn{}, defaultHandshakeConfig),
l: l,
disconnectInvalid: true,
caPool: ncp,
hostMap: hostMap,
inside: &test.NoopTun{},
outside: &udp.NoopConn{},
firewall: &Firewall{},
lightHouse: lh,
handshakeManager: NewHandshakeManager(l, hostMap, lh, &udp.NoopConn{}, defaultHandshakeConfig),
l: l,
pki: &PKI{},
}
ifce.pki.cs.Store(cs)
ifce.pki.caPool.Store(ncp)
ifce.disconnectInvalid.Store(true)
// Create manager
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
nc := newConnectionManager(ctx, l, ifce, 5, 10)
punchy := NewPunchyFromConfig(l, config.NewC(l))
nc := newConnectionManager(ctx, l, ifce, 5, 10, punchy)
ifce.connectionManager = nc
hostinfo := nc.hostMap.AddVpnIp(vpnIp)
hostinfo.ConnectionState = &ConnectionState{
certState: cs,
peerCert: &peerCert,
H: &noise.HandshakeState{},
hostinfo := &HostInfo{
vpnIp: vpnIp,
ConnectionState: &ConnectionState{
myCert: &cert.NebulaCertificate{},
peerCert: &peerCert,
H: &noise.HandshakeState{},
},
}
nc.hostMap.unlockedAddHostInfo(hostinfo, ifce)
// Move ahead 45s.
// Check if to disconnect with invalid certificate.
// Should be alive.
nextTick := now.Add(45 * time.Second)
destroyed := nc.handleInvalidCertificate(nextTick, vpnIp, hostinfo)
assert.False(t, destroyed)
invalid := nc.isInvalidCertificate(nextTick, hostinfo)
assert.False(t, invalid)
// Move ahead 61s.
// Check if to disconnect with invalid certificate.
// Should be disconnected.
nextTick = now.Add(61 * time.Second)
destroyed = nc.handleInvalidCertificate(nextTick, vpnIp, hostinfo)
assert.True(t, destroyed)
invalid = nc.isInvalidCertificate(nextTick, hostinfo)
assert.True(t, invalid)
}
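The 45s/61s split in the test above only works if the peer certificate's validity window is roughly one minute; the cert construction itself is elided in this hunk, so the 60-second NotAfter below is an assumption for illustration. A standalone sketch of that timing check, using only the cert.NebulaCertificateDetails fields referenced elsewhere in this diff:

```go
package main

import (
	"fmt"
	"time"

	"github.com/slackhq/nebula/cert"
)

func main() {
	now := time.Now()
	// Hypothetical peer certificate that expires one minute from now.
	peerCert := cert.NebulaCertificate{
		Details: cert.NebulaCertificateDetails{
			Name:      "host1",
			NotBefore: now,
			NotAfter:  now.Add(60 * time.Second),
		},
	}

	// Mirrors the two ticks used in the test: still valid at +45s, expired at +61s.
	for _, offset := range []time.Duration{45 * time.Second, 61 * time.Second} {
		tick := now.Add(offset)
		expired := tick.After(peerCert.Details.NotAfter)
		fmt.Printf("at +%v expired=%v\n", offset, expired)
	}
}
```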


@@ -9,32 +9,43 @@ import (
"github.com/flynn/noise"
"github.com/sirupsen/logrus"
"github.com/slackhq/nebula/cert"
"github.com/slackhq/nebula/noiseutil"
)
const ReplayWindow = 1024
type ConnectionState struct {
eKey *NebulaCipherState
dKey *NebulaCipherState
H *noise.HandshakeState
certState *CertState
peerCert *cert.NebulaCertificate
initiator bool
atomicMessageCounter uint64
window *Bits
queueLock sync.Mutex
writeLock sync.Mutex
ready bool
eKey *NebulaCipherState
dKey *NebulaCipherState
H *noise.HandshakeState
myCert *cert.NebulaCertificate
peerCert *cert.NebulaCertificate
initiator bool
messageCounter atomic.Uint64
window *Bits
writeLock sync.Mutex
}
func (f *Interface) newConnectionState(l *logrus.Logger, initiator bool, p []byte) (*ConnectionState, error) {
cs := noise.NewCipherSuite(noise.DH25519, noise.CipherAESGCM, noise.HashSHA256)
if f.cipher == "chachapoly" {
cs = noise.NewCipherSuite(noise.DH25519, noise.CipherChaChaPoly, noise.HashSHA256)
func NewConnectionState(l *logrus.Logger, cipher string, certState *CertState, initiator bool, pattern noise.HandshakePattern, psk []byte, pskStage int) *ConnectionState {
var dhFunc noise.DHFunc
switch certState.Certificate.Details.Curve {
case cert.Curve_CURVE25519:
dhFunc = noise.DH25519
case cert.Curve_P256:
dhFunc = noiseutil.DHP256
default:
l.Errorf("invalid curve: %s", certState.Certificate.Details.Curve)
return nil
}
curCertState := f.certState
static := noise.DHKey{Private: curCertState.privateKey, Public: curCertState.publicKey}
var cs noise.CipherSuite
if cipher == "chachapoly" {
cs = noise.NewCipherSuite(dhFunc, noise.CipherChaChaPoly, noise.HashSHA256)
} else {
cs = noise.NewCipherSuite(dhFunc, noiseutil.CipherAESGCM, noise.HashSHA256)
}
static := noise.DHKey{Private: certState.PrivateKey, Public: certState.PublicKey}
b := NewBits(ReplayWindow)
// Clear out bit 0, we never transmit it and we don't want it showing as packet loss
@@ -43,15 +54,14 @@ func (f *Interface) newConnectionState(l *logrus.Logger, initiator bool, p []byt
hs, err := noise.NewHandshakeState(noise.Config{
CipherSuite: cs,
Random: rand.Reader,
Pattern: noise.HandshakeIX,
Pattern: pattern,
Initiator: initiator,
StaticKeypair: static,
PresharedKey: p,
PresharedKeyPlacement: 0,
PresharedKey: psk,
PresharedKeyPlacement: pskStage,
})
if err != nil {
return nil, err
return nil
}
// The queue and ready params prevent a counter race that would happen when
@@ -60,18 +70,18 @@ func (f *Interface) newConnectionState(l *logrus.Logger, initiator bool, p []byt
H: hs,
initiator: initiator,
window: b,
ready: false,
certState: curCertState,
myCert: certState.Certificate,
}
// always start the counter from 2, as packet 1 and packet 2 are handshake packets.
ci.messageCounter.Add(2)
return ci, nil
return ci
}
func (cs *ConnectionState) MarshalJSON() ([]byte, error) {
return json.Marshal(m{
"certificate": cs.peerCert,
"initiator": cs.initiator,
"message_counter": atomic.LoadUint64(&cs.atomicMessageCounter),
"ready": cs.ready,
"message_counter": cs.messageCounter.Load(),
})
}
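A side note on the counter change above: the old code kept a plain uint64 field and had to remember to go through sync/atomic at every access, while the new atomic.Uint64 field makes non-atomic access impossible. A minimal, self-contained sketch of the two styles (not Nebula code):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

type oldStyle struct {
	atomicMessageCounter uint64 // must only be touched via the atomic.*Uint64 helpers
}

type newStyle struct {
	messageCounter atomic.Uint64 // methods enforce atomic access
}

func main() {
	o := &oldStyle{}
	atomic.AddUint64(&o.atomicMessageCounter, 2)
	fmt.Println(atomic.LoadUint64(&o.atomicMessageCounter))

	n := &newStyle{}
	n.messageCounter.Add(2) // mirrors ci.messageCounter.Add(2) above
	fmt.Println(n.messageCounter.Load())
}
```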


@@ -2,40 +2,50 @@ package nebula
import (
"context"
"net"
"net/netip"
"os"
"os/signal"
"sync/atomic"
"syscall"
"github.com/sirupsen/logrus"
"github.com/slackhq/nebula/cert"
"github.com/slackhq/nebula/header"
"github.com/slackhq/nebula/iputil"
"github.com/slackhq/nebula/udp"
"github.com/slackhq/nebula/overlay"
)
// Every interaction here needs to take extra care to copy memory and not return or use arguments "as is" when touching
// core. This means copying IP objects, slices, de-referencing pointers and taking the actual value, etc
type controlEach func(h *HostInfo)
type controlHostLister interface {
QueryVpnIp(vpnIp netip.Addr) *HostInfo
ForEachIndex(each controlEach)
ForEachVpnIp(each controlEach)
GetPreferredRanges() []netip.Prefix
}
type Control struct {
f *Interface
l *logrus.Logger
cancel context.CancelFunc
sshStart func()
statsStart func()
dnsStart func()
f *Interface
l *logrus.Logger
ctx context.Context
cancel context.CancelFunc
sshStart func()
statsStart func()
dnsStart func()
lighthouseStart func()
}
type ControlHostInfo struct {
VpnIp net.IP `json:"vpnIp"`
LocalIndex uint32 `json:"localIndex"`
RemoteIndex uint32 `json:"remoteIndex"`
RemoteAddrs []*udp.Addr `json:"remoteAddrs"`
CachedPackets int `json:"cachedPackets"`
Cert *cert.NebulaCertificate `json:"cert"`
MessageCounter uint64 `json:"messageCounter"`
CurrentRemote *udp.Addr `json:"currentRemote"`
VpnIp netip.Addr `json:"vpnIp"`
LocalIndex uint32 `json:"localIndex"`
RemoteIndex uint32 `json:"remoteIndex"`
RemoteAddrs []netip.AddrPort `json:"remoteAddrs"`
Cert *cert.NebulaCertificate `json:"cert"`
MessageCounter uint64 `json:"messageCounter"`
CurrentRemote netip.AddrPort `json:"currentRemote"`
CurrentRelaysToMe []netip.Addr `json:"currentRelaysToMe"`
CurrentRelaysThroughMe []netip.Addr `json:"currentRelaysThroughMe"`
}
// Start actually runs nebula, this is a nonblocking call. To block use Control.ShutdownBlock()
@@ -53,22 +63,34 @@ func (c *Control) Start() {
if c.dnsStart != nil {
go c.dnsStart()
}
if c.lighthouseStart != nil {
c.lighthouseStart()
}
// Start reading packets.
c.f.run()
}
// Stop signals nebula to shutdown, returns after the shutdown is complete
func (c *Control) Context() context.Context {
return c.ctx
}
// Stop signals nebula to shutdown and close all tunnels, returns after the shutdown is complete
func (c *Control) Stop() {
//TODO: stop tun and udp routines, the lock on hostMap effectively does that though
c.CloseAllTunnels(false)
// Stop the handshakeManager (and other services), to prevent new tunnels from
// being created while we're shutting them all down.
c.cancel()
c.CloseAllTunnels(false)
if err := c.f.Close(); err != nil {
c.l.WithError(err).Error("Close interface failed")
}
c.l.Info("Goodbye")
}
// ShutdownBlock will listen for and block on term and interrupt signals, calling Control.Stop() once signalled
func (c *Control) ShutdownBlock() {
sigChan := make(chan os.Signal)
sigChan := make(chan os.Signal, 1)
signal.Notify(sigChan, syscall.SIGTERM)
signal.Notify(sigChan, syscall.SIGINT)
@@ -83,55 +105,103 @@ func (c *Control) RebindUDPServer() {
_ = c.f.outside.Rebind()
// Trigger a lighthouse update, useful for mobile clients that should have an update interval of 0
c.f.lightHouse.SendUpdate(c.f)
c.f.lightHouse.SendUpdate()
// Let the main interface know that we rebound so that underlying tunnels know to trigger punches from their remotes
c.f.rebindCount++
}
// ListHostmap returns details about the actual or pending (handshaking) hostmap
func (c *Control) ListHostmap(pendingMap bool) []ControlHostInfo {
// ListHostmapHosts returns details about the actual or pending (handshaking) hostmap by vpn ip
func (c *Control) ListHostmapHosts(pendingMap bool) []ControlHostInfo {
if pendingMap {
return listHostMap(c.f.handshakeManager.pendingHostMap)
return listHostMapHosts(c.f.handshakeManager)
} else {
return listHostMap(c.f.hostMap)
return listHostMapHosts(c.f.hostMap)
}
}
// GetHostInfoByVpnIp returns a single tunnels hostInfo, or nil if not found
func (c *Control) GetHostInfoByVpnIp(vpnIp iputil.VpnIp, pending bool) *ControlHostInfo {
var hm *HostMap
if pending {
hm = c.f.handshakeManager.pendingHostMap
// ListHostmapIndexes returns details about the actual or pending (handshaking) hostmap by local index id
func (c *Control) ListHostmapIndexes(pendingMap bool) []ControlHostInfo {
if pendingMap {
return listHostMapIndexes(c.f.handshakeManager)
} else {
hm = c.f.hostMap
return listHostMapIndexes(c.f.hostMap)
}
}
// GetCertByVpnIp returns the authenticated certificate of the given vpn IP, or nil if not found
func (c *Control) GetCertByVpnIp(vpnIp netip.Addr) *cert.NebulaCertificate {
if c.f.myVpnNet.Addr() == vpnIp {
return c.f.pki.GetCertState().Certificate
}
hi := c.f.hostMap.QueryVpnIp(vpnIp)
if hi == nil {
return nil
}
return hi.GetCert()
}
// CreateTunnel creates a new tunnel to the given vpn ip.
func (c *Control) CreateTunnel(vpnIp netip.Addr) {
c.f.handshakeManager.StartHandshake(vpnIp, nil)
}
// PrintTunnel creates a new tunnel to the given vpn ip.
func (c *Control) PrintTunnel(vpnIp netip.Addr) *ControlHostInfo {
hi := c.f.hostMap.QueryVpnIp(vpnIp)
if hi == nil {
return nil
}
chi := copyHostInfo(hi, c.f.hostMap.GetPreferredRanges())
return &chi
}
// QueryLighthouse queries the lighthouse.
func (c *Control) QueryLighthouse(vpnIp netip.Addr) *CacheMap {
hi := c.f.lightHouse.Query(vpnIp)
if hi == nil {
return nil
}
return hi.CopyCache()
}
// GetHostInfoByVpnIp returns a single tunnels hostInfo, or nil if not found
// Caller should take care to Unmap() any 4in6 addresses prior to calling.
func (c *Control) GetHostInfoByVpnIp(vpnIp netip.Addr, pending bool) *ControlHostInfo {
var hl controlHostLister
if pending {
hl = c.f.handshakeManager
} else {
hl = c.f.hostMap
}
h, err := hm.QueryVpnIp(vpnIp)
if err != nil {
h := hl.QueryVpnIp(vpnIp)
if h == nil {
return nil
}
ch := copyHostInfo(h, c.f.hostMap.preferredRanges)
ch := copyHostInfo(h, c.f.hostMap.GetPreferredRanges())
return &ch
}
// SetRemoteForTunnel forces a tunnel to use a specific remote
func (c *Control) SetRemoteForTunnel(vpnIp iputil.VpnIp, addr udp.Addr) *ControlHostInfo {
hostInfo, err := c.f.hostMap.QueryVpnIp(vpnIp)
if err != nil {
// Caller should take care to Unmap() any 4in6 addresses prior to calling.
func (c *Control) SetRemoteForTunnel(vpnIp netip.Addr, addr netip.AddrPort) *ControlHostInfo {
hostInfo := c.f.hostMap.QueryVpnIp(vpnIp)
if hostInfo == nil {
return nil
}
hostInfo.SetRemote(addr.Copy())
ch := copyHostInfo(hostInfo, c.f.hostMap.preferredRanges)
hostInfo.SetRemote(addr)
ch := copyHostInfo(hostInfo, c.f.hostMap.GetPreferredRanges())
return &ch
}
// CloseTunnel closes a fully established tunnel. If localOnly is false it will notify the remote end as well.
func (c *Control) CloseTunnel(vpnIp iputil.VpnIp, localOnly bool) bool {
hostInfo, err := c.f.hostMap.QueryVpnIp(vpnIp)
if err != nil {
// Caller should take care to Unmap() any 4in6 addresses prior to calling.
func (c *Control) CloseTunnel(vpnIp netip.Addr, localOnly bool) bool {
hostInfo := c.f.hostMap.QueryVpnIp(vpnIp)
if hostInfo == nil {
return false
}
@@ -141,14 +211,13 @@ func (c *Control) CloseTunnel(vpnIp iputil.VpnIp, localOnly bool) bool {
0,
hostInfo.ConnectionState,
hostInfo,
hostInfo.remote,
[]byte{},
make([]byte, 12, 12),
make([]byte, mtu),
)
}
c.f.closeTunnel(hostInfo, false)
c.f.closeTunnel(hostInfo)
return true
}
@@ -156,60 +225,91 @@ func (c *Control) CloseTunnel(vpnIp iputil.VpnIp, localOnly bool) bool {
// the int returned is a count of tunnels closed
func (c *Control) CloseAllTunnels(excludeLighthouses bool) (closed int) {
//TODO: this is probably better as a function in ConnectionManager or HostMap directly
c.f.hostMap.Lock()
for _, h := range c.f.hostMap.Hosts {
lighthouses := c.f.lightHouse.GetLighthouses()
shutdown := func(h *HostInfo) {
if excludeLighthouses {
if _, ok := c.f.lightHouse.lighthouses[h.vpnIp]; ok {
continue
if _, ok := lighthouses[h.vpnIp]; ok {
return
}
}
c.f.send(header.CloseTunnel, 0, h.ConnectionState, h, []byte{}, make([]byte, 12, 12), make([]byte, mtu))
c.f.closeTunnel(h)
if h.ConnectionState.ready {
c.f.send(header.CloseTunnel, 0, h.ConnectionState, h, h.remote, []byte{}, make([]byte, 12, 12), make([]byte, mtu))
c.f.closeTunnel(h, true)
c.l.WithField("vpnIp", h.vpnIp).WithField("udpAddr", h.remote).
Debug("Sending close tunnel message")
closed++
}
c.l.WithField("vpnIp", h.vpnIp).WithField("udpAddr", h.remote).
Debug("Sending close tunnel message")
closed++
// Learn which hosts are being used as relays, so we can shut them down last.
relayingHosts := map[netip.Addr]*HostInfo{}
// Grab the hostMap lock to access the Relays map
c.f.hostMap.Lock()
for _, relayingHost := range c.f.hostMap.Relays {
relayingHosts[relayingHost.vpnIp] = relayingHost
}
c.f.hostMap.Unlock()
hostInfos := []*HostInfo{}
// Grab the hostMap lock to access the Hosts map
c.f.hostMap.Lock()
for _, relayHost := range c.f.hostMap.Indexes {
if _, ok := relayingHosts[relayHost.vpnIp]; !ok {
hostInfos = append(hostInfos, relayHost)
}
}
c.f.hostMap.Unlock()
for _, h := range hostInfos {
shutdown(h)
}
for _, h := range relayingHosts {
shutdown(h)
}
return
}
func copyHostInfo(h *HostInfo, preferredRanges []*net.IPNet) ControlHostInfo {
func (c *Control) Device() overlay.Device {
return c.f.inside
}
func copyHostInfo(h *HostInfo, preferredRanges []netip.Prefix) ControlHostInfo {
chi := ControlHostInfo{
VpnIp: h.vpnIp.ToIP(),
LocalIndex: h.localIndexId,
RemoteIndex: h.remoteIndexId,
RemoteAddrs: h.remotes.CopyAddrs(preferredRanges),
CachedPackets: len(h.packetStore),
VpnIp: h.vpnIp,
LocalIndex: h.localIndexId,
RemoteIndex: h.remoteIndexId,
RemoteAddrs: h.remotes.CopyAddrs(preferredRanges),
CurrentRelaysToMe: h.relayState.CopyRelayIps(),
CurrentRelaysThroughMe: h.relayState.CopyRelayForIps(),
CurrentRemote: h.remote,
}
if h.ConnectionState != nil {
chi.MessageCounter = atomic.LoadUint64(&h.ConnectionState.atomicMessageCounter)
chi.MessageCounter = h.ConnectionState.messageCounter.Load()
}
if c := h.GetCert(); c != nil {
chi.Cert = c.Copy()
}
if h.remote != nil {
chi.CurrentRemote = h.remote.Copy()
}
return chi
}
func listHostMap(hm *HostMap) []ControlHostInfo {
hm.RLock()
hosts := make([]ControlHostInfo, len(hm.Hosts))
i := 0
for _, v := range hm.Hosts {
hosts[i] = copyHostInfo(v, hm.preferredRanges)
i++
}
hm.RUnlock()
func listHostMapHosts(hl controlHostLister) []ControlHostInfo {
hosts := make([]ControlHostInfo, 0)
pr := hl.GetPreferredRanges()
hl.ForEachVpnIp(func(hostinfo *HostInfo) {
hosts = append(hosts, copyHostInfo(hostinfo, pr))
})
return hosts
}
func listHostMapIndexes(hl controlHostLister) []ControlHostInfo {
hosts := make([]ControlHostInfo, 0)
pr := hl.GetPreferredRanges()
hl.ForEachIndex(func(hostinfo *HostInfo) {
hosts = append(hosts, copyHostInfo(hostinfo, pr))
})
return hosts
}
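Taken together, the new Control methods above give an embedding application a read-mostly view of tunnel state. A hedged usage sketch, assuming a *nebula.Control has already been obtained elsewhere (for example from nebula.Main); only the method signatures shown in this diff are relied on, and inspectPeer itself is a hypothetical helper:

```go
package controlexample

import (
	"fmt"
	"net/netip"

	"github.com/sirupsen/logrus"
	"github.com/slackhq/nebula"
)

// inspectPeer pokes at a single peer using the Control helpers added in this diff.
func inspectPeer(ctrl *nebula.Control, l *logrus.Logger, peer netip.Addr) {
	// What does the lighthouse cache currently know about this peer?
	if cache := ctrl.QueryLighthouse(peer); cache == nil {
		l.WithField("vpnIp", peer).Info("no lighthouse cache entry yet")
	}

	// Start a handshake if no tunnel exists, then dump whatever host info we have.
	ctrl.CreateTunnel(peer)
	if chi := ctrl.PrintTunnel(peer); chi != nil {
		fmt.Printf("local=%d remote=%d addr=%s\n", chi.LocalIndex, chi.RemoteIndex, chi.CurrentRemote)
	}

	// The authenticated certificate, or nil if the tunnel is not established.
	if crt := ctrl.GetCertByVpnIp(peer); crt != nil {
		fmt.Println("peer cert name:", crt.Details.Name)
	}
}
```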


@@ -2,32 +2,34 @@ package nebula
import (
"net"
"net/netip"
"reflect"
"testing"
"time"
"github.com/sirupsen/logrus"
"github.com/slackhq/nebula/cert"
"github.com/slackhq/nebula/iputil"
"github.com/slackhq/nebula/udp"
"github.com/slackhq/nebula/util"
"github.com/slackhq/nebula/test"
"github.com/stretchr/testify/assert"
)
func TestControl_GetHostInfoByVpnIp(t *testing.T) {
l := util.NewTestLogger()
l := test.NewLogger()
// Special care must be taken to re-use all objects provided to the hostmap and certificate in the expectedInfo object
// To properly ensure we are not exposing core memory to the caller
hm := NewHostMap(l, "test", &net.IPNet{}, make([]*net.IPNet, 0))
remote1 := udp.NewAddr(net.ParseIP("0.0.0.100"), 4444)
remote2 := udp.NewAddr(net.ParseIP("1:2:3:4:5:6:7:8"), 4444)
hm := newHostMap(l, netip.Prefix{})
hm.preferredRanges.Store(&[]netip.Prefix{})
remote1 := netip.MustParseAddrPort("0.0.0.100:4444")
remote2 := netip.MustParseAddrPort("[1:2:3:4:5:6:7:8]:4444")
ipNet := net.IPNet{
IP: net.IPv4(1, 2, 3, 4),
IP: remote1.Addr().AsSlice(),
Mask: net.IPMask{255, 255, 255, 0},
}
ipNet2 := net.IPNet{
IP: net.ParseIP("1:2:3:4:5:6:7:8"),
IP: remote2.Addr().AsSlice(),
Mask: net.IPMask{255, 255, 255, 0},
}
@@ -47,10 +49,14 @@ func TestControl_GetHostInfoByVpnIp(t *testing.T) {
Signature: []byte{1, 2, 1, 2, 1, 3},
}
remotes := NewRemoteList()
remotes.unlockedPrependV4(0, NewIp4AndPort(remote1.IP, uint32(remote1.Port)))
remotes.unlockedPrependV6(0, NewIp6AndPort(remote2.IP, uint32(remote2.Port)))
hm.Add(iputil.Ip2VpnIp(ipNet.IP), &HostInfo{
remotes := NewRemoteList(nil)
remotes.unlockedPrependV4(netip.IPv4Unspecified(), NewIp4AndPortFromNetIP(remote1.Addr(), remote1.Port()))
remotes.unlockedPrependV6(netip.IPv4Unspecified(), NewIp6AndPortFromNetIP(remote2.Addr(), remote2.Port()))
vpnIp, ok := netip.AddrFromSlice(ipNet.IP)
assert.True(t, ok)
hm.unlockedAddHostInfo(&HostInfo{
remote: remote1,
remotes: remotes,
ConnectionState: &ConnectionState{
@@ -58,10 +64,18 @@ func TestControl_GetHostInfoByVpnIp(t *testing.T) {
},
remoteIndexId: 200,
localIndexId: 201,
vpnIp: iputil.Ip2VpnIp(ipNet.IP),
})
vpnIp: vpnIp,
relayState: RelayState{
relays: map[netip.Addr]struct{}{},
relayForByIp: map[netip.Addr]*Relay{},
relayForByIdx: map[uint32]*Relay{},
},
}, &Interface{})
hm.Add(iputil.Ip2VpnIp(ipNet2.IP), &HostInfo{
vpnIp2, ok := netip.AddrFromSlice(ipNet2.IP)
assert.True(t, ok)
hm.unlockedAddHostInfo(&HostInfo{
remote: remote1,
remotes: remotes,
ConnectionState: &ConnectionState{
@@ -69,8 +83,13 @@ func TestControl_GetHostInfoByVpnIp(t *testing.T) {
},
remoteIndexId: 200,
localIndexId: 201,
vpnIp: iputil.Ip2VpnIp(ipNet2.IP),
})
vpnIp: vpnIp2,
relayState: RelayState{
relays: map[netip.Addr]struct{}{},
relayForByIp: map[netip.Addr]*Relay{},
relayForByIdx: map[uint32]*Relay{},
},
}, &Interface{})
c := Control{
f: &Interface{
@@ -79,26 +98,29 @@ func TestControl_GetHostInfoByVpnIp(t *testing.T) {
l: logrus.New(),
}
thi := c.GetHostInfoByVpnIp(iputil.Ip2VpnIp(ipNet.IP), false)
thi := c.GetHostInfoByVpnIp(vpnIp, false)
expectedInfo := ControlHostInfo{
VpnIp: net.IPv4(1, 2, 3, 4).To4(),
LocalIndex: 201,
RemoteIndex: 200,
RemoteAddrs: []*udp.Addr{remote2, remote1},
CachedPackets: 0,
Cert: crt.Copy(),
MessageCounter: 0,
CurrentRemote: udp.NewAddr(net.ParseIP("0.0.0.100"), 4444),
VpnIp: vpnIp,
LocalIndex: 201,
RemoteIndex: 200,
RemoteAddrs: []netip.AddrPort{remote2, remote1},
Cert: crt.Copy(),
MessageCounter: 0,
CurrentRemote: remote1,
CurrentRelaysToMe: []netip.Addr{},
CurrentRelaysThroughMe: []netip.Addr{},
}
// Make sure we don't have any unexpected fields
assertFields(t, []string{"VpnIp", "LocalIndex", "RemoteIndex", "RemoteAddrs", "CachedPackets", "Cert", "MessageCounter", "CurrentRemote"}, thi)
util.AssertDeepCopyEqual(t, &expectedInfo, thi)
assertFields(t, []string{"VpnIp", "LocalIndex", "RemoteIndex", "RemoteAddrs", "Cert", "MessageCounter", "CurrentRemote", "CurrentRelaysToMe", "CurrentRelaysThroughMe"}, thi)
assert.EqualValues(t, &expectedInfo, thi)
//TODO: netip.Addr reuses global memory for zone identifiers which breaks our "no reused memory check" here
//test.AssertDeepCopyEqual(t, &expectedInfo, thi)
// Make sure we don't panic if the host info doesn't have a cert yet
assert.NotPanics(t, func() {
thi = c.GetHostInfoByVpnIp(iputil.Ip2VpnIp(ipNet2.IP), false)
thi = c.GetHostInfoByVpnIp(vpnIp2, false)
})
}
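The test above leans on a handful of net-to-net/netip conversions that recur throughout this changeset. A standalone sketch of just those standard-library calls, with nothing Nebula-specific:

```go
package main

import (
	"fmt"
	"net"
	"net/netip"
)

func main() {
	// Old-style net.IP (here in its 16-byte form) to netip.Addr.
	ipNet := net.IPNet{IP: net.IPv4(1, 2, 3, 4), Mask: net.CIDRMask(24, 32)}
	addr, ok := netip.AddrFromSlice(ipNet.IP)
	fmt.Println(addr, ok) // "::ffff:1.2.3.4 true" – still a 4-in-6 address

	// Unmap() collapses the 4-in-6 form back to a plain IPv4 netip.Addr,
	// which is why callers of the new Control API are told to Unmap first.
	fmt.Println(addr.Unmap()) // "1.2.3.4"

	// AddrPort literals replace udp.NewAddr in the tests above.
	remote := netip.MustParseAddrPort("[1:2:3:4:5:6:7:8]:4444")
	fmt.Println(remote.Addr().Is6(), remote.Port()) // "true 4444"
}
```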


@@ -4,21 +4,23 @@
package nebula
import (
"net"
"net/netip"
"github.com/slackhq/nebula/cert"
"github.com/google/gopacket"
"github.com/google/gopacket/layers"
"github.com/slackhq/nebula/header"
"github.com/slackhq/nebula/iputil"
"github.com/slackhq/nebula/overlay"
"github.com/slackhq/nebula/udp"
)
// WaitForTypeByIndex will pipe all messages from this control device into the pipeTo control device
// WaitForType will pipe all messages from this control device into the pipeTo control device
// returning after a message matching the criteria has been piped
func (c *Control) WaitForType(msgType header.MessageType, subType header.MessageSubType, pipeTo *Control) {
h := &header.H{}
for {
p := c.f.outside.Get(true)
p := c.f.outside.(*udp.TesterConn).Get(true)
if err := h.Parse(p.Data); err != nil {
panic(err)
}
@@ -34,7 +36,7 @@ func (c *Control) WaitForType(msgType header.MessageType, subType header.Message
func (c *Control) WaitForTypeByIndex(toIndex uint32, msgType header.MessageType, subType header.MessageSubType, pipeTo *Control) {
h := &header.H{}
for {
p := c.f.outside.Get(true)
p := c.f.outside.(*udp.TesterConn).Get(true)
if err := h.Parse(p.Data); err != nil {
panic(err)
}
@@ -47,52 +49,64 @@ func (c *Control) WaitForTypeByIndex(toIndex uint32, msgType header.MessageType,
// InjectLightHouseAddr will push toAddr into the local lighthouse cache for the vpnIp
// This is necessary if you did not configure static hosts or are not running a lighthouse
func (c *Control) InjectLightHouseAddr(vpnIp net.IP, toAddr *net.UDPAddr) {
func (c *Control) InjectLightHouseAddr(vpnIp netip.Addr, toAddr netip.AddrPort) {
c.f.lightHouse.Lock()
remoteList := c.f.lightHouse.unlockedGetRemoteList(iputil.Ip2VpnIp(vpnIp))
remoteList := c.f.lightHouse.unlockedGetRemoteList(vpnIp)
remoteList.Lock()
defer remoteList.Unlock()
c.f.lightHouse.Unlock()
iVpnIp := iputil.Ip2VpnIp(vpnIp)
if v4 := toAddr.IP.To4(); v4 != nil {
remoteList.unlockedPrependV4(iVpnIp, NewIp4AndPort(v4, uint32(toAddr.Port)))
if toAddr.Addr().Is4() {
remoteList.unlockedPrependV4(vpnIp, NewIp4AndPortFromNetIP(toAddr.Addr(), toAddr.Port()))
} else {
remoteList.unlockedPrependV6(iVpnIp, NewIp6AndPort(toAddr.IP, uint32(toAddr.Port)))
remoteList.unlockedPrependV6(vpnIp, NewIp6AndPortFromNetIP(toAddr.Addr(), toAddr.Port()))
}
}
// InjectRelays will push relayVpnIps into the local lighthouse cache for the vpnIp
// This is necessary to inform an initiator of possible relays for communicating with a responder
func (c *Control) InjectRelays(vpnIp netip.Addr, relayVpnIps []netip.Addr) {
c.f.lightHouse.Lock()
remoteList := c.f.lightHouse.unlockedGetRemoteList(vpnIp)
remoteList.Lock()
defer remoteList.Unlock()
c.f.lightHouse.Unlock()
remoteList.unlockedSetRelay(vpnIp, vpnIp, relayVpnIps)
}
// GetFromTun will pull a packet off the tun side of nebula
func (c *Control) GetFromTun(block bool) []byte {
return c.f.inside.(*Tun).Get(block)
return c.f.inside.(*overlay.TestTun).Get(block)
}
// GetFromUDP will pull a udp packet off the udp side of nebula
func (c *Control) GetFromUDP(block bool) *udp.Packet {
return c.f.outside.Get(block)
return c.f.outside.(*udp.TesterConn).Get(block)
}
func (c *Control) GetUDPTxChan() <-chan *udp.Packet {
return c.f.outside.TxPackets
return c.f.outside.(*udp.TesterConn).TxPackets
}
func (c *Control) GetTunTxChan() <-chan []byte {
return c.f.inside.(*Tun).txPackets
return c.f.inside.(*overlay.TestTun).TxPackets
}
// InjectUDPPacket will inject a packet into the udp side of nebula
func (c *Control) InjectUDPPacket(p *udp.Packet) {
c.f.outside.Send(p)
c.f.outside.(*udp.TesterConn).Send(p)
}
// InjectTunUDPPacket puts a udp packet on the tun interface. Using UDP here because it's a simpler protocol
func (c *Control) InjectTunUDPPacket(toIp net.IP, toPort uint16, fromPort uint16, data []byte) {
func (c *Control) InjectTunUDPPacket(toIp netip.Addr, toPort uint16, fromPort uint16, data []byte) {
//TODO: IPV6-WORK
ip := layers.IPv4{
Version: 4,
TTL: 64,
Protocol: layers.IPProtocolUDP,
SrcIP: c.f.inside.CidrNet().IP,
DstIP: toIp,
SrcIP: c.f.inside.Cidr().Addr().Unmap().AsSlice(),
DstIP: toIp.Unmap().AsSlice(),
}
udp := layers.UDP{
@@ -114,19 +128,35 @@ func (c *Control) InjectTunUDPPacket(toIp net.IP, toPort uint16, fromPort uint16
panic(err)
}
c.f.inside.(*Tun).Send(buffer.Bytes())
c.f.inside.(*overlay.TestTun).Send(buffer.Bytes())
}
func (c *Control) GetUDPAddr() string {
return c.f.outside.Addr.String()
func (c *Control) GetVpnIp() netip.Addr {
return c.f.myVpnNet.Addr()
}
func (c *Control) KillPendingTunnel(vpnIp net.IP) bool {
hostinfo, ok := c.f.handshakeManager.pendingHostMap.Hosts[iputil.Ip2VpnIp(vpnIp)]
if !ok {
func (c *Control) GetUDPAddr() netip.AddrPort {
return c.f.outside.(*udp.TesterConn).Addr
}
func (c *Control) KillPendingTunnel(vpnIp netip.Addr) bool {
hostinfo := c.f.handshakeManager.QueryVpnIp(vpnIp)
if hostinfo == nil {
return false
}
c.f.handshakeManager.pendingHostMap.DeleteHostInfo(hostinfo)
c.f.handshakeManager.DeleteHostInfo(hostinfo)
return true
}
func (c *Control) GetHostmap() *HostMap {
return c.f.hostMap
}
func (c *Control) GetCert() *cert.NebulaCertificate {
return c.f.pki.GetCertState().Certificate
}
func (c *Control) ReHandshake(vpnIp netip.Addr) {
c.f.handshakeManager.StartHandshake(vpnIp, nil)
}
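Most of the tester helpers above follow the same pattern: the production Interface holds generic tun/udp interfaces, and the e2e-only build asserts them back to the concrete test doubles (overlay.TestTun, udp.TesterConn). A toy illustration of why that assertion is only safe under the test build; all types here are hypothetical stand-ins, not Nebula's:

```go
package main

import "fmt"

// Conn stands in for the generic connection interface the Interface holds.
type Conn interface{ Send(p []byte) }

// TesterConn stands in for the test double that records transmitted packets.
type TesterConn struct{ tx [][]byte }

func (t *TesterConn) Send(p []byte) { t.tx = append(t.tx, p) }

func main() {
	var outside Conn = &TesterConn{}
	// This assertion panics if `outside` is a real connection rather than the
	// test double, which is exactly what the e2e-only build tag guards against.
	tc := outside.(*TesterConn)
	tc.Send([]byte("packet"))
	fmt.Println(len(tc.tx))
}
```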


@@ -1,13 +0,0 @@
[Unit]
Description=nebula
Wants=basic.target network-online.target
After=basic.target network.target network-online.target
[Service]
SyslogIdentifier=nebula
ExecReload=/bin/kill -HUP $MAINPID
ExecStart=/usr/bin/nebula -config /etc/nebula/config.yml
Restart=always
[Install]
WantedBy=multi-user.target


@@ -1,15 +0,0 @@
[Unit]
Description=Nebula overlay networking tool
After=basic.target network.target network-online.target
Before=sshd.service
Wants=basic.target network-online.target
[Service]
ExecReload=/bin/kill -HUP $MAINPID
ExecStart=/usr/bin/nebula -config /etc/nebula/config.yml
Restart=always
SyslogIdentifier=nebula
[Install]
WantedBy=multi-user.target

dist/windows/wintun/LICENSE.txt (vendored, new file, 84 lines added)

@@ -0,0 +1,84 @@
Prebuilt Binaries License
-------------------------
1. DEFINITIONS. "Software" means the precise contents of the "wintun.dll"
files that are included in the .zip file that contains this document as
downloaded from wintun.net/builds.
2. LICENSE GRANT. WireGuard LLC grants to you a non-exclusive and
non-transferable right to use Software for lawful purposes under certain
obligations and limited rights as set forth in this agreement.
3. RESTRICTIONS. Software is owned and copyrighted by WireGuard LLC. It is
licensed, not sold. Title to Software and all associated intellectual
property rights are retained by WireGuard. You must not:
a. reverse engineer, decompile, disassemble, extract from, or otherwise
modify the Software;
b. modify or create derivative work based upon Software in whole or in
parts, except insofar as only the API interfaces of the "wintun.h" file
distributed alongside the Software (the "Permitted API") are used;
c. remove any proprietary notices, labels, or copyrights from the Software;
d. resell, redistribute, lease, rent, transfer, sublicense, or otherwise
transfer rights of the Software without the prior written consent of
WireGuard LLC, except insofar as the Software is distributed alongside
other software that uses the Software only via the Permitted API;
e. use the name of WireGuard LLC, the WireGuard project, the Wintun
project, or the names of its contributors to endorse or promote products
derived from the Software without specific prior written consent.
4. LIMITED WARRANTY. THE SOFTWARE IS PROVIDED "AS IS" AND WITHOUT WARRANTY OF
ANY KIND. WIREGUARD LLC HEREBY EXCLUDES AND DISCLAIMS ALL IMPLIED OR
STATUTORY WARRANTIES, INCLUDING ANY WARRANTIES OF MERCHANTABILITY, FITNESS
FOR A PARTICULAR PURPOSE, QUALITY, NON-INFRINGEMENT, TITLE, RESULTS,
EFFORTS, OR QUIET ENJOYMENT. THERE IS NO WARRANTY THAT THE PRODUCT WILL BE
ERROR-FREE OR WILL FUNCTION WITHOUT INTERRUPTION. YOU ASSUME THE ENTIRE
RISK FOR THE RESULTS OBTAINED USING THE PRODUCT. TO THE EXTENT THAT
WIREGUARD LLC MAY NOT DISCLAIM ANY WARRANTY AS A MATTER OF APPLICABLE LAW,
THE SCOPE AND DURATION OF SUCH WARRANTY WILL BE THE MINIMUM PERMITTED UNDER
SUCH LAW. ALL EXPRESS OR IMPLIED CONDITIONS, REPRESENTATIONS AND
WARRANTIES, INCLUDING ANY IMPLIED WARRANTY OF MERCHANTABILITY, FITNESS FOR
A PARTICULAR PURPOSE OR NON-INFRINGEMENT ARE DISCLAIMED, EXCEPT TO THE
EXTENT THAT THESE DISCLAIMERS ARE HELD TO BE LEGALLY INVALID.
5. LIMITATION OF LIABILITY. To the extent not prohibited by law, in no event
WireGuard LLC or any third-party-developer will be liable for any lost
revenue, profit or data or for special, indirect, consequential, incidental
or punitive damages, however caused regardless of the theory of liability,
arising out of or related to the use of or inability to use Software, even
if WireGuard LLC has been advised of the possibility of such damages.
Solely you are responsible for determining the appropriateness of using
Software and accept full responsibility for all risks associated with its
exercise of rights under this agreement, including but not limited to the
risks and costs of program errors, compliance with applicable laws, damage
to or loss of data, programs or equipment, and unavailability or
interruption of operations. The foregoing limitations will apply even if
the above stated warranty fails of its essential purpose. You acknowledge,
that it is in the nature of software that software is complex and not
completely free of errors. In no event shall WireGuard LLC or any
third-party-developer be liable to you under any theory for any damages
suffered by you or any user of Software or for any special, incidental,
indirect, consequential or similar damages (including without limitation
damages for loss of business profits, business interruption, loss of
business information or any other pecuniary loss) arising out of the use or
inability to use Software, even if WireGuard LLC has been advised of the
possibility of such damages and regardless of the legal or equitable theory
(contract, tort, or otherwise) upon which the claim is based.
6. TERMINATION. This agreement is effective until terminated. You may
terminate this agreement at any time. This agreement will terminate
immediately without notice from WireGuard LLC if you fail to comply with
the terms and conditions of this agreement. Upon termination, you must
delete Software and all copies of Software and cease all forms of
distribution of Software.
7. SEVERABILITY. If any provision of this agreement is held to be
unenforceable, this agreement will remain in effect with the provision
omitted, unless omission would frustrate the intent of the parties, in
which case this agreement will immediately terminate.
8. RESERVATION OF RIGHTS. All rights not expressly granted in this agreement
are reserved by WireGuard LLC. For example, WireGuard LLC reserves the
right at any time to cease development of Software, to alter distribution
details, features, specifications, capabilities, functions, licensing
terms, release dates, APIs, ABIs, general availability, or other
characteristics of the Software.

dist/windows/wintun/README.md (vendored, new file, 339 lines added)

@@ -0,0 +1,339 @@
# [Wintun Network Adapter](https://www.wintun.net/)
### TUN Device Driver for Windows
This is a layer 3 TUN driver for Windows 7, 8, 8.1, and 10. Originally created for [WireGuard](https://www.wireguard.com/), it is intended to be useful to a wide variety of projects that require layer 3 tunneling devices with implementations primarily in userspace.
## Installation
Wintun is deployed as a platform-specific `wintun.dll` file. Install the `wintun.dll` file side-by-side with your application. Download the dll from [wintun.net](https://www.wintun.net/), alongside the header file for your application described below.
## Usage
Include the [`wintun.h` file](https://git.zx2c4.com/wintun/tree/api/wintun.h) in your project simply by copying it there and dynamically load the `wintun.dll` using [`LoadLibraryEx()`](https://docs.microsoft.com/en-us/windows/win32/api/libloaderapi/nf-libloaderapi-loadlibraryexa) and [`GetProcAddress()`](https://docs.microsoft.com/en-us/windows/win32/api/libloaderapi/nf-libloaderapi-getprocaddress) to resolve each function, using the typedefs provided in the header file. The [`InitializeWintun` function in the example.c code](https://git.zx2c4.com/wintun/tree/example/example.c) provides this in a function that you can simply copy and paste.
With the library setup, Wintun can then be used by first creating an adapter, configuring it, and then setting its status to "up". Adapters have names (e.g. "OfficeNet") and types (e.g. "Wintun").
```C
WINTUN_ADAPTER_HANDLE Adapter1 = WintunCreateAdapter(L"OfficeNet", L"Wintun", &SomeFixedGUID1);
WINTUN_ADAPTER_HANDLE Adapter2 = WintunCreateAdapter(L"HomeNet", L"Wintun", &SomeFixedGUID2);
WINTUN_ADAPTER_HANDLE Adapter3 = WintunCreateAdapter(L"Data Center", L"Wintun", &SomeFixedGUID3);
```
After creating an adapter, we can use it by starting a session:
```C
WINTUN_SESSION_HANDLE Session = WintunStartSession(Adapter2, 0x400000);
```
Then, the `WintunAllocateSendPacket` and `WintunSendPacket` functions can be used for sending packets ([used by `SendPackets` in the example.c code](https://git.zx2c4.com/wintun/tree/example/example.c)):
```C
BYTE *OutgoingPacket = WintunAllocateSendPacket(Session, PacketDataSize);
if (OutgoingPacket)
{
memcpy(OutgoingPacket, PacketData, PacketDataSize);
WintunSendPacket(Session, OutgoingPacket);
}
else if (GetLastError() != ERROR_BUFFER_OVERFLOW) // Silently drop packets if the ring is full
Log(L"Packet write failed");
```
And the `WintunReceivePacket` and `WintunReleaseReceivePacket` functions can be used for receiving packets ([used by `ReceivePackets` in the example.c code](https://git.zx2c4.com/wintun/tree/example/example.c)):
```C
for (;;)
{
DWORD IncomingPacketSize;
BYTE *IncomingPacket = WintunReceivePacket(Session, &IncomingPacketSize);
if (IncomingPacket)
{
DoSomethingWithPacket(IncomingPacket, IncomingPacketSize);
WintunReleaseReceivePacket(Session, IncomingPacket);
}
else if (GetLastError() == ERROR_NO_MORE_ITEMS)
WaitForSingleObject(WintunGetReadWaitEvent(Session), INFINITE);
else
{
Log(L"Packet read failed");
break;
}
}
```
Some high performance use cases may want to spin on `WintunReceivePackets` for a number of cycles before falling back to waiting on the read-wait event.
You are **highly encouraged** to read the [**example.c short example**](https://git.zx2c4.com/wintun/tree/example/example.c) to see how to put together a simple userspace network tunnel.
The various functions and definitions are [documented in the reference below](#Reference).
## Reference
### Macro Definitions
#### WINTUN\_MAX\_POOL
`#define WINTUN_MAX_POOL 256`
Maximum pool name length including zero terminator
#### WINTUN\_MIN\_RING\_CAPACITY
`#define WINTUN_MIN_RING_CAPACITY 0x20000 /* 128kiB */`
Minimum ring capacity.
#### WINTUN\_MAX\_RING\_CAPACITY
`#define WINTUN_MAX_RING_CAPACITY 0x4000000 /* 64MiB */`
Maximum ring capacity.
#### WINTUN\_MAX\_IP\_PACKET\_SIZE
`#define WINTUN_MAX_IP_PACKET_SIZE 0xFFFF`
Maximum IP packet size
### Typedefs
#### WINTUN\_ADAPTER\_HANDLE
`typedef void* WINTUN_ADAPTER_HANDLE`
A handle representing Wintun adapter
#### WINTUN\_ENUM\_CALLBACK
`typedef BOOL(* WINTUN_ENUM_CALLBACK) (WINTUN_ADAPTER_HANDLE Adapter, LPARAM Param)`
Called by WintunEnumAdapters for each adapter in the pool.
**Parameters**
- *Adapter*: Adapter handle, which will be freed when this function returns.
- *Param*: An application-defined value passed to the WintunEnumAdapters.
**Returns**
Non-zero to continue iterating adapters; zero to stop.
#### WINTUN\_LOGGER\_CALLBACK
`typedef void(* WINTUN_LOGGER_CALLBACK) (WINTUN_LOGGER_LEVEL Level, DWORD64 Timestamp, const WCHAR *Message)`
Called by internal logger to report diagnostic messages
**Parameters**
- *Level*: Message level.
- *Timestamp*: Message timestamp in 100ns intervals since 1601-01-01 UTC.
- *Message*: Message text.
#### WINTUN\_SESSION\_HANDLE
`typedef void* WINTUN_SESSION_HANDLE`
A handle representing Wintun session
### Enumeration Types
#### WINTUN\_LOGGER\_LEVEL
`enum WINTUN_LOGGER_LEVEL`
Determines the level of logging, passed to WINTUN\_LOGGER\_CALLBACK.
- *WINTUN\_LOG\_INFO*: Informational
- *WINTUN\_LOG\_WARN*: Warning
- *WINTUN\_LOG\_ERR*: Error
Enumerator
### Functions
#### WintunCreateAdapter()
`WINTUN_ADAPTER_HANDLE WintunCreateAdapter (const WCHAR * Name, const WCHAR * TunnelType, const GUID * RequestedGUID)`
Creates a new Wintun adapter.
**Parameters**
- *Name*: The requested name of the adapter. Zero-terminated string of up to MAX\_ADAPTER\_NAME-1 characters.
- *TunnelType*: Name of the adapter tunnel type. Zero-terminated string of up to MAX\_ADAPTER\_NAME-1 characters.
- *RequestedGUID*: The GUID of the created network adapter, which then influences NLA generation deterministically. If it is set to NULL, the GUID is chosen by the system at random, and hence a new NLA entry is created for each new adapter. It is called "requested" GUID because the API it uses is completely undocumented, and so there could be minor interesting complications with its usage.
**Returns**
If the function succeeds, the return value is the adapter handle. Must be released with WintunCloseAdapter. If the function fails, the return value is NULL. To get extended error information, call GetLastError.
#### WintunOpenAdapter()
`WINTUN_ADAPTER_HANDLE WintunOpenAdapter (const WCHAR * Name)`
Opens an existing Wintun adapter.
**Parameters**
- *Name*: The requested name of the adapter. Zero-terminated string of up to MAX\_ADAPTER\_NAME-1 characters.
**Returns**
If the function succeeds, the return value is adapter handle. Must be released with WintunCloseAdapter. If the function fails, the return value is NULL. To get extended error information, call GetLastError.
#### WintunCloseAdapter()
`void WintunCloseAdapter (WINTUN_ADAPTER_HANDLE Adapter)`
Releases Wintun adapter resources and, if adapter was created with WintunCreateAdapter, removes adapter.
**Parameters**
- *Adapter*: Adapter handle obtained with WintunCreateAdapter or WintunOpenAdapter.
#### WintunDeleteDriver()
`BOOL WintunDeleteDriver ()`
Deletes the Wintun driver if there are no more adapters in use.
**Returns**
If the function succeeds, the return value is nonzero. If the function fails, the return value is zero. To get extended error information, call GetLastError.
#### WintunGetAdapterLuid()
`void WintunGetAdapterLuid (WINTUN_ADAPTER_HANDLE Adapter, NET_LUID * Luid)`
Returns the LUID of the adapter.
**Parameters**
- *Adapter*: Adapter handle obtained with WintunOpenAdapter or WintunCreateAdapter
- *Luid*: Pointer to LUID to receive adapter LUID.
#### WintunGetRunningDriverVersion()
`DWORD WintunGetRunningDriverVersion (void )`
Determines the version of the Wintun driver currently loaded.
**Returns**
If the function succeeds, the return value is the version number. If the function fails, the return value is zero. To get extended error information, call GetLastError. Possible errors include the following: ERROR\_FILE\_NOT\_FOUND Wintun not loaded
#### WintunSetLogger()
`void WintunSetLogger (WINTUN_LOGGER_CALLBACK NewLogger)`
Sets logger callback function.
**Parameters**
- *NewLogger*: Pointer to callback function to use as a new global logger. NewLogger may be called from various threads concurrently. Should the logging require serialization, you must handle serialization in NewLogger. Set to NULL to disable.
#### WintunStartSession()
`WINTUN_SESSION_HANDLE WintunStartSession (WINTUN_ADAPTER_HANDLE Adapter, DWORD Capacity)`
Starts Wintun session.
**Parameters**
- *Adapter*: Adapter handle obtained with WintunOpenAdapter or WintunCreateAdapter
- *Capacity*: Rings capacity. Must be between WINTUN\_MIN\_RING\_CAPACITY and WINTUN\_MAX\_RING\_CAPACITY (incl.) Must be a power of two.
**Returns**
Wintun session handle. Must be released with WintunEndSession. If the function fails, the return value is NULL. To get extended error information, call GetLastError.
#### WintunEndSession()
`void WintunEndSession (WINTUN_SESSION_HANDLE Session)`
Ends Wintun session.
**Parameters**
- *Session*: Wintun session handle obtained with WintunStartSession
#### WintunGetReadWaitEvent()
`HANDLE WintunGetReadWaitEvent (WINTUN_SESSION_HANDLE Session)`
Gets Wintun session's read-wait event handle.
**Parameters**
- *Session*: Wintun session handle obtained with WintunStartSession
**Returns**
Pointer to receive event handle to wait for available data when reading. Should WintunReceivePackets return ERROR\_NO\_MORE\_ITEMS (after spinning on it for a while under heavy load), wait for this event to become signaled before retrying WintunReceivePackets. Do not call CloseHandle on this event - it is managed by the session.
#### WintunReceivePacket()
`BYTE* WintunReceivePacket (WINTUN_SESSION_HANDLE Session, DWORD * PacketSize)`
Retrieves one packet. After the packet content is consumed, call WintunReleaseReceivePacket with Packet returned from this function to release internal buffer. This function is thread-safe.
**Parameters**
- *Session*: Wintun session handle obtained with WintunStartSession
- *PacketSize*: Pointer to receive packet size.
**Returns**
Pointer to layer 3 IPv4 or IPv6 packet. Client may modify its content at will. If the function fails, the return value is NULL. To get extended error information, call GetLastError. Possible errors include the following: ERROR\_HANDLE\_EOF Wintun adapter is terminating; ERROR\_NO\_MORE\_ITEMS Wintun buffer is exhausted; ERROR\_INVALID\_DATA Wintun buffer is corrupt
#### WintunReleaseReceivePacket()
`void WintunReleaseReceivePacket (WINTUN_SESSION_HANDLE Session, const BYTE * Packet)`
Releases internal buffer after the received packet has been processed by the client. This function is thread-safe.
**Parameters**
- *Session*: Wintun session handle obtained with WintunStartSession
- *Packet*: Packet obtained with WintunReceivePacket
#### WintunAllocateSendPacket()
`BYTE* WintunAllocateSendPacket (WINTUN_SESSION_HANDLE Session, DWORD PacketSize)`
Allocates memory for a packet to send. After the memory is filled with packet data, call WintunSendPacket to send and release internal buffer. WintunAllocateSendPacket is thread-safe and the WintunAllocateSendPacket order of calls define the packet sending order.
**Parameters**
- *Session*: Wintun session handle obtained with WintunStartSession
- *PacketSize*: Exact packet size. Must be less or equal to WINTUN\_MAX\_IP\_PACKET\_SIZE.
**Returns**
Returns pointer to memory where to prepare layer 3 IPv4 or IPv6 packet for sending. If the function fails, the return value is NULL. To get extended error information, call GetLastError. Possible errors include the following: ERROR\_HANDLE\_EOF Wintun adapter is terminating; ERROR\_BUFFER\_OVERFLOW Wintun buffer is full;
#### WintunSendPacket()
`void WintunSendPacket (WINTUN_SESSION_HANDLE Session, const BYTE * Packet)`
Sends the packet and releases internal buffer. WintunSendPacket is thread-safe, but the WintunAllocateSendPacket order of calls define the packet sending order. This means the packet is not guaranteed to be sent in the WintunSendPacket yet.
**Parameters**
- *Session*: Wintun session handle obtained with WintunStartSession
- *Packet*: Packet obtained with WintunAllocateSendPacket
## Building
**Do not distribute drivers or files named "Wintun", as they will most certainly clash with official deployments. Instead distribute [`wintun.dll` as downloaded from wintun.net](https://www.wintun.net).**
General requirements:
- [Visual Studio 2019](https://visualstudio.microsoft.com/downloads/) with Windows SDK
- [Windows Driver Kit](https://docs.microsoft.com/en-us/windows-hardware/drivers/download-the-wdk)
`wintun.sln` may be opened in Visual Studio for development and building. Be sure to run `bcdedit /set testsigning on` and then reboot before to enable unsigned driver loading. The default run sequence (F5) in Visual Studio will build the example project and its dependencies.
## License
The entire contents of [the repository](https://git.zx2c4.com/wintun/), including all documentation and example code, is "Copyright © 2018-2021 WireGuard LLC. All Rights Reserved." Source code is licensed under the [GPLv2](COPYING). Prebuilt binaries from [wintun.net](https://www.wintun.net/) are released under a more permissive license suitable for more forms of software contained inside of the .zip files distributed there.

dist/windows/wintun/bin/amd64/wintun.dll (vendored binary file, contents not shown)

dist/windows/wintun/bin/arm/wintun.dll (vendored binary file, contents not shown)

dist/windows/wintun/bin/arm64/wintun.dll (vendored binary file, contents not shown)

dist/windows/wintun/bin/x86/wintun.dll (vendored binary file, contents not shown)

dist/windows/wintun/include/wintun.h (vendored, new file, 270 lines added)

@@ -0,0 +1,270 @@
/* SPDX-License-Identifier: GPL-2.0 OR MIT
*
* Copyright (C) 2018-2021 WireGuard LLC. All Rights Reserved.
*/
#pragma once
#include <winsock2.h>
#include <windows.h>
#include <ipexport.h>
#include <ifdef.h>
#include <ws2ipdef.h>
#ifdef __cplusplus
extern "C" {
#endif
#ifndef ALIGNED
# if defined(_MSC_VER)
# define ALIGNED(n) __declspec(align(n))
# elif defined(__GNUC__)
# define ALIGNED(n) __attribute__((aligned(n)))
# else
# error "Unable to define ALIGNED"
# endif
#endif
/* MinGW is missing this one, unfortunately. */
#ifndef _Post_maybenull_
# define _Post_maybenull_
#endif
#pragma warning(push)
#pragma warning(disable : 4324) /* structure was padded due to alignment specifier */
/**
* A handle representing Wintun adapter
*/
typedef struct _WINTUN_ADAPTER *WINTUN_ADAPTER_HANDLE;
/**
* Creates a new Wintun adapter.
*
* @param Name The requested name of the adapter. Zero-terminated string of up to MAX_ADAPTER_NAME-1
* characters.
*
* @param TunnelType Name of the adapter tunnel type. Zero-terminated string of up to MAX_ADAPTER_NAME-1
* characters.
*
* @param RequestedGUID The GUID of the created network adapter, which then influences NLA generation deterministically.
* If it is set to NULL, the GUID is chosen by the system at random, and hence a new NLA entry is
* created for each new adapter. It is called "requested" GUID because the API it uses is
* completely undocumented, and so there could be minor interesting complications with its usage.
*
* @return If the function succeeds, the return value is the adapter handle. Must be released with
* WintunCloseAdapter. If the function fails, the return value is NULL. To get extended error information, call
* GetLastError.
*/
typedef _Must_inspect_result_
_Return_type_success_(return != NULL)
_Post_maybenull_
WINTUN_ADAPTER_HANDLE(WINAPI WINTUN_CREATE_ADAPTER_FUNC)
(_In_z_ LPCWSTR Name, _In_z_ LPCWSTR TunnelType, _In_opt_ const GUID *RequestedGUID);
/**
* Opens an existing Wintun adapter.
*
* @param Name The requested name of the adapter. Zero-terminated string of up to MAX_ADAPTER_NAME-1
* characters.
*
* @return If the function succeeds, the return value is the adapter handle. Must be released with
* WintunCloseAdapter. If the function fails, the return value is NULL. To get extended error information, call
* GetLastError.
*/
typedef _Must_inspect_result_
_Return_type_success_(return != NULL)
_Post_maybenull_
WINTUN_ADAPTER_HANDLE(WINAPI WINTUN_OPEN_ADAPTER_FUNC)(_In_z_ LPCWSTR Name);
/**
* Releases Wintun adapter resources and, if adapter was created with WintunCreateAdapter, removes adapter.
*
* @param Adapter Adapter handle obtained with WintunCreateAdapter or WintunOpenAdapter.
*/
typedef VOID(WINAPI WINTUN_CLOSE_ADAPTER_FUNC)(_In_opt_ WINTUN_ADAPTER_HANDLE Adapter);
/**
* Deletes the Wintun driver if there are no more adapters in use.
*
* @return If the function succeeds, the return value is nonzero. If the function fails, the return value is zero. To
* get extended error information, call GetLastError.
*/
typedef _Return_type_success_(return != FALSE)
BOOL(WINAPI WINTUN_DELETE_DRIVER_FUNC)(VOID);
/**
* Returns the LUID of the adapter.
*
* @param Adapter Adapter handle obtained with WintunCreateAdapter or WintunOpenAdapter
*
* @param Luid Pointer to LUID to receive adapter LUID.
*/
typedef VOID(WINAPI WINTUN_GET_ADAPTER_LUID_FUNC)(_In_ WINTUN_ADAPTER_HANDLE Adapter, _Out_ NET_LUID *Luid);
/**
* Determines the version of the Wintun driver currently loaded.
*
* @return If the function succeeds, the return value is the version number. If the function fails, the return value is
* zero. To get extended error information, call GetLastError. Possible errors include the following:
* ERROR_FILE_NOT_FOUND Wintun not loaded
*/
typedef _Return_type_success_(return != 0)
DWORD(WINAPI WINTUN_GET_RUNNING_DRIVER_VERSION_FUNC)(VOID);
/**
* Determines the level of logging, passed to WINTUN_LOGGER_CALLBACK.
*/
typedef enum
{
WINTUN_LOG_INFO, /**< Informational */
WINTUN_LOG_WARN, /**< Warning */
WINTUN_LOG_ERR /**< Error */
} WINTUN_LOGGER_LEVEL;
/**
* Called by internal logger to report diagnostic messages
*
* @param Level Message level.
*
 * @param Timestamp Message timestamp in 100ns intervals since 1601-01-01 UTC.
*
* @param Message Message text.
*/
typedef VOID(CALLBACK *WINTUN_LOGGER_CALLBACK)(
_In_ WINTUN_LOGGER_LEVEL Level,
_In_ DWORD64 Timestamp,
_In_z_ LPCWSTR Message);
/**
* Sets logger callback function.
*
* @param NewLogger Pointer to callback function to use as a new global logger. NewLogger may be called from various
* threads concurrently. Should the logging require serialization, you must handle serialization in
* NewLogger. Set to NULL to disable.
*/
typedef VOID(WINAPI WINTUN_SET_LOGGER_FUNC)(_In_ WINTUN_LOGGER_CALLBACK NewLogger);
/**
* Minimum ring capacity.
*/
#define WINTUN_MIN_RING_CAPACITY 0x20000 /* 128kiB */
/**
* Maximum ring capacity.
*/
#define WINTUN_MAX_RING_CAPACITY 0x4000000 /* 64MiB */
/**
* A handle representing Wintun session
*/
typedef struct _TUN_SESSION *WINTUN_SESSION_HANDLE;
/**
* Starts Wintun session.
*
* @param Adapter Adapter handle obtained with WintunOpenAdapter or WintunCreateAdapter
*
* @param Capacity Rings capacity. Must be between WINTUN_MIN_RING_CAPACITY and WINTUN_MAX_RING_CAPACITY (incl.)
* Must be a power of two.
*
* @return Wintun session handle. Must be released with WintunEndSession. If the function fails, the return value is
* NULL. To get extended error information, call GetLastError.
*/
typedef _Must_inspect_result_
_Return_type_success_(return != NULL)
_Post_maybenull_
WINTUN_SESSION_HANDLE(WINAPI WINTUN_START_SESSION_FUNC)(_In_ WINTUN_ADAPTER_HANDLE Adapter, _In_ DWORD Capacity);
/**
* Ends Wintun session.
*
* @param Session Wintun session handle obtained with WintunStartSession
*/
typedef VOID(WINAPI WINTUN_END_SESSION_FUNC)(_In_ WINTUN_SESSION_HANDLE Session);
/**
* Gets Wintun session's read-wait event handle.
*
* @param Session Wintun session handle obtained with WintunStartSession
*
* @return Pointer to receive event handle to wait for available data when reading. Should
* WintunReceivePackets return ERROR_NO_MORE_ITEMS (after spinning on it for a while under heavy
* load), wait for this event to become signaled before retrying WintunReceivePackets. Do not call
* CloseHandle on this event - it is managed by the session.
*/
typedef HANDLE(WINAPI WINTUN_GET_READ_WAIT_EVENT_FUNC)(_In_ WINTUN_SESSION_HANDLE Session);
/**
* Maximum IP packet size
*/
#define WINTUN_MAX_IP_PACKET_SIZE 0xFFFF
/**
 * Retrieves one packet. After the packet content is consumed, call WintunReleaseReceivePacket with Packet returned
* from this function to release internal buffer. This function is thread-safe.
*
* @param Session Wintun session handle obtained with WintunStartSession
*
* @param PacketSize Pointer to receive packet size.
*
* @return Pointer to layer 3 IPv4 or IPv6 packet. Client may modify its content at will. If the function fails, the
* return value is NULL. To get extended error information, call GetLastError. Possible errors include the
* following:
* ERROR_HANDLE_EOF Wintun adapter is terminating;
* ERROR_NO_MORE_ITEMS Wintun buffer is exhausted;
* ERROR_INVALID_DATA Wintun buffer is corrupt
*/
typedef _Must_inspect_result_
_Return_type_success_(return != NULL)
_Post_maybenull_
_Post_writable_byte_size_(*PacketSize)
BYTE *(WINAPI WINTUN_RECEIVE_PACKET_FUNC)(_In_ WINTUN_SESSION_HANDLE Session, _Out_ DWORD *PacketSize);
/**
* Releases internal buffer after the received packet has been processed by the client. This function is thread-safe.
*
* @param Session Wintun session handle obtained with WintunStartSession
*
* @param Packet Packet obtained with WintunReceivePacket
*/
typedef VOID(
WINAPI WINTUN_RELEASE_RECEIVE_PACKET_FUNC)(_In_ WINTUN_SESSION_HANDLE Session, _In_ const BYTE *Packet);
/**
* Allocates memory for a packet to send. After the memory is filled with packet data, call WintunSendPacket to send
* and release internal buffer. WintunAllocateSendPacket is thread-safe and the WintunAllocateSendPacket order of
* calls define the packet sending order.
*
* @param Session Wintun session handle obtained with WintunStartSession
*
* @param PacketSize Exact packet size. Must be less or equal to WINTUN_MAX_IP_PACKET_SIZE.
*
* @return Returns pointer to memory where to prepare layer 3 IPv4 or IPv6 packet for sending. If the function fails,
* the return value is NULL. To get extended error information, call GetLastError. Possible errors include the
* following:
* ERROR_HANDLE_EOF Wintun adapter is terminating;
* ERROR_BUFFER_OVERFLOW Wintun buffer is full;
*/
typedef _Must_inspect_result_
_Return_type_success_(return != NULL)
_Post_maybenull_
_Post_writable_byte_size_(PacketSize)
BYTE *(WINAPI WINTUN_ALLOCATE_SEND_PACKET_FUNC)(_In_ WINTUN_SESSION_HANDLE Session, _In_ DWORD PacketSize);
/**
* Sends the packet and releases internal buffer. WintunSendPacket is thread-safe, but the order of
* WintunAllocateSendPacket calls defines the packet sending order. This means the packet is not guaranteed to have
* been sent by the time WintunSendPacket returns.
*
* @param Session Wintun session handle obtained with WintunStartSession
*
* @param Packet Packet obtained with WintunAllocateSendPacket
*/
typedef VOID(WINAPI WINTUN_SEND_PACKET_FUNC)(_In_ WINTUN_SESSION_HANDLE Session, _In_ const BYTE *Packet);
#pragma warning(pop)
#ifdef __cplusplus
}
#endif
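On the send side, the ordering rule documented above (allocation order, not send order, determines transmission order) makes the natural shape allocate, fill, send. A minimal sketch under the same assumptions as the receive example: hypothetical proc loading, a caller-supplied session handle, and a made-up helper name sendPacket.
```
//go:build windows

package wintuntx

import (
	"unsafe"

	"golang.org/x/sys/windows"
)

var (
	wintunDLL              = windows.NewLazyDLL("wintun.dll")
	procAllocateSendPacket = wintunDLL.NewProc("WintunAllocateSendPacket")
	procSendPacket         = wintunDLL.NewProc("WintunSendPacket")
)

// sendPacket copies one layer 3 packet into a ring slot and queues it.
// The WintunAllocateSendPacket call order decides when it actually goes out.
func sendPacket(session uintptr, ipPacket []byte) error {
	buf, _, err := procAllocateSendPacket.Call(session, uintptr(len(ipPacket)))
	if buf == 0 {
		// err will be ERROR_BUFFER_OVERFLOW when the ring is full or
		// ERROR_HANDLE_EOF when the adapter is terminating.
		return err
	}
	copy(unsafe.Slice((*byte)(unsafe.Pointer(buf)), len(ipPacket)), ipPacket)
	procSendPacket.Call(session, buf)
	return nil
}
```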


@@ -3,13 +3,14 @@ package nebula
import (
"fmt"
"net"
"net/netip"
"strconv"
"strings"
"sync"
"github.com/miekg/dns"
"github.com/sirupsen/logrus"
"github.com/slackhq/nebula/config"
"github.com/slackhq/nebula/iputil"
)
// This whole thing should be rewritten to use context
@@ -33,37 +34,38 @@ func newDnsRecords(hostMap *HostMap) *dnsRecords {
func (d *dnsRecords) Query(data string) string {
d.RLock()
if r, ok := d.dnsMap[data]; ok {
d.RUnlock()
defer d.RUnlock()
if r, ok := d.dnsMap[strings.ToLower(data)]; ok {
return r
}
d.RUnlock()
return ""
}
func (d *dnsRecords) QueryCert(data string) string {
ip := net.ParseIP(data[:len(data)-1])
if ip == nil {
return ""
}
iip := iputil.Ip2VpnIp(ip)
hostinfo, err := d.hostMap.QueryVpnIp(iip)
ip, err := netip.ParseAddr(data[:len(data)-1])
if err != nil {
return ""
}
hostinfo := d.hostMap.QueryVpnIp(ip)
if hostinfo == nil {
return ""
}
q := hostinfo.GetCert()
if q == nil {
return ""
}
cert := q.Details
c := fmt.Sprintf("\"Name: %s\" \"Ips: %s\" \"Subnets %s\" \"Groups %s\" \"NotBefore %s\" \"NotAFter %s\" \"PublicKey %x\" \"IsCA %t\" \"Issuer %s\"", cert.Name, cert.Ips, cert.Subnets, cert.Groups, cert.NotBefore, cert.NotAfter, cert.PublicKey, cert.IsCA, cert.Issuer)
c := fmt.Sprintf("\"Name: %s\" \"Ips: %s\" \"Subnets %s\" \"Groups %s\" \"NotBefore %s\" \"NotAfter %s\" \"PublicKey %x\" \"IsCA %t\" \"Issuer %s\"", cert.Name, cert.Ips, cert.Subnets, cert.Groups, cert.NotBefore, cert.NotAfter, cert.PublicKey, cert.IsCA, cert.Issuer)
return c
}
func (d *dnsRecords) Add(host, data string) {
d.Lock()
d.dnsMap[host] = data
d.Unlock()
defer d.Unlock()
d.dnsMap[strings.ToLower(host)] = data
}
func parseQuery(l *logrus.Logger, m *dns.Msg, w dns.ResponseWriter) {
@@ -80,7 +82,11 @@ func parseQuery(l *logrus.Logger, m *dns.Msg, w dns.ResponseWriter) {
}
case dns.TypeTXT:
a, _, _ := net.SplitHostPort(w.RemoteAddr().String())
b := net.ParseIP(a)
b, err := netip.ParseAddr(a)
if err != nil {
return
}
// We don't answer these queries from non nebula nodes or localhost
//l.Debugf("Does %s contain %s", b, dnsR.hostMap.vpnCIDR)
if !dnsR.hostMap.vpnCIDR.Contains(b) && a != "127.0.0.1" {
@@ -96,6 +102,10 @@ func parseQuery(l *logrus.Logger, m *dns.Msg, w dns.ResponseWriter) {
}
}
}
if len(m.Answer) == 0 {
m.Rcode = dns.RcodeNameError
}
}
func handleDnsRequest(l *logrus.Logger, w dns.ResponseWriter, r *dns.Msg) {
@@ -129,13 +139,18 @@ func dnsMain(l *logrus.Logger, hostMap *HostMap, c *config.C) func() {
}
func getDnsServerAddr(c *config.C) string {
return c.GetString("lighthouse.dns.host", "") + ":" + strconv.Itoa(c.GetInt("lighthouse.dns.port", 53))
dnsHost := strings.TrimSpace(c.GetString("lighthouse.dns.host", ""))
// Old guidance was to provide the literal `[::]` in `lighthouse.dns.host` but that won't resolve.
if dnsHost == "[::]" {
dnsHost = "::"
}
return net.JoinHostPort(dnsHost, strconv.Itoa(c.GetInt("lighthouse.dns.port", 53)))
}
func startDns(l *logrus.Logger, c *config.C) {
dnsAddr = getDnsServerAddr(c)
dnsServer = &dns.Server{Addr: dnsAddr, Net: "udp"}
l.WithField("dnsListener", dnsAddr).Infof("Starting DNS responder")
l.WithField("dnsListener", dnsAddr).Info("Starting DNS responder")
err := dnsServer.ListenAndServe()
defer dnsServer.Shutdown()
if err != nil {


@@ -4,6 +4,8 @@ import (
"testing"
"github.com/miekg/dns"
"github.com/slackhq/nebula/config"
"github.com/stretchr/testify/assert"
)
func TestParsequery(t *testing.T) {
@@ -17,3 +19,40 @@ func TestParsequery(t *testing.T) {
//parseQuery(m)
}
func Test_getDnsServerAddr(t *testing.T) {
c := config.NewC(nil)
c.Settings["lighthouse"] = map[interface{}]interface{}{
"dns": map[interface{}]interface{}{
"host": "0.0.0.0",
"port": "1",
},
}
assert.Equal(t, "0.0.0.0:1", getDnsServerAddr(c))
c.Settings["lighthouse"] = map[interface{}]interface{}{
"dns": map[interface{}]interface{}{
"host": "::",
"port": "1",
},
}
assert.Equal(t, "[::]:1", getDnsServerAddr(c))
c.Settings["lighthouse"] = map[interface{}]interface{}{
"dns": map[interface{}]interface{}{
"host": "[::]",
"port": "1",
},
}
assert.Equal(t, "[::]:1", getDnsServerAddr(c))
// Make sure whitespace doesn't mess us up
c.Settings["lighthouse"] = map[interface{}]interface{}{
"dns": map[interface{}]interface{}{
"host": "[::] ",
"port": "1",
},
}
assert.Equal(t, "[::]:1", getDnsServerAddr(c))
}

docker/Dockerfile

@@ -0,0 +1,11 @@
FROM gcr.io/distroless/static:latest
ARG TARGETOS TARGETARCH
COPY build/$TARGETOS-$TARGETARCH/nebula /nebula
COPY build/$TARGETOS-$TARGETARCH/nebula-cert /nebula-cert
VOLUME ["/config"]
ENTRYPOINT ["/nebula"]
# Allow users to override the args passed to nebula
CMD ["-config", "/config/config.yml"]

docker/README.md

@@ -0,0 +1,24 @@
# NebulaOSS/nebula Docker Image
## Building
From the root of the repository, run `make docker`.
## Running
To run the built image, use the following command:
```
docker run \
--name nebula \
--network host \
--cap-add NET_ADMIN \
--volume ./config:/config \
--rm \
nebulaoss/nebula
```
A few notes:
- The `NET_ADMIN` capability is necessary to create the tun adapter on the host (this is unnecessary if the tun device is disabled.)
- `--volume ./config:/config` should point to a directory that contains your `config.yml` and any other necessary files.

File diff suppressed because it is too large.

e2e/helpers.go

@@ -0,0 +1,125 @@
package e2e
import (
"crypto/rand"
"io"
"net"
"net/netip"
"time"
"github.com/slackhq/nebula/cert"
"golang.org/x/crypto/curve25519"
"golang.org/x/crypto/ed25519"
)
// NewTestCaCert will generate a CA cert
func NewTestCaCert(before, after time.Time, ips, subnets []netip.Prefix, groups []string) (*cert.NebulaCertificate, []byte, []byte, []byte) {
pub, priv, err := ed25519.GenerateKey(rand.Reader)
if before.IsZero() {
before = time.Now().Add(time.Second * -60).Round(time.Second)
}
if after.IsZero() {
after = time.Now().Add(time.Second * 60).Round(time.Second)
}
nc := &cert.NebulaCertificate{
Details: cert.NebulaCertificateDetails{
Name: "test ca",
NotBefore: time.Unix(before.Unix(), 0),
NotAfter: time.Unix(after.Unix(), 0),
PublicKey: pub,
IsCA: true,
InvertedGroups: make(map[string]struct{}),
},
}
if len(ips) > 0 {
nc.Details.Ips = make([]*net.IPNet, len(ips))
for i, ip := range ips {
nc.Details.Ips[i] = &net.IPNet{IP: ip.Addr().AsSlice(), Mask: net.CIDRMask(ip.Bits(), ip.Addr().BitLen())}
}
}
if len(subnets) > 0 {
nc.Details.Subnets = make([]*net.IPNet, len(subnets))
for i, ip := range subnets {
nc.Details.Subnets[i] = &net.IPNet{IP: ip.Addr().AsSlice(), Mask: net.CIDRMask(ip.Bits(), ip.Addr().BitLen())}
}
}
if len(groups) > 0 {
nc.Details.Groups = groups
}
err = nc.Sign(cert.Curve_CURVE25519, priv)
if err != nil {
panic(err)
}
pem, err := nc.MarshalToPEM()
if err != nil {
panic(err)
}
return nc, pub, priv, pem
}
// NewTestCert will generate a signed certificate with the provided details.
// Expiry times are defaulted if you do not pass them in
func NewTestCert(ca *cert.NebulaCertificate, key []byte, name string, before, after time.Time, ip netip.Prefix, subnets []netip.Prefix, groups []string) (*cert.NebulaCertificate, []byte, []byte, []byte) {
issuer, err := ca.Sha256Sum()
if err != nil {
panic(err)
}
if before.IsZero() {
before = time.Now().Add(time.Second * -60).Round(time.Second)
}
if after.IsZero() {
after = time.Now().Add(time.Second * 60).Round(time.Second)
}
pub, rawPriv := x25519Keypair()
ipb := ip.Addr().AsSlice()
nc := &cert.NebulaCertificate{
Details: cert.NebulaCertificateDetails{
Name: name,
Ips: []*net.IPNet{{IP: ipb[:], Mask: net.CIDRMask(ip.Bits(), ip.Addr().BitLen())}},
//Subnets: subnets,
Groups: groups,
NotBefore: time.Unix(before.Unix(), 0),
NotAfter: time.Unix(after.Unix(), 0),
PublicKey: pub,
IsCA: false,
Issuer: issuer,
InvertedGroups: make(map[string]struct{}),
},
}
err = nc.Sign(ca.Details.Curve, key)
if err != nil {
panic(err)
}
pem, err := nc.MarshalToPEM()
if err != nil {
panic(err)
}
return nc, pub, cert.MarshalX25519PrivateKey(rawPriv), pem
}
func x25519Keypair() ([]byte, []byte) {
privkey := make([]byte, 32)
if _, err := io.ReadFull(rand.Reader, privkey); err != nil {
panic(err)
}
pubkey, err := curve25519.X25519(privkey, curve25519.Basepoint)
if err != nil {
panic(err)
}
return pubkey, privkey
}


@@ -4,44 +4,47 @@
package e2e
import (
"crypto/rand"
"fmt"
"io"
"io/ioutil"
"net"
"net/netip"
"os"
"testing"
"time"
"dario.cat/mergo"
"github.com/google/gopacket"
"github.com/google/gopacket/layers"
"github.com/imdario/mergo"
"github.com/sirupsen/logrus"
"github.com/slackhq/nebula"
"github.com/slackhq/nebula/cert"
"github.com/slackhq/nebula/config"
"github.com/slackhq/nebula/e2e/router"
"github.com/slackhq/nebula/iputil"
"github.com/stretchr/testify/assert"
"golang.org/x/crypto/curve25519"
"golang.org/x/crypto/ed25519"
"gopkg.in/yaml.v2"
)
type m map[string]interface{}
// newSimpleServer creates a nebula instance with many assumptions
func newSimpleServer(caCrt *cert.NebulaCertificate, caKey []byte, name string, udpIp net.IP, customConfig *m) (*nebula.Control, net.IP, *net.UDPAddr) {
func newSimpleServer(caCrt *cert.NebulaCertificate, caKey []byte, name string, sVpnIpNet string, overrides m) (*nebula.Control, netip.Prefix, netip.AddrPort, *config.C) {
l := NewTestLogger()
vpnIpNet := &net.IPNet{IP: make([]byte, len(udpIp)), Mask: net.IPMask{255, 255, 255, 0}}
copy(vpnIpNet.IP, udpIp)
vpnIpNet.IP[1] += 128
udpAddr := net.UDPAddr{
IP: udpIp,
Port: 4242,
vpnIpNet, err := netip.ParsePrefix(sVpnIpNet)
if err != nil {
panic(err)
}
_, _, myPrivKey, myPEM := newTestCert(caCrt, caKey, "test "+name, time.Now(), time.Now().Add(5*time.Minute), vpnIpNet, nil, []string{})
var udpAddr netip.AddrPort
if vpnIpNet.Addr().Is4() {
budpIp := vpnIpNet.Addr().As4()
budpIp[1] -= 128
udpAddr = netip.AddrPortFrom(netip.AddrFrom4(budpIp), 4242)
} else {
budpIp := vpnIpNet.Addr().As16()
budpIp[13] -= 128
udpAddr = netip.AddrPortFrom(netip.AddrFrom16(budpIp), 4242)
}
_, _, myPrivKey, myPEM := NewTestCert(caCrt, caKey, name, time.Now(), time.Now().Add(5*time.Minute), vpnIpNet, nil, []string{})
caB, err := caCrt.MarshalToPEM()
if err != nil {
@@ -71,14 +74,27 @@ func newSimpleServer(caCrt *cert.NebulaCertificate, caKey []byte, name string, u
// "try_interval": "1s",
//},
"listen": m{
"host": udpAddr.IP.String(),
"port": udpAddr.Port,
"host": udpAddr.Addr().String(),
"port": udpAddr.Port(),
},
"logging": m{
"timestamp_format": fmt.Sprintf("%v 15:04:05.000000", name),
"level": l.Level.String(),
},
"timers": m{
"pending_deletion_interval": 2,
"connection_alive_interval": 2,
},
}
if overrides != nil {
err = mergo.Merge(&overrides, mc, mergo.WithAppendSlice)
if err != nil {
panic(err)
}
mc = overrides
}
cb, err := yaml.Marshal(mc)
if err != nil {
panic(err)
@@ -87,137 +103,13 @@ func newSimpleServer(caCrt *cert.NebulaCertificate, caKey []byte, name string, u
c := config.NewC(l)
c.LoadString(string(cb))
if customConfig != nil {
ccb, err := yaml.Marshal(customConfig)
if err != nil {
panic(err)
}
ccm := map[interface{}]interface{}{}
err = yaml.Unmarshal(ccb, &ccm)
if err != nil {
panic(err)
}
err = mergo.Merge(&c.Settings, ccm, mergo.WithAppendSlice)
if err != nil {
panic(err)
}
}
control, err := nebula.Main(c, false, "e2e-test", l, nil)
if err != nil {
panic(err)
}
return control, vpnIpNet.IP, &udpAddr
}
// newTestCaCert will generate a CA cert
func newTestCaCert(before, after time.Time, ips, subnets []*net.IPNet, groups []string) (*cert.NebulaCertificate, []byte, []byte, []byte) {
pub, priv, err := ed25519.GenerateKey(rand.Reader)
if before.IsZero() {
before = time.Now().Add(time.Second * -60).Round(time.Second)
}
if after.IsZero() {
after = time.Now().Add(time.Second * 60).Round(time.Second)
}
nc := &cert.NebulaCertificate{
Details: cert.NebulaCertificateDetails{
Name: "test ca",
NotBefore: time.Unix(before.Unix(), 0),
NotAfter: time.Unix(after.Unix(), 0),
PublicKey: pub,
IsCA: true,
InvertedGroups: make(map[string]struct{}),
},
}
if len(ips) > 0 {
nc.Details.Ips = ips
}
if len(subnets) > 0 {
nc.Details.Subnets = subnets
}
if len(groups) > 0 {
nc.Details.Groups = groups
}
err = nc.Sign(priv)
if err != nil {
panic(err)
}
pem, err := nc.MarshalToPEM()
if err != nil {
panic(err)
}
return nc, pub, priv, pem
}
// newTestCert will generate a signed certificate with the provided details.
// Expiry times are defaulted if you do not pass them in
func newTestCert(ca *cert.NebulaCertificate, key []byte, name string, before, after time.Time, ip *net.IPNet, subnets []*net.IPNet, groups []string) (*cert.NebulaCertificate, []byte, []byte, []byte) {
issuer, err := ca.Sha256Sum()
if err != nil {
panic(err)
}
if before.IsZero() {
before = time.Now().Add(time.Second * -60).Round(time.Second)
}
if after.IsZero() {
after = time.Now().Add(time.Second * 60).Round(time.Second)
}
pub, rawPriv := x25519Keypair()
nc := &cert.NebulaCertificate{
Details: cert.NebulaCertificateDetails{
Name: name,
Ips: []*net.IPNet{ip},
Subnets: subnets,
Groups: groups,
NotBefore: time.Unix(before.Unix(), 0),
NotAfter: time.Unix(after.Unix(), 0),
PublicKey: pub,
IsCA: false,
Issuer: issuer,
InvertedGroups: make(map[string]struct{}),
},
}
err = nc.Sign(key)
if err != nil {
panic(err)
}
pem, err := nc.MarshalToPEM()
if err != nil {
panic(err)
}
return nc, pub, cert.MarshalX25519PrivateKey(rawPriv), pem
}
func x25519Keypair() ([]byte, []byte) {
privkey := make([]byte, 32)
if _, err := io.ReadFull(rand.Reader, privkey); err != nil {
panic(err)
}
pubkey, err := curve25519.X25519(privkey, curve25519.Basepoint)
if err != nil {
panic(err)
}
return pubkey, privkey
return control, vpnIpNet, udpAddr, c
}
type doneCb func()
@@ -238,35 +130,32 @@ func deadline(t *testing.T, seconds time.Duration) doneCb {
}
}
func assertTunnel(t *testing.T, vpnIpA, vpnIpB net.IP, controlA, controlB *nebula.Control, r *router.R) {
func assertTunnel(t *testing.T, vpnIpA, vpnIpB netip.Addr, controlA, controlB *nebula.Control, r *router.R) {
// Send a packet from them to me
controlB.InjectTunUDPPacket(vpnIpA, 80, 90, []byte("Hi from B"))
bPacket := r.RouteUntilTxTun(controlB, controlA)
bPacket := r.RouteForAllUntilTxTun(controlA)
assertUdpPacket(t, []byte("Hi from B"), bPacket, vpnIpB, vpnIpA, 90, 80)
// And once more from me to them
controlA.InjectTunUDPPacket(vpnIpB, 80, 90, []byte("Hello from A"))
aPacket := r.RouteUntilTxTun(controlA, controlB)
aPacket := r.RouteForAllUntilTxTun(controlB)
assertUdpPacket(t, []byte("Hello from A"), aPacket, vpnIpA, vpnIpB, 90, 80)
}
func assertHostInfoPair(t *testing.T, addrA, addrB *net.UDPAddr, vpnIpA, vpnIpB net.IP, controlA, controlB *nebula.Control) {
func assertHostInfoPair(t *testing.T, addrA, addrB netip.AddrPort, vpnIpA, vpnIpB netip.Addr, controlA, controlB *nebula.Control) {
// Get both host infos
hBinA := controlA.GetHostInfoByVpnIp(iputil.Ip2VpnIp(vpnIpB), false)
hBinA := controlA.GetHostInfoByVpnIp(vpnIpB, false)
assert.NotNil(t, hBinA, "Host B was not found by vpnIp in controlA")
hAinB := controlB.GetHostInfoByVpnIp(iputil.Ip2VpnIp(vpnIpA), false)
hAinB := controlB.GetHostInfoByVpnIp(vpnIpA, false)
assert.NotNil(t, hAinB, "Host A was not found by vpnIp in controlB")
// Check that both vpn and real addr are correct
assert.Equal(t, vpnIpB, hBinA.VpnIp, "Host B VpnIp is wrong in control A")
assert.Equal(t, vpnIpA, hAinB.VpnIp, "Host A VpnIp is wrong in control B")
assert.Equal(t, addrB.IP.To16(), hBinA.CurrentRemote.IP.To16(), "Host B remote ip is wrong in control A")
assert.Equal(t, addrA.IP.To16(), hAinB.CurrentRemote.IP.To16(), "Host A remote ip is wrong in control B")
assert.Equal(t, addrB.Port, int(hBinA.CurrentRemote.Port), "Host B remote port is wrong in control A")
assert.Equal(t, addrA.Port, int(hAinB.CurrentRemote.Port), "Host A remote port is wrong in control B")
assert.Equal(t, addrB, hBinA.CurrentRemote, "Host B remote is wrong in control A")
assert.Equal(t, addrA, hAinB.CurrentRemote, "Host A remote is wrong in control B")
// Check that our indexes match
assert.Equal(t, hBinA.LocalIndex, hAinB.RemoteIndex, "Host B local index does not match host A remote index")
@@ -289,13 +178,13 @@ func assertHostInfoPair(t *testing.T, addrA, addrB *net.UDPAddr, vpnIpA, vpnIpB
//checkIndexes("hmB", hmB, hAinB)
}
func assertUdpPacket(t *testing.T, expected, b []byte, fromIp, toIp net.IP, fromPort, toPort uint16) {
func assertUdpPacket(t *testing.T, expected, b []byte, fromIp, toIp netip.Addr, fromPort, toPort uint16) {
packet := gopacket.NewPacket(b, layers.LayerTypeIPv4, gopacket.Lazy)
v4 := packet.Layer(layers.LayerTypeIPv4).(*layers.IPv4)
assert.NotNil(t, v4, "No ipv4 data found")
assert.Equal(t, fromIp, v4.SrcIP, "Source ip was incorrect")
assert.Equal(t, toIp, v4.DstIP, "Dest ip was incorrect")
assert.Equal(t, fromIp.AsSlice(), []byte(v4.SrcIP), "Source ip was incorrect")
assert.Equal(t, toIp.AsSlice(), []byte(v4.DstIP), "Dest ip was incorrect")
udp := packet.Layer(layers.LayerTypeUDP).(*layers.UDP)
assert.NotNil(t, udp, "No udp data found")
@@ -313,7 +202,8 @@ func NewTestLogger() *logrus.Logger {
v := os.Getenv("TEST_LOGS")
if v == "" {
l.SetOutput(ioutil.Discard)
l.SetOutput(io.Discard)
l.SetLevel(logrus.PanicLevel)
return l
}

e2e/router/hostmap.go

@@ -0,0 +1,145 @@
//go:build e2e_testing
// +build e2e_testing
package router
import (
"fmt"
"net/netip"
"sort"
"strings"
"github.com/slackhq/nebula"
)
type edge struct {
from string
to string
dual bool
}
func renderHostmaps(controls ...*nebula.Control) string {
var lines []*edge
r := "graph TB\n"
for _, c := range controls {
sr, se := renderHostmap(c)
r += sr
for _, e := range se {
add := true
// Collapse duplicate edges into a bi-directionally connected edge
for _, ge := range lines {
if e.to == ge.from && e.from == ge.to {
add = false
ge.dual = true
break
}
}
if add {
lines = append(lines, e)
}
}
}
for _, line := range lines {
if line.dual {
r += fmt.Sprintf("\t%v <--> %v\n", line.from, line.to)
} else {
r += fmt.Sprintf("\t%v --> %v\n", line.from, line.to)
}
}
return r
}
func renderHostmap(c *nebula.Control) (string, []*edge) {
var lines []string
var globalLines []*edge
clusterName := strings.Trim(c.GetCert().Details.Name, " ")
clusterVpnIp := c.GetCert().Details.Ips[0].IP
r := fmt.Sprintf("\tsubgraph %s[\"%s (%s)\"]\n", clusterName, clusterName, clusterVpnIp)
hm := c.GetHostmap()
hm.RLock()
defer hm.RUnlock()
// Draw the vpn to index nodes
r += fmt.Sprintf("\t\tsubgraph %s.hosts[\"Hosts (vpn ip to index)\"]\n", clusterName)
hosts := sortedHosts(hm.Hosts)
for _, vpnIp := range hosts {
hi := hm.Hosts[vpnIp]
r += fmt.Sprintf("\t\t\t%v.%v[\"%v\"]\n", clusterName, vpnIp, vpnIp)
lines = append(lines, fmt.Sprintf("%v.%v --> %v.%v", clusterName, vpnIp, clusterName, hi.GetLocalIndex()))
rs := hi.GetRelayState()
for _, relayIp := range rs.CopyRelayIps() {
lines = append(lines, fmt.Sprintf("%v.%v --> %v.%v", clusterName, vpnIp, clusterName, relayIp))
}
for _, relayIp := range rs.CopyRelayForIdxs() {
lines = append(lines, fmt.Sprintf("%v.%v --> %v.%v", clusterName, vpnIp, clusterName, relayIp))
}
}
r += "\t\tend\n"
// Draw the relay hostinfos
if len(hm.Relays) > 0 {
r += fmt.Sprintf("\t\tsubgraph %s.relays[\"Relays (relay index to hostinfo)\"]\n", clusterName)
for relayIndex, hi := range hm.Relays {
r += fmt.Sprintf("\t\t\t%v.%v[\"%v\"]\n", clusterName, relayIndex, relayIndex)
lines = append(lines, fmt.Sprintf("%v.%v --> %v.%v", clusterName, relayIndex, clusterName, hi.GetLocalIndex()))
}
r += "\t\tend\n"
}
// Draw the local index to relay or remote index nodes
r += fmt.Sprintf("\t\tsubgraph indexes.%s[\"Indexes (index to hostinfo)\"]\n", clusterName)
indexes := sortedIndexes(hm.Indexes)
for _, idx := range indexes {
hi, ok := hm.Indexes[idx]
if ok {
r += fmt.Sprintf("\t\t\t%v.%v[\"%v (%v)\"]\n", clusterName, idx, idx, hi.GetVpnIp())
remoteClusterName := strings.Trim(hi.GetCert().Details.Name, " ")
globalLines = append(globalLines, &edge{from: fmt.Sprintf("%v.%v", clusterName, idx), to: fmt.Sprintf("%v.%v", remoteClusterName, hi.GetRemoteIndex())})
_ = hi
}
}
r += "\t\tend\n"
// Add the edges inside this host
for _, line := range lines {
r += fmt.Sprintf("\t\t%v\n", line)
}
r += "\tend\n"
return r, globalLines
}
func sortedHosts(hosts map[netip.Addr]*nebula.HostInfo) []netip.Addr {
keys := make([]netip.Addr, 0, len(hosts))
for key := range hosts {
keys = append(keys, key)
}
sort.SliceStable(keys, func(i, j int) bool {
return keys[i].Compare(keys[j]) > 0
})
return keys
}
func sortedIndexes(indexes map[uint32]*nebula.HostInfo) []uint32 {
keys := make([]uint32, 0, len(indexes))
for key := range indexes {
keys = append(keys, key)
}
sort.SliceStable(keys, func(i, j int) bool {
return keys[i] > keys[j]
})
return keys
}


@@ -4,62 +4,158 @@
package router
import (
"context"
"fmt"
"net"
"net/netip"
"os"
"path/filepath"
"reflect"
"strconv"
"sort"
"strings"
"sync"
"testing"
"time"
"github.com/google/gopacket"
"github.com/google/gopacket/layers"
"github.com/slackhq/nebula"
"github.com/slackhq/nebula/header"
"github.com/slackhq/nebula/udp"
"golang.org/x/exp/maps"
)
type R struct {
// Simple map of the ip:port registered on a control to the control
// Basically a router, right?
controls map[string]*nebula.Control
controls map[netip.AddrPort]*nebula.Control
// A map for inbound packets for a control that doesn't know about this address
inNat map[string]*nebula.Control
inNat map[netip.AddrPort]*nebula.Control
// A last used map, if an inbound packet hit the inNat map then
// all return packets should use the same last used inbound address for the outbound sender
// map[from address + ":" + to address] => ip:port to rewrite in the udp packet to receiver
outNat map[string]net.UDPAddr
outNat map[string]netip.AddrPort
// A map of vpn ip to the nebula control it belongs to
vpnControls map[netip.Addr]*nebula.Control
ignoreFlows []ignoreFlow
flow []flowEntry
// A set of additional mermaid graphs to draw in the flow log markdown file
// Currently consisting only of hostmap renders
additionalGraphs []mermaidGraph
// All interactions are locked to help serialize behavior
sync.Mutex
fn string
cancelRender context.CancelFunc
t testing.TB
}
type ignoreFlow struct {
tun NullBool
messageType header.MessageType
subType header.MessageSubType
//from
//to
}
type mermaidGraph struct {
title string
content string
}
type NullBool struct {
HasValue bool
IsTrue bool
}
type flowEntry struct {
note string
packet *packet
}
type packet struct {
from *nebula.Control
to *nebula.Control
packet *udp.Packet
tun bool // a packet pulled off a tun device
rx bool // the packet was received by a udp device
}
func (p *packet) WasReceived() {
if p != nil {
p.rx = true
}
}
type ExitType int
const (
// Keeps routing, the function will get called again on the next packet
// KeepRouting the function will get called again on the next packet
KeepRouting ExitType = 0
// Does not route this packet and exits immediately
// ExitNow does not route this packet and exits immediately
ExitNow ExitType = 1
// Routes this packet and exits immediately afterwards
// RouteAndExit routes this packet and exits immediately afterwards
RouteAndExit ExitType = 2
)
type ExitFunc func(packet *udp.Packet, receiver *nebula.Control) ExitType
func NewR(controls ...*nebula.Control) *R {
r := &R{
controls: make(map[string]*nebula.Control),
inNat: make(map[string]*nebula.Control),
outNat: make(map[string]net.UDPAddr),
// NewR creates a new router to pass packets in a controlled fashion between the provided controllers.
// The packet flow will be recorded in a file within the mermaid directory under the same name as the test.
// Renders will occur automatically, roughly every 100ms, until a call to RenderFlow() is made
func NewR(t testing.TB, controls ...*nebula.Control) *R {
ctx, cancel := context.WithCancel(context.Background())
if err := os.MkdirAll("mermaid", 0755); err != nil {
panic(err)
}
r := &R{
controls: make(map[netip.AddrPort]*nebula.Control),
vpnControls: make(map[netip.Addr]*nebula.Control),
inNat: make(map[netip.AddrPort]*nebula.Control),
outNat: make(map[string]netip.AddrPort),
flow: []flowEntry{},
ignoreFlows: []ignoreFlow{},
fn: filepath.Join("mermaid", fmt.Sprintf("%s.md", t.Name())),
t: t,
cancelRender: cancel,
}
// Try to remove our render file
os.Remove(r.fn)
for _, c := range controls {
addr := c.GetUDPAddr()
if _, ok := r.controls[addr]; ok {
panic("Duplicate listen address: " + addr)
panic("Duplicate listen address: " + addr.String())
}
r.vpnControls[c.GetVpnIp()] = c
r.controls[addr] = c
}
// Spin the renderer in case we go nuts and the test never completes
go func() {
clockSource := time.NewTicker(time.Millisecond * 100)
defer clockSource.Stop()
for {
select {
case <-ctx.Done():
return
case <-clockSource.C:
r.renderHostmaps("clock tick")
r.renderFlow()
}
}
}()
return r
}
@@ -67,17 +163,234 @@ func NewR(controls ...*nebula.Control) *R {
// It does not look at the addr attached to the instance.
// If a route is used, this will behave like a NAT for the return path.
// Rewriting the source ip:port to what was last sent to from the origin
func (r *R) AddRoute(ip net.IP, port uint16, c *nebula.Control) {
func (r *R) AddRoute(ip netip.Addr, port uint16, c *nebula.Control) {
r.Lock()
defer r.Unlock()
inAddr := net.JoinHostPort(ip.String(), fmt.Sprintf("%v", port))
inAddr := netip.AddrPortFrom(ip, port)
if _, ok := r.inNat[inAddr]; ok {
panic("Duplicate listen address inNat: " + inAddr)
panic("Duplicate listen address inNat: " + inAddr.String())
}
r.inNat[inAddr] = c
}
// RenderFlow renders the packet flow seen up until now and stops further automatic renders from happening.
func (r *R) RenderFlow() {
r.cancelRender()
r.renderFlow()
}
// CancelFlowLogs stops flow logs from being tracked and destroys any logs already collected
func (r *R) CancelFlowLogs() {
r.cancelRender()
r.flow = nil
}
func (r *R) renderFlow() {
if r.flow == nil {
return
}
f, err := os.OpenFile(r.fn, os.O_CREATE|os.O_TRUNC|os.O_RDWR, 0644)
if err != nil {
panic(err)
}
var participants = map[netip.AddrPort]struct{}{}
var participantsVals []string
fmt.Fprintln(f, "```mermaid")
fmt.Fprintln(f, "sequenceDiagram")
// Assemble participants
for _, e := range r.flow {
if e.packet == nil {
continue
}
addr := e.packet.from.GetUDPAddr()
if _, ok := participants[addr]; ok {
continue
}
participants[addr] = struct{}{}
sanAddr := strings.Replace(addr.String(), ":", "-", 1)
participantsVals = append(participantsVals, sanAddr)
fmt.Fprintf(
f, " participant %s as Nebula: %s<br/>UDP: %s\n",
sanAddr, e.packet.from.GetVpnIp(), sanAddr,
)
}
if len(participantsVals) > 2 {
// Get the first and last participantVals for notes
participantsVals = []string{participantsVals[0], participantsVals[len(participantsVals)-1]}
}
// Print packets
h := &header.H{}
for _, e := range r.flow {
if e.packet == nil {
//fmt.Fprintf(f, " note over %s: %s\n", strings.Join(participantsVals, ", "), e.note)
continue
}
p := e.packet
if p.tun {
fmt.Fprintln(f, r.formatUdpPacket(p))
} else {
if err := h.Parse(p.packet.Data); err != nil {
panic(err)
}
line := "--x"
if p.rx {
line = "->>"
}
fmt.Fprintf(f,
" %s%s%s: %s(%s), index %v, counter: %v\n",
strings.Replace(p.from.GetUDPAddr().String(), ":", "-", 1),
line,
strings.Replace(p.to.GetUDPAddr().String(), ":", "-", 1),
h.TypeName(), h.SubTypeName(), h.RemoteIndex, h.MessageCounter,
)
}
}
fmt.Fprintln(f, "```")
for _, g := range r.additionalGraphs {
fmt.Fprintf(f, "## %s\n", g.title)
fmt.Fprintln(f, "```mermaid")
fmt.Fprintln(f, g.content)
fmt.Fprintln(f, "```")
}
}
// IgnoreFlow tells the router to stop recording future flows that matches the provided criteria.
// messageType and subType will target nebula underlay packets while tun will target nebula overlay packets
// NOTE: This is a very broad system, if you set tun to true then no more tun traffic will be rendered
func (r *R) IgnoreFlow(messageType header.MessageType, subType header.MessageSubType, tun NullBool) {
r.Lock()
defer r.Unlock()
r.ignoreFlows = append(r.ignoreFlows, ignoreFlow{
tun,
messageType,
subType,
})
}
func (r *R) RenderHostmaps(title string, controls ...*nebula.Control) {
r.Lock()
defer r.Unlock()
s := renderHostmaps(controls...)
if len(r.additionalGraphs) > 0 {
lastGraph := r.additionalGraphs[len(r.additionalGraphs)-1]
if lastGraph.content == s && lastGraph.title == title {
// Ignore this rendering if it matches the last rendering added
// This is useful if you want to track rendering changes
return
}
}
r.additionalGraphs = append(r.additionalGraphs, mermaidGraph{
title: title,
content: s,
})
}
func (r *R) renderHostmaps(title string) {
c := maps.Values(r.controls)
sort.SliceStable(c, func(i, j int) bool {
return c[i].GetVpnIp().Compare(c[j].GetVpnIp()) > 0
})
s := renderHostmaps(c...)
if len(r.additionalGraphs) > 0 {
lastGraph := r.additionalGraphs[len(r.additionalGraphs)-1]
if lastGraph.content == s {
// Ignore this rendering if it matches the last rendering added
// This is useful if you want to track rendering changes
return
}
}
r.additionalGraphs = append(r.additionalGraphs, mermaidGraph{
title: title,
content: s,
})
}
// InjectFlow can be used to record packet flow if the test is handling the routing on its own.
// The packet is assumed to have been received
func (r *R) InjectFlow(from, to *nebula.Control, p *udp.Packet) {
r.Lock()
defer r.Unlock()
r.unlockedInjectFlow(from, to, p, false)
}
func (r *R) Log(arg ...any) {
if r.flow == nil {
return
}
r.Lock()
r.flow = append(r.flow, flowEntry{note: fmt.Sprint(arg...)})
r.t.Log(arg...)
r.Unlock()
}
func (r *R) Logf(format string, arg ...any) {
if r.flow == nil {
return
}
r.Lock()
r.flow = append(r.flow, flowEntry{note: fmt.Sprintf(format, arg...)})
r.t.Logf(format, arg...)
r.Unlock()
}
// unlockedInjectFlow is used by the router to record a packet has been transmitted, the packet is returned and
// should be marked as received AFTER it has been placed on the receivers channel.
// If flow logs have been disabled this function will return nil
func (r *R) unlockedInjectFlow(from, to *nebula.Control, p *udp.Packet, tun bool) *packet {
if r.flow == nil {
return nil
}
r.renderHostmaps(fmt.Sprintf("Packet %v", len(r.flow)))
if len(r.ignoreFlows) > 0 {
var h header.H
err := h.Parse(p.Data)
if err != nil {
panic(err)
}
for _, i := range r.ignoreFlows {
if !tun {
if i.messageType == h.Type && i.subType == h.Subtype {
return nil
}
} else if i.tun.HasValue && i.tun.IsTrue {
return nil
}
}
}
fp := &packet{
from: from,
to: to,
packet: p.Copy(),
tun: tun,
}
r.flow = append(r.flow, flowEntry{packet: fp})
return fp
}
// OnceFrom will route a single packet from sender then return
// If the router doesn't have the nebula controller for that address, we panic
func (r *R) OnceFrom(sender *nebula.Control) {
@@ -96,25 +409,85 @@ func (r *R) RouteUntilTxTun(sender *nebula.Control, receiver *nebula.Control) []
select {
// Maybe we already have something on the tun for us
case b := <-tunTx:
r.Lock()
np := udp.Packet{Data: make([]byte, len(b))}
copy(np.Data, b)
r.unlockedInjectFlow(receiver, receiver, &np, true)
r.Unlock()
return b
// Nope, lets push the sender along
case p := <-udpTx:
outAddr := sender.GetUDPAddr()
r.Lock()
inAddr := net.JoinHostPort(p.ToIp.String(), fmt.Sprintf("%v", p.ToPort))
c := r.getControl(outAddr, inAddr, p)
c := r.getControl(sender.GetUDPAddr(), p.To, p)
if c == nil {
r.Unlock()
panic("No control for udp tx")
}
fp := r.unlockedInjectFlow(sender, c, p, false)
c.InjectUDPPacket(p)
fp.WasReceived()
r.Unlock()
}
}
}
// RouteForAllUntilTxTun will route for everyone and return when a packet is seen on receivers tun
// If the router doesn't have the nebula controller for that address, we panic
func (r *R) RouteForAllUntilTxTun(receiver *nebula.Control) []byte {
sc := make([]reflect.SelectCase, len(r.controls)+1)
cm := make([]*nebula.Control, len(r.controls)+1)
i := 0
sc[i] = reflect.SelectCase{
Dir: reflect.SelectRecv,
Chan: reflect.ValueOf(receiver.GetTunTxChan()),
Send: reflect.Value{},
}
cm[i] = receiver
i++
for _, c := range r.controls {
sc[i] = reflect.SelectCase{
Dir: reflect.SelectRecv,
Chan: reflect.ValueOf(c.GetUDPTxChan()),
Send: reflect.Value{},
}
cm[i] = c
i++
}
for {
x, rx, _ := reflect.Select(sc)
r.Lock()
if x == 0 {
// we are the tun tx, we can exit
p := rx.Interface().([]byte)
np := udp.Packet{Data: make([]byte, len(p))}
copy(np.Data, p)
r.unlockedInjectFlow(cm[x], cm[x], &np, true)
r.Unlock()
return p
} else {
// we are a udp tx, route and continue
p := rx.Interface().(*udp.Packet)
c := r.getControl(cm[x].GetUDPAddr(), p.To, p)
if c == nil {
r.Unlock()
panic("No control for udp tx")
}
fp := r.unlockedInjectFlow(cm[x], c, p, false)
c.InjectUDPPacket(p)
fp.WasReceived()
}
r.Unlock()
}
}
// RouteExitFunc will call the whatDo func with each udp packet from sender.
// whatDo can return:
// - exitNow: the packet will not be routed and this call will return immediately
@@ -129,12 +502,10 @@ func (r *R) RouteExitFunc(sender *nebula.Control, whatDo ExitFunc) {
panic(err)
}
outAddr := sender.GetUDPAddr()
inAddr := net.JoinHostPort(p.ToIp.String(), fmt.Sprintf("%v", p.ToPort))
receiver := r.getControl(outAddr, inAddr, p)
receiver := r.getControl(sender.GetUDPAddr(), p.To, p)
if receiver == nil {
r.Unlock()
panic("Can't route for host: " + inAddr)
panic("Can't RouteExitFunc for host: " + p.To.String())
}
e := whatDo(p, receiver)
@@ -144,12 +515,16 @@ func (r *R) RouteExitFunc(sender *nebula.Control, whatDo ExitFunc) {
return
case RouteAndExit:
fp := r.unlockedInjectFlow(sender, receiver, p, false)
receiver.InjectUDPPacket(p)
fp.WasReceived()
r.Unlock()
return
case KeepRouting:
fp := r.unlockedInjectFlow(sender, receiver, p, false)
receiver.InjectUDPPacket(p)
fp.WasReceived()
default:
panic(fmt.Sprintf("Unknown exitFunc return: %v", e))
@@ -175,16 +550,44 @@ func (r *R) RouteUntilAfterMsgType(sender *nebula.Control, msgType header.Messag
})
}
func (r *R) RouteForAllUntilAfterMsgTypeTo(receiver *nebula.Control, msgType header.MessageType, subType header.MessageSubType) {
h := &header.H{}
r.RouteForAllExitFunc(func(p *udp.Packet, r *nebula.Control) ExitType {
if r != receiver {
return KeepRouting
}
if err := h.Parse(p.Data); err != nil {
panic(err)
}
if h.Type == msgType && h.Subtype == subType {
return RouteAndExit
}
return KeepRouting
})
}
func (r *R) InjectUDPPacket(sender, receiver *nebula.Control, packet *udp.Packet) {
r.Lock()
defer r.Unlock()
fp := r.unlockedInjectFlow(sender, receiver, packet, false)
receiver.InjectUDPPacket(packet)
fp.WasReceived()
}
// RouteForUntilAfterToAddr will route for sender and return only after it sees and sends a packet destined for toAddr
// finish can be any of the exitType values except `keepRouting`, the default value is `routeAndExit`
// If the router doesn't have the nebula controller for that address, we panic
func (r *R) RouteForUntilAfterToAddr(sender *nebula.Control, toAddr *net.UDPAddr, finish ExitType) {
func (r *R) RouteForUntilAfterToAddr(sender *nebula.Control, toAddr netip.AddrPort, finish ExitType) {
if finish == KeepRouting {
finish = RouteAndExit
}
r.RouteExitFunc(sender, func(p *udp.Packet, r *nebula.Control) ExitType {
if p.ToIp.Equal(toAddr.IP) && p.ToPort == uint16(toAddr.Port) {
if p.To == toAddr {
return finish
}
@@ -218,13 +621,10 @@ func (r *R) RouteForAllExitFunc(whatDo ExitFunc) {
r.Lock()
p := rx.Interface().(*udp.Packet)
outAddr := cm[x].GetUDPAddr()
inAddr := net.JoinHostPort(p.ToIp.String(), fmt.Sprintf("%v", p.ToPort))
receiver := r.getControl(outAddr, inAddr, p)
receiver := r.getControl(cm[x].GetUDPAddr(), p.To, p)
if receiver == nil {
r.Unlock()
panic("Can't route for host: " + inAddr)
panic("Can't RouteForAllExitFunc for host: " + p.To.String())
}
e := whatDo(p, receiver)
@@ -234,12 +634,16 @@ func (r *R) RouteForAllExitFunc(whatDo ExitFunc) {
return
case RouteAndExit:
fp := r.unlockedInjectFlow(cm[x], receiver, p, false)
receiver.InjectUDPPacket(p)
fp.WasReceived()
r.Unlock()
return
case KeepRouting:
fp := r.unlockedInjectFlow(cm[x], receiver, p, false)
receiver.InjectUDPPacket(p)
fp.WasReceived()
default:
panic(fmt.Sprintf("Unknown exitFunc return: %v", e))
@@ -281,12 +685,10 @@ func (r *R) FlushAll() {
p := rx.Interface().(*udp.Packet)
outAddr := cm[x].GetUDPAddr()
inAddr := net.JoinHostPort(p.ToIp.String(), fmt.Sprintf("%v", p.ToPort))
receiver := r.getControl(outAddr, inAddr, p)
receiver := r.getControl(cm[x].GetUDPAddr(), p.To, p)
if receiver == nil {
r.Unlock()
panic("Can't route for host: " + inAddr)
panic("Can't FlushAll for host: " + p.To.String())
}
r.Unlock()
}
@@ -294,30 +696,45 @@ func (r *R) FlushAll() {
// getControl performs or seeds NAT translation and returns the control for toAddr, p from fields may change
// This is an internal router function, the caller must hold the lock
func (r *R) getControl(fromAddr, toAddr string, p *udp.Packet) *nebula.Control {
if newAddr, ok := r.outNat[fromAddr+":"+toAddr]; ok {
p.FromIp = newAddr.IP
p.FromPort = uint16(newAddr.Port)
func (r *R) getControl(fromAddr, toAddr netip.AddrPort, p *udp.Packet) *nebula.Control {
if newAddr, ok := r.outNat[fromAddr.String()+":"+toAddr.String()]; ok {
p.From = newAddr
}
c, ok := r.inNat[toAddr]
if ok {
sHost, sPort, err := net.SplitHostPort(toAddr)
if err != nil {
panic(err)
}
port, err := strconv.Atoi(sPort)
if err != nil {
panic(err)
}
r.outNat[c.GetUDPAddr()+":"+fromAddr] = net.UDPAddr{
IP: net.ParseIP(sHost),
Port: port,
}
r.outNat[c.GetUDPAddr().String()+":"+fromAddr.String()] = toAddr
return c
}
return r.controls[toAddr]
}
func (r *R) formatUdpPacket(p *packet) string {
packet := gopacket.NewPacket(p.packet.Data, layers.LayerTypeIPv4, gopacket.Lazy)
v4 := packet.Layer(layers.LayerTypeIPv4).(*layers.IPv4)
if v4 == nil {
panic("not an ipv4 packet")
}
from := "unknown"
srcAddr, _ := netip.AddrFromSlice(v4.SrcIP)
if c, ok := r.vpnControls[srcAddr]; ok {
from = c.GetUDPAddr().String()
}
udp := packet.Layer(layers.LayerTypeUDP).(*layers.UDP)
if udp == nil {
panic("not a udp packet")
}
data := packet.ApplicationLayer()
return fmt.Sprintf(
" %s-->>%s: src port: %v<br/>dest port: %v<br/>data: \"%v\"\n",
strings.Replace(from, ":", "-", 1),
strings.Replace(p.to.GetUDPAddr().String(), ":", "-", 1),
udp.SrcPort,
udp.DstPort,
string(data.Payload()),
)
}


@@ -11,7 +11,7 @@ pki:
#blocklist:
# - c99d4e650533b92061b09918e838a5a0a6aaee21eed1d12fd937682865936c72
# disconnect_invalid is a toggle to force a client to be disconnected if the certificate is expired or invalid.
#disconnect_invalid: false
#disconnect_invalid: true
# The static host map defines a set of hosts with fixed IP addresses on the internet (or any network).
# A host can have multiple fixed IP addresses defined here, and nebula will try each when establishing a tunnel.
@@ -21,6 +21,19 @@ pki:
static_host_map:
"192.168.100.1": ["100.64.22.11:4242"]
# The static_map config stanza can be used to configure how the static_host_map behaves.
#static_map:
# cadence determines how frequently DNS is re-queried for updated IP addresses when a static_host_map entry contains
# a DNS name.
#cadence: 30s
# network determines the type of IP addresses to ask the DNS server for. The default is "ip4" because nodes typically
# do not know their public IPv4 address. Connecting to the Lighthouse via IPv4 allows the Lighthouse to detect the
# public address. Other valid options are "ip6" and "ip" (returns both.)
#network: ip4
# lookup_timeout is the DNS query timeout.
#lookup_timeout: 250ms
lighthouse:
# am_lighthouse is used to enable lighthouse functionality for a node. This should ONLY be true on nodes
@@ -47,8 +60,9 @@ lighthouse:
# allowed. You can provide CIDRs here with `true` to allow and `false` to
# deny. The most specific CIDR rule applies to each remote. If all rules are
# "allow", the default will be "deny", and vice-versa. If both "allow" and
# "deny" rules are present, then you MUST set a rule for "0.0.0.0/0" as the
# default.
# "deny" IPv4 rules are present, then you MUST set a rule for "0.0.0.0/0" as
# the default. Similarly if both "allow" and "deny" IPv6 rules are present,
# then you MUST set a rule for "::/0" as the default.
#remote_allow_list:
# Example to block IPs from this subnet from being used for remote IPs.
#"172.16.0.0/12": false
@@ -58,7 +72,7 @@ lighthouse:
#"10.0.0.0/8": false
#"10.42.42.0/24": true
# EXPERIMENTAL: This option my change or disappear in the future.
# EXPERIMENTAL: This option may change or disappear in the future.
# Optionally allows the definition of remote_allow_list blocks
# specific to an inside VPN IP CIDR.
#remote_allow_ranges:
@@ -81,10 +95,32 @@ lighthouse:
# Example to only advertise this subnet to the lighthouse.
#"10.0.0.0/8": true
# advertise_addrs are routable addresses that will be included along with discovered addresses to report to the
# lighthouse, the format is "ip:port". `port` can be `0`, in which case the actual listening port will be used in its
# place, useful if `listen.port` is set to 0.
# This option is mainly useful when there are static ip addresses the host can be reached at that nebula can not
# typically discover on its own. Examples being port forwarding or multiple paths to the internet.
#advertise_addrs:
#- "1.1.1.1:4242"
#- "1.2.3.4:0" # port will be replaced with the real listening port
# EXPERIMENTAL: This option may change or disappear in the future.
# This setting allows us to "guess" what the remote might be for a host
# while we wait for the lighthouse response.
#calculated_remotes:
# For any Nebula IPs in 10.0.10.0/24, this will apply the mask and add
# the calculated IP as an initial remote (while we wait for the response
# from the lighthouse). Both CIDRs must have the same mask size.
# For example, Nebula IP 10.0.10.123 will have a calculated remote of
# 192.168.1.123
#10.0.10.0/24:
#- mask: 192.168.1.0/24
# port: 4242
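The masking described in the calculated_remotes comment is essentially "keep the host bits of the nebula ip and splice them onto the configured network". A hypothetical illustration of that arithmetic for the IPv4 example above; the function name calculateRemote is made up here and this is not Nebula's implementation.
```
package main

import (
	"encoding/binary"
	"fmt"
	"net/netip"
)

// calculateRemote keeps the host bits of vpnIp and replaces the network bits
// with those of mask, mirroring the 10.0.10.123 -> 192.168.1.123 example above.
func calculateRemote(vpnIp netip.Addr, mask netip.Prefix, port uint16) netip.AddrPort {
	ip := binary.BigEndian.Uint32(vpnIp.AsSlice())
	network := binary.BigEndian.Uint32(mask.Addr().AsSlice())
	hostMask := uint32(1)<<(32-mask.Bits()) - 1
	calculated := (network &^ hostMask) | (ip & hostMask)
	var out [4]byte
	binary.BigEndian.PutUint32(out[:], calculated)
	return netip.AddrPortFrom(netip.AddrFrom4(out), port)
}

func main() {
	vpn := netip.MustParseAddr("10.0.10.123")
	mask := netip.MustParsePrefix("192.168.1.0/24")
	fmt.Println(calculateRemote(vpn, mask, 4242)) // 192.168.1.123:4242
}
```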
# Port Nebula will be listening on. The default here is 4242. For a lighthouse node, the port should be defined,
# however using port 0 will dynamically assign a port and is recommended for roaming nodes.
listen:
# To listen on both any ipv4 and ipv6 use "[::]"
# To listen on both any ipv4 and ipv6 use "::"
host: 0.0.0.0
port: 4242
# Sets the max number of packets to pull from the kernel for each syscall (under systems that support recvmmsg)
@@ -96,14 +132,18 @@ listen:
# max, net.core.rmem_max and net.core.wmem_max
#read_buffer: 10485760
#write_buffer: 10485760
# By default, Nebula replies to packets it has no tunnel for with a "recv_error" packet. This packet helps speed up reconnection
# in the case that Nebula on either side did not shut down cleanly. This response can be abused as a way to discover if Nebula is running
# on a host though. This option lets you configure if you want to send "recv_error" packets always, never, or only to private network remotes.
# valid values: always, never, private
# This setting is reloadable.
#send_recv_error: always
# EXPERIMENTAL: This option is currently only supported on linux and may
# change in future minor releases.
#
# Routines is the number of thread pairs to run that consume from the tun and UDP queues.
# Currently, this defaults to 1 which means we have 1 tun queue reader and 1
# UDP queue reader. Setting this above one will set IFF_MULTI_QUEUE on the tun
# device and SO_REUSEPORT on the UDP socket to allow multiple queues.
# This option is only supported on Linux.
#routines: 1
punchy:
@@ -115,20 +155,23 @@ punchy:
# Default is false
#respond: true
# delays a punch response for misbehaving NATs, default is 1 second, respond must be true to take effect
# delays a punch response for misbehaving NATs, default is 1 second.
#delay: 1s
# set the delay before attempting punchy.respond. Default is 5 seconds. respond must be true to take effect.
#respond_delay: 5s
# Cipher allows you to choose between the available ciphers for your network. Options are chachapoly or aes
# IMPORTANT: this value must be identical on ALL NODES/LIGHTHOUSES. We do not/will not support use of different ciphers simultaneously!
#cipher: chachapoly
#cipher: aes
# Preferred ranges is used to define a hint about the local network ranges, which speeds up discovering the fastest
# path to a network adjacent nebula node.
# NOTE: the previous option "local_range" only allowed definition of a single range
# and has been deprecated for "preferred_ranges"
# This setting is reloadable.
#preferred_ranges: ["172.16.0.0/24"]
# sshd can expose informational and administrative functions via ssh this is a
# sshd can expose informational and administrative functions via ssh. This can expose informational and administrative
# functions, and allows manual tweaking of various network settings when debugging or testing.
#sshd:
# Toggles the feature
#enabled: true
@@ -137,18 +180,37 @@ punchy:
# A file containing the ssh host private key to use
# A decent way to generate one: ssh-keygen -t ed25519 -f ssh_host_ed25519_key -N "" < /dev/null
#host_key: ./ssh_host_ed25519_key
# A file containing a list of authorized public keys
# Authorized users and their public keys
#authorized_users:
#- user: steeeeve
# keys can be an array of strings or single string
#keys:
#- "ssh public key string"
# Trusted SSH CA public keys. These are the public keys of the CAs that are allowed to sign SSH keys for access.
#trusted_cas:
#- "ssh public key string"
# EXPERIMENTAL: relay support for networks that can't establish direct connections.
relay:
# Relays are a list of Nebula IPs that peers can use to relay packets to me.
# IPs in this list must have am_relay set to true in their configs, otherwise
# they will reject relay requests.
#relays:
#- 192.168.100.1
#- <other Nebula VPN IPs of hosts used as relays to access me>
# Set am_relay to true to permit other hosts to list my IP in their relays config. Default false.
am_relay: false
# Set use_relays to false to prevent this instance from attempting to establish connections through relays.
# default true
use_relays: true
# Configure the private interface. Note: addr is baked into the nebula certificate
tun:
# When tun is disabled, a lighthouse can be started without a local tun interface (and therefore without root)
disabled: false
# Name of the device
# Name of the device. If not set, a default will be chosen by the OS.
# For macOS: if set, must be in the form `utun[0-9]+`.
# For NetBSD: Required to be set, must be in the form `tun[0-9]+`
dev: nebula1
# Toggles forwarding of local broadcast packets, the address of which depends on the ip/mask encoded in pki.cert
drop_local_broadcast: false
@@ -158,26 +220,37 @@ tun:
tx_queue: 500
# Default MTU for every packet, safe setting is (and the default) 1300 for internet based traffic
mtu: 1300
# Route based MTU overrides: if you have known vpn ip paths that can support larger MTUs you can increase/decrease them here
routes:
#- mtu: 8800
# route: 10.0.0.0/16
# Unsafe routes allows you to route traffic over nebula to non-nebula nodes
# Unsafe routes should be avoided unless you have hosts/services that cannot run nebula
# NOTE: The nebula certificate of the "via" node *MUST* have the "route" defined as a subnet in its certificate
# `mtu` will default to tun mtu if this option is not specified
# `metric` will default to 0 if this option is not specified
# `mtu`: will default to tun mtu if this option is not specified
# `metric`: will default to 0 if this option is not specified
# `install`: will default to true, controls whether this route is installed in the systems routing table.
# This setting is reloadable.
unsafe_routes:
#- route: 172.16.1.0/24
# via: 192.168.100.99
# mtu: 1300
# metric: 100
# install: true
# On linux only, set to true to manage unsafe routes directly on the system route table with gateway routes instead of
# in nebula configuration files. Default false, not reloadable.
#use_system_route_table: false
# TODO
# Configure logging level
logging:
# panic, fatal, error, warning, info, or debug. Default is info
# panic, fatal, error, warning, info, or debug. Default is info and is reloadable.
#NOTE: Debug mode can log remotely controlled/untrusted data which can quickly fill a disk in some
# scenarios. Debug logging is also CPU intensive and will decrease performance overall.
# Only enable debug logging while actively investigating an issue.
level: info
# json or text formats currently available. Default is text
format: text
@@ -215,64 +288,58 @@ logging:
# e.g.: `lighthouse.rx.HostQuery`
#lighthouse_metrics: false
# Handshake Manger Settings
handshakes:
# Handshake Manager Settings
#handshakes:
# Handshakes are sent to all known addresses at each interval with a linear backoff,
# Wait try_interval after the 1st attempt, 2 * try_interval after the 2nd, etc, until the handshake is older than timeout
# A 100ms interval with the default 10 retries will give a handshake 5.5 seconds to resolve before timing out
#try_interval: 100ms
#retries: 20
# query_buffer is the size of the buffer channel for querying lighthouses
#query_buffer: 64
# trigger_buffer is the size of the buffer channel for quickly sending handshakes
# after receiving the response for lighthouse queries
#trigger_buffer: 64
# psk can be used to mask the contents of handshakes and makes handshaking with unintended recipients more difficult
# all settings respond to a reload
psk:
# mode defines how the pre shared keys can be used in a handshake
# `none` (the default) does not send or receive using a psk. Ideally `enforced` is used
# `transitional-accepting` will send handshakes without using a psk and can receive handshakes using a psk we know about
# `transitional-sending` will send handshakes using a psk but will still accept handshakes without them
# `enforced` enforces the use of a psk for all tunnels. Any node not also using `enforced` or `transitional-sending` can not handshake with us
#
# When moving from `none` to `enforced` you will want to change every node in the mesh to `transitional-accepting` and reload
# then move every node to `transitional-sending` then reload, and finally `enforced` then reload. This allows you to
# avoid stopping the world to use psk. You must ensure at `transitional-accepting` that all nodes have the same psks.
#mode: none
# In `transitional-accepting`, `transitional-sending` and `enforced` modes, the keys provided here are sent through
# hkdf with the intended recipients ip used in the info section. This helps guard against handshaking with the wrong
# host if your static_host_map or lighthouse(s) has incorrect information.
#
# Setting keys if mode is `none` has no effect.
#
# Only the first key is used for outbound handshakes but all keys provided will be tried in the order specified, on
# incoming handshakes. This is to allow for psk rotation.
#keys:
# - shared secret string, this one is used in all outbound handshakes
# - this is a fallback key, received handshakes can use this
# - another fallback, received handshakes can use this one too
# - "\x68\x65\x6c\x6c\x6f\x20\x66\x72\x69\x65\x6e\x64\x73" # for raw bytes if you desire
# Nebula security group configuration
firewall:
# Action to take when a packet is not allowed by the firewall rules.
# Can be one of:
# `drop` (default): silently drop the packet.
# `reject`: send a reject reply.
# - For TCP, this will be a RST "Connection Reset" packet.
# - For other protocols, this will be an ICMP port unreachable packet.
outbound_action: drop
inbound_action: drop
# Controls the default value for local_cidr. Default is true, will be deprecated after v1.9 and defaulted to false.
# This setting only affects nebula hosts with subnets encoded in their certificate. A nebula host acting as an
# unsafe router with `default_local_cidr_any: true` will expose their unsafe routes to every inbound rule regardless
# of the actual destination for the packet. Setting this to false requires each inbound rule to contain a `local_cidr`
# if the intention is to allow traffic to flow to an unsafe route.
#default_local_cidr_any: false
conntrack:
tcp_timeout: 12m
udp_timeout: 3m
default_timeout: 10m
max_connections: 100000
# The firewall is default deny. There is no way to write a deny rule.
# Rules are comprised of a protocol, port, and one or more of host, group, or CIDR
# Logical evaluation is roughly: port AND proto AND (ca_sha OR ca_name) AND (host OR group OR groups OR cidr)
# Logical evaluation is roughly: port AND proto AND (ca_sha OR ca_name) AND (host OR group OR groups OR cidr) AND (local cidr)
# - port: Takes `0` or `any` as any, a single number `80`, a range `200-901`, or `fragment` to match second and further fragments of fragmented packets (since there is no port available).
# code: same as port but makes more sense when talking about ICMP, TODO: this is not currently implemented in a way that works, use `any`
# proto: `any`, `tcp`, `udp`, or `icmp`
# host: `any` or a literal hostname, ie `test-host`
# group: `any` or a literal group name, ie `default-group`
# groups: Same as group but accepts a list of values. Multiple values are AND'd together and a certificate would have to contain all groups to pass
# cidr: a CIDR, `0.0.0.0/0` is any.
# cidr: a remote CIDR, `0.0.0.0/0` is any.
# local_cidr: a local CIDR, `0.0.0.0/0` is any. This could be used to filter destinations when using unsafe_routes.
# Default is `any` unless the certificate contains subnets and then the default is the ip issued in the certificate
# if `default_local_cidr_any` is false, otherwise its `any`.
# ca_name: An issuing CA name
# ca_sha: An issuing CA shasum
@@ -294,3 +361,10 @@ firewall:
groups:
- laptop
- home
# Expose a subnet (unsafe route) to hosts with the group remote_client
# This example assume you have a subnet of 192.168.100.1/24 or larger encoded in the certificate
- port: 8080
proto: tcp
group: remote_client
local_cidr: 192.168.100.1/24

examples/go_service/main.go

@@ -0,0 +1,100 @@
package main
import (
"bufio"
"fmt"
"log"
"github.com/slackhq/nebula/config"
"github.com/slackhq/nebula/service"
)
func main() {
if err := run(); err != nil {
log.Fatalf("%+v", err)
}
}
func run() error {
configStr := `
tun:
user: true
static_host_map:
'192.168.100.1': ['localhost:4242']
listen:
host: 0.0.0.0
port: 4241
lighthouse:
am_lighthouse: false
interval: 60
hosts:
- '192.168.100.1'
firewall:
outbound:
# Allow all outbound traffic from this node
- port: any
proto: any
host: any
inbound:
# Allow icmp between any nebula hosts
- port: any
proto: icmp
host: any
- port: any
proto: any
host: any
pki:
ca: /home/rice/Developer/nebula-config/ca.crt
cert: /home/rice/Developer/nebula-config/app.crt
key: /home/rice/Developer/nebula-config/app.key
`
var config config.C
if err := config.LoadString(configStr); err != nil {
return err
}
service, err := service.New(&config)
if err != nil {
return err
}
ln, err := service.Listen("tcp", ":1234")
if err != nil {
return err
}
for {
conn, err := ln.Accept()
if err != nil {
log.Printf("accept error: %s", err)
break
}
defer conn.Close()
log.Printf("got connection")
conn.Write([]byte("hello world\n"))
scanner := bufio.NewScanner(conn)
for scanner.Scan() {
message := scanner.Text()
fmt.Fprintf(conn, "echo: %q\n", message)
log.Printf("got message %q", message)
}
if err := scanner.Err(); err != nil {
log.Printf("scanner error: %s", err)
break
}
}
service.Close()
if err := service.Wait(); err != nil {
return err
}
return nil
}


@@ -1,139 +0,0 @@
# Quickstart Guide
This guide is intended to bring up a vagrant environment with 1 lighthouse and 2 generic hosts running nebula.
## Creating the virtualenv for ansible
Within the `quickstart/` directory, do the following
```
# make a virtual environment
virtualenv venv
# get into the virtualenv
source venv/bin/activate
# install ansible
pip install -r requirements.yml
```
## Bringing up the vagrant environment
A plugin that is used for the Vagrant environment is `vagrant-hostmanager`
To install, run
```
vagrant plugin install vagrant-hostmanager
```
All hosts within the Vagrantfile are brought up with
`vagrant up`
Once the boxes are up, go into the `ansible/` directory and deploy the playbook by running
`ansible-playbook playbook.yml -i inventory -u vagrant`
## Testing within the vagrant env
Once the ansible run is done, hop onto a vagrant box
`vagrant ssh generic1.vagrant`
or specifically
`ssh vagrant@<ip-address-in-vagrant-file>` (password for the vagrant user on the boxes is `vagrant`)
A quick test once the vagrant boxes are up is to ping `generic2.vagrant` from `generic1.vagrant` using
their respective nebula IP addresses.
```
vagrant@generic1:~$ ping 10.168.91.220
PING 10.168.91.220 (10.168.91.220) 56(84) bytes of data.
64 bytes from 10.168.91.220: icmp_seq=1 ttl=64 time=241 ms
64 bytes from 10.168.91.220: icmp_seq=2 ttl=64 time=0.704 ms
```
You can further verify that the allowed nebula firewall rules work by ssh'ing from one generic box to the other.
`ssh vagrant@<nebula-ip-address>` (password for the vagrant user on the boxes is `vagrant`)
See `/etc/nebula/config.yml` on a box for firewall rules.
To see full handshakes and hostmaps, change the logging config of `/etc/nebula/config.yml` on the vagrant boxes from
info to debug.
You can watch nebula logs by running
```
sudo journalctl -fu nebula
```
Refer to the nebula src code directory's README for further instructions on configuring nebula.
## Troubleshooting
### Is nebula up and running?
Run and verify that
```
ifconfig
```
shows an interface named `nebula1` that is up.
```
vagrant@generic1:~$ ifconfig nebula1
nebula1: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST> mtu 1300
inet 10.168.91.210 netmask 255.128.0.0 destination 10.168.91.210
inet6 fe80::aeaf:b105:e6dc:936c prefixlen 64 scopeid 0x20<link>
unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 txqueuelen 500 (UNSPEC)
RX packets 2 bytes 168 (168.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 11 bytes 600 (600.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
```
### Connectivity
Are you able to ping other boxes on the private nebula network?
The following are the private nebula ip addresses of the vagrant env
```
generic1.vagrant [nebula_ip] 10.168.91.210
generic2.vagrant [nebula_ip] 10.168.91.220
lighthouse1.vagrant [nebula_ip] 10.168.91.230
```
Try pinging `generic1.vagrant` to and from any other box using the nebula IPs above.
Double check the nebula firewall rules under `/etc/nebula/config.yml` to make sure connectivity is allowed for your use case on the port you are testing.
```
vagrant@lighthouse1:~$ grep -A21 firewall /etc/nebula/config.yml
firewall:
  conntrack:
    tcp_timeout: 12m
    udp_timeout: 3m
    default_timeout: 10m
    max_connections: 100000
  inbound:
    - proto: icmp
      port: any
      host: any
    - proto: any
      port: 22
      host: any
    - proto: any
      port: 53
      host: any
  outbound:
    - proto: any
      port: any
      host: any
```


@@ -1,40 +0,0 @@
Vagrant.require_version ">= 2.2.6"

nodes = [
  { :hostname => 'generic1.vagrant', :ip => '172.11.91.210', :box => 'bento/ubuntu-18.04', :ram => '512', :cpus => 1},
  { :hostname => 'generic2.vagrant', :ip => '172.11.91.220', :box => 'bento/ubuntu-18.04', :ram => '512', :cpus => 1},
  { :hostname => 'lighthouse1.vagrant', :ip => '172.11.91.230', :box => 'bento/ubuntu-18.04', :ram => '512', :cpus => 1},
]

Vagrant.configure("2") do |config|
  config.ssh.insert_key = false

  if Vagrant.has_plugin?('vagrant-cachier')
    config.cache.enable :apt
  else
    printf("** Install vagrant-cachier plugin to speedup deploy: `vagrant plugin install vagrant-cachier`.**\n")
  end

  if Vagrant.has_plugin?('vagrant-hostmanager')
    config.hostmanager.enabled = true
    config.hostmanager.manage_host = true
    config.hostmanager.include_offline = true
  else
    config.vagrant.plugins = "vagrant-hostmanager"
  end

  nodes.each do |node|
    config.vm.define node[:hostname] do |node_config|
      node_config.vm.box = node[:box]
      node_config.vm.hostname = node[:hostname]
      node_config.vm.network :private_network, ip: node[:ip]
      node_config.vm.provider :virtualbox do |vb|
        vb.memory = node[:ram]
        vb.cpus = node[:cpus]
        vb.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
        vb.customize ['guestproperty', 'set', :id, '/VirtualBox/GuestAdd/VBoxService/--timesync-set-threshold', 10000]
      end
    end
  end
end


@@ -1,4 +0,0 @@
[defaults]
host_key_checking = False
private_key_file = ~/.vagrant.d/insecure_private_key
become = yes


@@ -1,21 +0,0 @@
#!/usr/bin/python

# Ansible filter plugin used by the quickstart playbook to derive nebula
# overlay addresses from the vagrant private-network addresses, e.g.
# 172.11.91.210 -> 10.168.91.210.
class FilterModule(object):
    def filters(self):
        return {
            'to_nebula_ip': self.to_nebula_ip,
            'map_to_nebula_ips': self.map_to_nebula_ips,
        }

    def to_nebula_ip(self, ip_str):
        # Replace the first two octets with 10.168, keeping the host portion.
        ip_list = list(map(int, ip_str.split(".")))
        ip_list[0] = 10
        ip_list[1] = 168
        ip = '.'.join(map(str, ip_list))
        return ip

    def map_to_nebula_ips(self, ip_strs):
        # Convert a list of addresses and join them into a comma-separated string.
        ip_list = [self.to_nebula_ip(ip_str) for ip_str in ip_strs]
        ips = ', '.join(ip_list)
        return ips


@@ -1,11 +0,0 @@
[all]
generic1.vagrant
generic2.vagrant
lighthouse1.vagrant
[generic]
generic1.vagrant
generic2.vagrant
[lighthouse]
lighthouse1.vagrant


@@ -1,23 +0,0 @@
---
- name: test connection to vagrant boxes
  hosts: all
  tasks:
    - debug: msg=ok

- name: build nebula binaries locally
  connection: local
  hosts: localhost
  tasks:
    - command: chdir=../../../ make build/linux-amd64/"{{ item }}"
      with_items:
        - nebula
        - nebula-cert
      tags:
        - build-nebula

- name: install nebula on all vagrant hosts
  hosts: all
  become: yes
  gather_facts: yes
  roles:
    - nebula


@@ -1,3 +0,0 @@
---
# defaults file for nebula
nebula_config_directory: "/etc/nebula/"

Some files were not shown because too many files have changed in this diff.