Compare commits

...

265 Commits

Author SHA1 Message Date
JackDoan
9cef6752c9 more incompatibilities, was this a good idea at all? 2025-10-17 15:52:15 -05:00
JackDoan
a0c6cea6fc backport incompatible bart change (grr) 2025-10-17 12:05:23 -05:00
JackDoan
fc165a8b75 update CHANGELOG.md 2025-10-17 11:54:35 -05:00
JackDoan
845d72b97b update bart, x/crypto, x/net 2025-10-17 11:52:58 -05:00
Nate Brown
7c3f533950 Better words (#1497) 2025-10-10 10:31:46 -05:00
Nate Brown
824cd3f0d6 Update CHANGELOG for Nebula v1.9.7 2025-10-07 21:10:16 -05:00
Nate Brown
9f692175e1 HostInfo.remoteCidr should only be populated with the entire vpn ip address issued in the certificate (#1494) 2025-10-07 17:35:58 -05:00
Nate Brown
22af56f156 Fix recv_error receipt limit allowance for v1.9.x (#1459)
* Fix recv_error receipt limit allowance

* backport #1463 recv_error behavior changes

---------

Co-authored-by: JackDoan <me@jackdoan.com>
2025-09-04 15:52:32 -05:00
brad-defined
1d73e463cd Quietly log error on UDP_NETRESET ioctl on Windows. (#1453)
* Quietly log error on UDP_NETRESET ioctl on Windows.

* dampen unexpected error warnings
2025-08-19 17:33:31 -04:00
brad-defined
105e0ec66c v1.9.6 (#1434)
Update CHANGELOG for Nebula v1.9.6
2025-07-18 08:39:33 -04:00
Nate Brown
4870bb680d Darwin udp fix (#1426) 2025-07-01 16:41:29 -05:00
brad-defined
a1498ca8f8 Store relay states in a slice for consistent ordering (#1422) 2025-06-24 12:04:00 -04:00
Nate Brown
9877648da9 Drop inactive tunnels (#1413) 2025-06-23 11:32:50 -05:00
brad-defined
8e0a7bcbb7 Disable UDP receive error returns due to ICMP messages on Windows. (#1412) 2025-05-22 08:55:45 -04:00
brad-defined
8c29b15c6d fix relay migration panic (#1403) 2025-05-13 14:58:58 -04:00
brad-defined
04d7a8ccba Retry UDP receive on Windows in some receive error cases (#1404) 2025-05-13 14:58:37 -04:00
Nate Brown
b55b9019a7 v1.9.5 (#1285)
Update CHANGELOG for Nebula v1.9.5
2024-12-06 09:50:24 -05:00
Nate Brown
2e85d138cd [v1.9.x] do not panic when loading a V2 CA certificate (#1282)
Co-authored-by: Jack Doan <jackdoan@rivian.com>
2024-12-03 09:49:54 -06:00
brad-defined
9bfdfbafc1 Backport reestablish relays from cert-v2 to release-1.9 (#1277) 2024-11-20 21:49:53 -06:00
Wade Simmons
ab81b62ea0 v1.9.4 (#1210)
Update CHANGELOG for Nebula v1.9.4
2024-09-09 14:11:44 -04:00
dependabot[bot]
45bbad2f21 Bump the golang-x-dependencies group with 4 updates (#1195)
Bumps the golang-x-dependencies group with 4 updates: [golang.org/x/crypto](https://github.com/golang/crypto), [golang.org/x/net](https://github.com/golang/net), [golang.org/x/sys](https://github.com/golang/sys) and [golang.org/x/term](https://github.com/golang/term).


Updates `golang.org/x/crypto` from 0.25.0 to 0.26.0
- [Commits](https://github.com/golang/crypto/compare/v0.25.0...v0.26.0)

Updates `golang.org/x/net` from 0.27.0 to 0.28.0
- [Commits](https://github.com/golang/net/compare/v0.27.0...v0.28.0)

Updates `golang.org/x/sys` from 0.23.0 to 0.24.0
- [Commits](https://github.com/golang/sys/compare/v0.23.0...v0.24.0)

Updates `golang.org/x/term` from 0.22.0 to 0.23.0
- [Commits](https://github.com/golang/term/compare/v0.22.0...v0.23.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/net
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/sys
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/term
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-03 16:47:36 -04:00
Jack Doan
3dc56e1184 Support UDP dialling with gvisor (#1181) 2024-08-26 12:38:32 -05:00
Wade Simmons
0736cfa562 udp: fix endianness for port (#1194)
If the host OS is already big endian, we were swapping bytes when we
shouldn't have. Use the Go helper to make sure we handle the endianness
correctly.

Fixes: #1189
2024-08-14 12:53:00 -04:00
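For illustration, a minimal Go sketch of the portable approach the commit describes, assuming the fix amounts to writing the port in network byte order with encoding/binary rather than swapping bytes by hand (the function name here is illustrative, not Nebula's):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// portToNetwork encodes a port in network byte order (big endian) regardless
// of the host's native endianness, so no manual byte swapping is needed.
func portToNetwork(port uint16) [2]byte {
	var b [2]byte
	binary.BigEndian.PutUint16(b[:], port)
	return b
}

func main() {
	fmt.Printf("% x\n", portToNetwork(4242)) // 10 92 on every platform
}
```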
Jack Doan
248cf194cd fix integer wraparound in the calculation of handshake timeouts on 32-bit targets (#1185)
Fixes: #1169
2024-08-13 09:25:18 -04:00
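As a hedged illustration of the class of bug named in the title (not the actual Nebula code): intermediate arithmetic done in a platform-sized int can wrap on 32-bit targets, while keeping the math in time.Duration (an int64) does not.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const backoffSteps = 20
	base := 100 * time.Millisecond

	// On a 32-bit target, int is 32 bits, so this intermediate product
	// (~1e14 nanoseconds) wraps around and yields a nonsense timeout.
	// On 64-bit builds it happens to stay in range, hiding the bug.
	wrapped := int(base) * (1 << backoffSteps)

	// Keeping everything in time.Duration (int64) avoids the wraparound.
	safe := base * time.Duration(1<<backoffSteps)

	fmt.Println(wrapped, safe)
}
```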
dependabot[bot]
8a6a0f0636 Bump the golang-x-dependencies group with 2 updates (#1190)
Bumps the golang-x-dependencies group with 2 updates: [golang.org/x/sync](https://github.com/golang/sync) and [golang.org/x/sys](https://github.com/golang/sys).


Updates `golang.org/x/sync` from 0.7.0 to 0.8.0
- [Commits](https://github.com/golang/sync/compare/v0.7.0...v0.8.0)

Updates `golang.org/x/sys` from 0.22.0 to 0.23.0
- [Commits](https://github.com/golang/sys/compare/v0.22.0...v0.23.0)

---
updated-dependencies:
- dependency-name: golang.org/x/sync
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/sys
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-08-07 11:58:46 -04:00
Wade Simmons
f5f6c269ac fix rare panic when local index collision happens (#1191)
A local index collision happens when two tunnels attempt to use the same
random int32 index ID. This is rare, and we have code to deal with it,
but we panic because we return the wrong thing in this case. This change
should fix the panic.
2024-08-07 11:53:32 -04:00
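A minimal sketch of the allocate-with-retry pattern the message describes, with hypothetical names; the key point is that the failure path must return an explicit error rather than a bogus value:

```go
package main

import (
	"crypto/rand"
	"encoding/binary"
	"errors"
	"fmt"
)

var localIndexes = map[uint32]struct{}{}

// newLocalIndex picks a random index and retries on collision. Returning
// (0, error) when we give up is the important part; returning the wrong
// thing on this path is the kind of bug the commit fixes.
func newLocalIndex() (uint32, error) {
	for attempt := 0; attempt < 32; attempt++ {
		var b [4]byte
		if _, err := rand.Read(b[:]); err != nil {
			return 0, err
		}
		idx := binary.BigEndian.Uint32(b[:])
		if _, taken := localIndexes[idx]; taken {
			continue // collision: try another random index
		}
		localIndexes[idx] = struct{}{}
		return idx, nil
	}
	return 0, errors.New("could not allocate a unique local index")
}

func main() {
	idx, err := newLocalIndex()
	fmt.Println(idx, err)
}
```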
brad-defined
9a63fa0a07 Make some Nebula state programmatically available via control object (#1188) 2024-08-01 13:40:05 -04:00
Nate Brown
e264a0ff88 Switch most everything to netip in prep for ipv6 in the overlay (#1173) 2024-07-31 10:18:56 -05:00
dependabot[bot]
00458302ca Bump the golang-x-dependencies group with 4 updates (#1174)
Bumps the golang-x-dependencies group with 4 updates: [golang.org/x/crypto](https://github.com/golang/crypto), [golang.org/x/net](https://github.com/golang/net), [golang.org/x/sys](https://github.com/golang/sys) and [golang.org/x/term](https://github.com/golang/term).


Updates `golang.org/x/crypto` from 0.24.0 to 0.25.0
- [Commits](https://github.com/golang/crypto/compare/v0.24.0...v0.25.0)

Updates `golang.org/x/net` from 0.26.0 to 0.27.0
- [Commits](https://github.com/golang/net/compare/v0.26.0...v0.27.0)

Updates `golang.org/x/sys` from 0.21.0 to 0.22.0
- [Commits](https://github.com/golang/sys/compare/v0.21.0...v0.22.0)

Updates `golang.org/x/term` from 0.21.0 to 0.22.0
- [Commits](https://github.com/golang/term/compare/v0.21.0...v0.22.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/net
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/sys
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/term
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-07-29 11:42:33 -04:00
Wade Simmons
e6009b8491 github actions: use macos-latest (#1171)
macos-11 was deprecated and removed:

> The macos-11 label has been deprecated and will no longer be available after 28 June 2024.

We can just use macos-latest instead.
2024-07-02 11:50:51 -04:00
dependabot[bot]
b9aace1e58 Bump github.com/prometheus/client_golang from 1.19.0 to 1.19.1 (#1147)
Bumps [github.com/prometheus/client_golang](https://github.com/prometheus/client_golang) from 1.19.0 to 1.19.1.
- [Release notes](https://github.com/prometheus/client_golang/releases)
- [Changelog](https://github.com/prometheus/client_golang/blob/main/CHANGELOG.md)
- [Commits](https://github.com/prometheus/client_golang/compare/v1.19.0...v1.19.1)

---
updated-dependencies:
- dependency-name: github.com/prometheus/client_golang
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-06-24 14:54:51 -04:00
dependabot[bot]
a76723eaf5 Bump Apple-Actions/import-codesign-certs from 2 to 3 (#1146)
Bumps [Apple-Actions/import-codesign-certs](https://github.com/apple-actions/import-codesign-certs) from 2 to 3.
- [Release notes](https://github.com/apple-actions/import-codesign-certs/releases)
- [Commits](https://github.com/apple-actions/import-codesign-certs/compare/v2...v3)

---
updated-dependencies:
- dependency-name: Apple-Actions/import-codesign-certs
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-06-24 14:54:05 -04:00
Caleb Jasik
8109cf2170 Add punctuation to doc comment (#1164)
* Add punctuation to doc comment

* Fix list formatting inside `EncryptDanger` doc comment
2024-06-24 14:50:17 -04:00
Wade Simmons
97e9834f82 cleanup SK_MEMINFO vars (#1162)
We had to manually define these types before, but the latest release of
`golang.org/x/sys` adds these definitions:

- 6dfb94eaa3

Since we just updated with this PR, we can clean this up now:

- https://github.com/slackhq/nebula/pull/1161
2024-06-24 14:47:14 -04:00
dependabot[bot]
506ba5ab5b Bump github.com/miekg/dns from 1.1.59 to 1.1.61 (#1168)
Bumps [github.com/miekg/dns](https://github.com/miekg/dns) from 1.1.59 to 1.1.61.
- [Changelog](https://github.com/miekg/dns/blob/master/Makefile.release)
- [Commits](https://github.com/miekg/dns/compare/v1.1.59...v1.1.61)

---
updated-dependencies:
- dependency-name: github.com/miekg/dns
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-06-24 14:46:27 -04:00
dependabot[bot]
d372df56ab Bump google.golang.org/protobuf in the protobuf-dependencies group (#1167)
Bumps the protobuf-dependencies group with 1 update: google.golang.org/protobuf.


Updates `google.golang.org/protobuf` from 1.34.1 to 1.34.2

---
updated-dependencies:
- dependency-name: google.golang.org/protobuf
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: protobuf-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-06-24 14:45:52 -04:00
dependabot[bot]
40cfd00e87 Bump the golang-x-dependencies group with 4 updates (#1161)
Bumps the golang-x-dependencies group with 4 updates: [golang.org/x/crypto](https://github.com/golang/crypto), [golang.org/x/net](https://github.com/golang/net), [golang.org/x/sys](https://github.com/golang/sys) and [golang.org/x/term](https://github.com/golang/term).


Updates `golang.org/x/crypto` from 0.23.0 to 0.24.0
- [Commits](https://github.com/golang/crypto/compare/v0.23.0...v0.24.0)

Updates `golang.org/x/net` from 0.25.0 to 0.26.0
- [Commits](https://github.com/golang/net/compare/v0.25.0...v0.26.0)

Updates `golang.org/x/sys` from 0.20.0 to 0.21.0
- [Commits](https://github.com/golang/sys/compare/v0.20.0...v0.21.0)

Updates `golang.org/x/term` from 0.20.0 to 0.21.0
- [Commits](https://github.com/golang/term/compare/v0.20.0...v0.21.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/net
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/sys
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/term
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-06-10 16:08:43 -04:00
Wade Simmons
b14bad586a v1.9.3 (#1160)
Update CHANGELOG for Nebula v1.9.3
2024-06-06 13:17:07 -04:00
Wade Simmons
4c066d8c32 initialize messageCounter to 2 instead of verifying later (#1156)
Clean up the messageCounter checks added in #1154. Instead of checking that
messageCounter is still at 2, just initialize it to 2 and only increment for
non-handshake messages. Handshake packets will always be packets 1 and 2.
2024-06-06 13:03:07 -04:00
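A tiny sketch of the idea, using an illustrative atomic counter rather than Nebula's actual types: start the outbound counter at 2, because the handshake always consumes counters 1 and 2, and only increment it for data messages.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

func main() {
	var messageCounter atomic.Uint64
	messageCounter.Store(2) // the handshake always uses counters 1 and 2

	nextCounter := func() uint64 { return messageCounter.Add(1) }

	// First data packets after the handshake get counters 3, 4, ...
	fmt.Println(nextCounter(), nextCounter())
}
```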
Wade Simmons
249ae41fec v1.9.2 (#1155)
Update CHANGELOG for Nebula v1.9.2
2024-06-03 15:50:02 -04:00
Wade Simmons
d9cae9e062 ensure messageCounter is set before handshake is complete (#1154)
Ensure we set messageCounter to 2 before the handshake is marked as
complete.
2024-06-03 15:40:51 -04:00
Wade Simmons
a92056a7db v1.9.1 (#1152)
Update CHANGELOG for Nebula v1.9.1
2024-05-29 14:06:46 -04:00
Wade Simmons
4eb1da0958 remove deadlock in GetOrHandshake (#1151)
We had a rare deadlock in GetOrHandshake because we kept the hostmap
lock while calling StartHandshake. StartHandshake can block while
sending to the lighthouse query worker channel, and that worker needs to
be able to grab the hostmap lock to do its work. Other calls to
StartHandshake don't hold the hostmap lock, so we should be able to drop
it here.

This lock was originally added with: https://github.com/slackhq/nebula/pull/954
2024-05-29 12:52:52 -04:00
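The pattern at play, sketched with hypothetical names (not Nebula's real structures): release the shared lock before a channel send that may block, because the worker draining that channel needs the same lock.

```go
package main

import "sync"

type hostInfo struct{ name string }

type hostMap struct {
	sync.RWMutex
	hosts map[string]*hostInfo
}

// getOrHandshake looks up a host and otherwise queues a handshake. The lock
// is dropped before the send below, which may block; the worker reading
// queryChan needs this same lock, so holding it across the send can deadlock.
func getOrHandshake(hm *hostMap, queryChan chan<- string, name string) *hostInfo {
	hm.RLock()
	h, ok := hm.hosts[name]
	hm.RUnlock()

	if ok {
		return h
	}
	queryChan <- name
	return nil
}

func main() {
	hm := &hostMap{hosts: map[string]*hostInfo{}}
	ch := make(chan string, 1)
	getOrHandshake(hm, ch, "host-a")
}
```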
Wade Simmons
50b24c102e v1.9.0 (#1137)
Update CHANGELOG for Nebula v1.9.0

Co-authored-by: John Maguire <john@defined.net>
2024-05-08 10:31:24 -04:00
dependabot[bot]
c0130f8161 Bump the golang-x-dependencies group with 4 updates (#1138)
Bumps the golang-x-dependencies group with 4 updates: [golang.org/x/crypto](https://github.com/golang/crypto), [golang.org/x/net](https://github.com/golang/net), [golang.org/x/sys](https://github.com/golang/sys) and [golang.org/x/term](https://github.com/golang/term).


Updates `golang.org/x/crypto` from 0.22.0 to 0.23.0
- [Commits](https://github.com/golang/crypto/compare/v0.22.0...v0.23.0)

Updates `golang.org/x/net` from 0.24.0 to 0.25.0
- [Commits](https://github.com/golang/net/compare/v0.24.0...v0.25.0)

Updates `golang.org/x/sys` from 0.19.0 to 0.20.0
- [Commits](https://github.com/golang/sys/compare/v0.19.0...v0.20.0)

Updates `golang.org/x/term` from 0.19.0 to 0.20.0
- [Commits](https://github.com/golang/term/compare/v0.19.0...v0.20.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/net
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/sys
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/term
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-05-06 16:17:50 -04:00
dependabot[bot]
f19a28645e Bump google.golang.org/protobuf in the protobuf-dependencies group (#1139)
Bumps the protobuf-dependencies group with 1 update: google.golang.org/protobuf.


Updates `google.golang.org/protobuf` from 1.34.0 to 1.34.1

---
updated-dependencies:
- dependency-name: google.golang.org/protobuf
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: protobuf-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-05-06 16:17:05 -04:00
Jack Doan
fd1906b16f minor text fixes (#1135) 2024-05-03 20:43:40 -05:00
Wade Simmons
d6e4b88bb5 release: use download-action v4 in docker section (#1134)
We missed this upgrade in #1047
2024-05-03 11:35:55 -04:00
dependabot[bot]
18f69af455 Bump actions/download-artifact from 3 to 4 (#1047)
Bumps [actions/download-artifact](https://github.com/actions/download-artifact) from 3 to 4.
- [Release notes](https://github.com/actions/download-artifact/releases)
- [Commits](https://github.com/actions/download-artifact/compare/v3...v4)

---
updated-dependencies:
- dependency-name: actions/download-artifact
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-05-02 11:25:22 -04:00
dependabot[bot]
aa18d7fa4f Bump actions/upload-artifact from 3 to 4 (#1046)
* Bump actions/upload-artifact from 3 to 4

Bumps [actions/upload-artifact](https://github.com/actions/upload-artifact) from 3 to 4.
- [Release notes](https://github.com/actions/upload-artifact/releases)
- [Commits](https://github.com/actions/upload-artifact/compare/v3...v4)

---
updated-dependencies:
- dependency-name: actions/upload-artifact
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

* try to fix upload conflict

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Wade Simmons <wsimmons@slack-corp.com>
2024-05-02 11:24:58 -04:00
John Maguire
b5c3486796 Push Docker images as part of the release workflow (#1037) 2024-05-02 09:37:11 -04:00
dependabot[bot]
f39bfbb7fa Bump google.golang.org/protobuf in the protobuf-dependencies group (#1133)
Bumps the protobuf-dependencies group with 1 update: google.golang.org/protobuf.


Updates `google.golang.org/protobuf` from 1.33.0 to 1.34.0

---
updated-dependencies:
- dependency-name: google.golang.org/protobuf
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: protobuf-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-30 11:45:05 -04:00
Wade Simmons
4f4941e187 Add Vagrant based smoke tests (#1067)
* WIP smoke test freebsd

* fix bitrot

We now test that the firewall blocks inbound on host3 from host2

* WIP ipv6 test

* cleanup

* rename to make clear

* fix filename

* restore

* no sudo docker

* WIP

* WIP

* WIP

* WIP

* extra smoke tests

* WIP

* WIP

* bring over improvements made in smoke.sh

* more tests

* use generic/freebsd14

* cleanup from test

* smoke test openbsd-amd64

* add netbsd-amd64

* try to fix vagrant
2024-04-30 11:02:16 -04:00
fyl
5f17db5dfa Add support for LoongArch64 (#1003) 2024-04-30 09:55:44 -05:00
John Maguire
f31bab5f1a Add support for SSH CAs (#1098)
- Accept certs signed by trusted CAs
- Username must match the cert principal if set
- Any username can be used if cert principal is empty
- Don't allow removed pubkeys/CAs to be used after reload
2024-04-30 10:50:17 -04:00
kindknow
9cd944d320 chore: fix function name in comment (#1111) 2024-04-30 09:43:38 -05:00
John Maguire
f7db0eb5cc Remove Vagrant example (#1129) 2024-04-30 09:40:24 -05:00
dependabot[bot]
7e7d5e00ca Bump github.com/prometheus/client_golang from 1.18.0 to 1.19.0 (#1086)
Bumps [github.com/prometheus/client_golang](https://github.com/prometheus/client_golang) from 1.18.0 to 1.19.0.
- [Release notes](https://github.com/prometheus/client_golang/releases)
- [Changelog](https://github.com/prometheus/client_golang/blob/main/CHANGELOG.md)
- [Commits](https://github.com/prometheus/client_golang/compare/v1.18.0...v1.19.0)

---
updated-dependencies:
- dependency-name: github.com/prometheus/client_golang
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-30 10:30:18 -04:00
Wade Simmons
24f336ec56 switch off deprecated elliptic.Marshal (#1108)
elliptic.Marshal was deprecated, we can replace it with the ECDH methods
even though we aren't using ECDH here. See:

- f03fb147d7

We are still using elliptic.Unmarshal because this issue needs to be
resolved:

- https://github.com/golang/go/issues/63963
2024-04-30 10:02:49 -04:00
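A small sketch of the replacement being described, assuming a P-256 key: converting the public key through crypto/ecdh and calling Bytes() yields the same uncompressed point encoding that elliptic.Marshal produced.

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"fmt"
)

func main() {
	priv, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}

	// Non-deprecated path: go through crypto/ecdh even though we aren't
	// doing ECDH; Bytes() is the 65-byte uncompressed point for P-256.
	ecdhPub, err := priv.PublicKey.ECDH()
	if err != nil {
		panic(err)
	}
	fmt.Println(len(ecdhPub.Bytes())) // 65
}
```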
John Maguire
d7f52dec41 Fix errant capitalisation in DNS TXT response (#1127)
Co-authored-by: Oliver Marriott <hello@omarriott.com>
2024-04-30 09:58:56 -04:00
NODA Kai
e54f9dd206 dns_server.go: parseQuery: set NXDOMAIN if there's no Answer to return (#845) 2024-04-30 09:56:57 -04:00
Andrew Kraut
df78158cfa Create service script for open-rc (#711) 2024-04-30 09:53:00 -04:00
Robin Candau
8b55caa15e Remove Arch nebula.service file (#1132) 2024-04-30 07:45:23 -04:00
Jon Rafkind
7ed9f2a688 add ssh command to print device info (#763) 2024-04-29 16:09:34 -05:00
Wade Simmons
3aca576b07 update to go1.22 (#981)
* update to go1.21

Since the first minor version update has already been released, we can
probably feel comfortable updating to go1.21. This version now enforces
that the go version on the system is compatible with the version
specified in go.mod, so we can remove the old logic around checking the
minimum version in the Makefile.

- https://go.dev/doc/go1.21#tools

> To improve forwards compatibility, Go 1.21 now reads the go line in a go.work or go.mod file as a strict minimum requirement: go 1.21.0 means that the workspace or module cannot be used with Go 1.20 or with Go 1.21rc1. This allows projects that depend on fixes made in later versions of Go to ensure that they are not used with earlier versions. It also gives better error reporting for projects that make use of new Go features: when the problem is that a newer Go version is needed, that problem is reported clearly, instead of attempting to build the code and printing errors about unresolved imports or syntax errors.

* update to go1.22

* bump gvisor

* fix merge conflicts

* use latest gvisor `go` branch

Need to use the latest commit on the `go` branch, see:

- https://github.com/google/gvisor?tab=readme-ov-file#using-go-get

* mod tidy

* more fixes

* give smoketest more time

Is this why it is failing?

* also a little more sleep here

---------

Co-authored-by: Jack Doan <me@jackdoan.com>
2024-04-29 16:44:42 -04:00
Nate Brown
a99618e95c Don't log invalid certificates (#1116) 2024-04-29 15:21:00 -05:00
Caleb Jasik
8e94eb974e Add suggested filenames for collected profiles in the ssh commands (#1109) 2024-04-29 15:20:46 -05:00
John Maguire
41e2e1de02 Remove Fedora nebula.service file (#1128) 2024-04-29 15:30:22 -04:00
dependabot[bot]
d95fb4a314 Bump the golang-x-dependencies group with 5 updates (#1110)
Bumps the golang-x-dependencies group with 5 updates:

| Package | From | To |
| --- | --- | --- |
| [golang.org/x/crypto](https://github.com/golang/crypto) | `0.21.0` | `0.22.0` |
| [golang.org/x/net](https://github.com/golang/net) | `0.22.0` | `0.24.0` |
| [golang.org/x/sync](https://github.com/golang/sync) | `0.6.0` | `0.7.0` |
| [golang.org/x/sys](https://github.com/golang/sys) | `0.18.0` | `0.19.0` |
| [golang.org/x/term](https://github.com/golang/term) | `0.18.0` | `0.19.0` |


Updates `golang.org/x/crypto` from 0.21.0 to 0.22.0
- [Commits](https://github.com/golang/crypto/compare/v0.21.0...v0.22.0)

Updates `golang.org/x/net` from 0.22.0 to 0.24.0
- [Commits](https://github.com/golang/net/compare/v0.22.0...v0.24.0)

Updates `golang.org/x/sync` from 0.6.0 to 0.7.0
- [Commits](https://github.com/golang/sync/compare/v0.6.0...v0.7.0)

Updates `golang.org/x/sys` from 0.18.0 to 0.19.0
- [Commits](https://github.com/golang/sys/compare/v0.18.0...v0.19.0)

Updates `golang.org/x/term` from 0.18.0 to 0.19.0
- [Commits](https://github.com/golang/term/compare/v0.18.0...v0.19.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/net
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/sync
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/sys
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/term
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-23 13:50:53 -04:00
dependabot[bot]
cdcea00669 Bump github.com/miekg/dns from 1.1.58 to 1.1.59 (#1126)
Bumps [github.com/miekg/dns](https://github.com/miekg/dns) from 1.1.58 to 1.1.59.
- [Changelog](https://github.com/miekg/dns/blob/master/Makefile.release)
- [Commits](https://github.com/miekg/dns/compare/v1.1.58...v1.1.59)

---
updated-dependencies:
- dependency-name: github.com/miekg/dns
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-23 11:08:08 -04:00
dependabot[bot]
9bd92a7fc2 Bump golang.org/x/net from 0.22.0 to 0.23.0 (#1123)
Bumps [golang.org/x/net](https://github.com/golang/net) from 0.22.0 to 0.23.0.
- [Commits](https://github.com/golang/net/compare/v0.22.0...v0.23.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-23 11:06:15 -04:00
Nate Brown
a5a07cc760 Allow :: in lighthouse.dns.host config (#1115) 2024-04-11 21:44:36 -05:00
Nate Brown
c1711bc9c5 Remove tcp rtt tracking from the firewall (#1114) 2024-04-11 21:44:22 -05:00
Wade Simmons
7efa750aef avoid deadlock in lighthouse queryWorker (#1112)
* avoid deadlock in lighthouse queryWorker

If the lighthouse queryWorker tries to grab the lock to call StartHandshake on
a lighthouse vpnIp, we can deadlock on the handshake_manager lock. This
change drops the handshake_manager lock before we send on the lighthouse
queryChan (which could block), and also avoids sending to the channel if
this is a lighthouse IP itself.

* need to hold lock during cacheCb
2024-04-11 17:00:01 -04:00
Nate Brown
a390125935 Support reloading preferred_ranges (#1043) 2024-04-03 22:14:51 -05:00
Nate Brown
bbb15f8cb1 Unsafe route reload (#1083) 2024-03-28 15:17:28 -05:00
John Maguire
8b68a08723 Fix "any" firewall rules for unsafe_routes (#1099) 2024-03-28 15:17:12 -05:00
dependabot[bot]
f8fb9759e9 Bump the golang-x-dependencies group with 1 update (#1094)
Bumps the golang-x-dependencies group with 1 update: [golang.org/x/net](https://github.com/golang/net).


Updates `golang.org/x/net` from 0.21.0 to 0.22.0
- [Commits](https://github.com/golang/net/compare/v0.21.0...v0.22.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-03-22 12:58:13 -04:00
dependabot[bot]
1f1d660200 Bump google.golang.org/protobuf from 1.32.0 to 1.33.0 (#1092)
Bumps google.golang.org/protobuf from 1.32.0 to 1.33.0.

---
updated-dependencies:
- dependency-name: google.golang.org/protobuf
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-03-18 11:12:13 -04:00
dependabot[bot]
279265058f Bump github.com/stretchr/testify from 1.8.4 to 1.9.0 (#1087)
Bumps [github.com/stretchr/testify](https://github.com/stretchr/testify) from 1.8.4 to 1.9.0.
- [Release notes](https://github.com/stretchr/testify/releases)
- [Commits](https://github.com/stretchr/testify/compare/v1.8.4...v1.9.0)

---
updated-dependencies:
- dependency-name: github.com/stretchr/testify
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-03-18 11:06:18 -04:00
dependabot[bot]
2a778de07e Bump github.com/flynn/noise from 1.0.1 to 1.1.0 (#1072)
Bumps [github.com/flynn/noise](https://github.com/flynn/noise) from 1.0.1 to 1.1.0.
- [Commits](https://github.com/flynn/noise/compare/v1.0.1...v1.1.0)

---
updated-dependencies:
- dependency-name: github.com/flynn/noise
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-03-18 10:47:53 -04:00
dependabot[bot]
2affd371e3 Bump the golang-x-dependencies group with 4 updates (#1085)
Bumps the golang-x-dependencies group with 4 updates: [golang.org/x/crypto](https://github.com/golang/crypto), [golang.org/x/net](https://github.com/golang/net), [golang.org/x/sys](https://github.com/golang/sys) and [golang.org/x/term](https://github.com/golang/term).


Updates `golang.org/x/crypto` from 0.18.0 to 0.21.0
- [Commits](https://github.com/golang/crypto/compare/v0.18.0...v0.21.0)

Updates `golang.org/x/net` from 0.20.0 to 0.21.0
- [Commits](https://github.com/golang/net/compare/v0.20.0...v0.21.0)

Updates `golang.org/x/sys` from 0.16.0 to 0.18.0
- [Commits](https://github.com/golang/sys/compare/v0.16.0...v0.18.0)

Updates `golang.org/x/term` from 0.16.0 to 0.18.0
- [Commits](https://github.com/golang/term/compare/v0.16.0...v0.18.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/net
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/sys
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/term
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-03-18 10:43:17 -04:00
Nate Brown
cc8b3cc961 Add config option for local_cidr control 2024-02-15 11:46:45 -06:00
Nate Brown
f346cf4109 At the end 2024-02-05 10:23:10 -06:00
Nate Brown
8f44f22c37 In the middle 2024-02-05 10:23:10 -06:00
John Maguire
8822f1366c Add link to logs guide in bug report template (#1065) 2024-02-01 12:40:23 -05:00
brad-defined
e3f5a129c1 Return full error context from ContextualError.Error() (#1069) 2024-01-31 15:31:46 -05:00
mrx
0f0534d739 Fix UDP listener on IPv4-only Linux (#787)
On some systems, IPv6 is disabled (for example, the CIS benchmark recommends disabling it when not used), but currently all UDP connections use AF_INET6 sockets.
When we bind an AF_INET6 socket to an address like ::ffff:1.2.3.4 (IPv4 addresses are parsed by net.ParseIP this way), we can't send or receive IPv6 packets anyway, so this will not break any scenarios.

---------

Co-authored-by: Wade Simmons <wsimmons@slack-corp.com>
2024-01-30 15:08:14 -05:00
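For context, a short Go snippet showing the address form the message refers to: net/netip reports whether an address is an IPv4-mapped IPv6 address (::ffff:1.2.3.4) and can unmap it, which is the shape a listener needs in order to choose an AF_INET socket on IPv4-only hosts.

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	mapped := netip.MustParseAddr("::ffff:1.2.3.4")
	fmt.Println(mapped.Is4In6()) // true: an IPv4 address in IPv6 clothing
	fmt.Println(mapped.Unmap())  // 1.2.3.4: usable with an AF_INET socket
}
```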
dependabot[bot]
c5a403b7a8 Bump github.com/vishvananda/netlink (#1034)
Bumps [github.com/vishvananda/netlink](https://github.com/vishvananda/netlink) from 1.1.1-0.20211118161826-650dca95af54 to 1.2.1-beta.2.
- [Release notes](https://github.com/vishvananda/netlink/releases)
- [Commits](https://github.com/vishvananda/netlink/commits/v1.2.1-beta.2)

---
updated-dependencies:
- dependency-name: github.com/vishvananda/netlink
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-23 10:40:29 -05:00
dependabot[bot]
f23d328561 Bump the protobuf-dependencies group with 1 update (#1053)
Bumps the protobuf-dependencies group with 1 update: google.golang.org/protobuf.


Updates `google.golang.org/protobuf` from 1.31.0 to 1.32.0

---
updated-dependencies:
- dependency-name: google.golang.org/protobuf
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: protobuf-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-23 10:39:53 -05:00
dependabot[bot]
a977ee653d Bump github.com/miekg/dns from 1.1.57 to 1.1.58 (#1063)
Bumps [github.com/miekg/dns](https://github.com/miekg/dns) from 1.1.57 to 1.1.58.
- [Changelog](https://github.com/miekg/dns/blob/master/Makefile.release)
- [Commits](https://github.com/miekg/dns/compare/v1.1.57...v1.1.58)

---
updated-dependencies:
- dependency-name: github.com/miekg/dns
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-23 10:37:53 -05:00
Lingfeng Zhang
1f83d1758d Support inlined sshd host key (#1054) 2024-01-22 13:58:44 -05:00
dependabot[bot]
3210198276 Bump github.com/prometheus/client_golang from 1.17.0 to 1.18.0 (#1055)
Bumps [github.com/prometheus/client_golang](https://github.com/prometheus/client_golang) from 1.17.0 to 1.18.0.
- [Release notes](https://github.com/prometheus/client_golang/releases)
- [Changelog](https://github.com/prometheus/client_golang/blob/main/CHANGELOG.md)
- [Commits](https://github.com/prometheus/client_golang/compare/v1.17.0...v1.18.0)

---
updated-dependencies:
- dependency-name: github.com/prometheus/client_golang
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-12 10:26:39 -05:00
dependabot[bot]
0cef634635 Bump github.com/miekg/dns from 1.1.56 to 1.1.57 (#1022)
Bumps [github.com/miekg/dns](https://github.com/miekg/dns) from 1.1.56 to 1.1.57.
- [Changelog](https://github.com/miekg/dns/blob/master/Makefile.release)
- [Commits](https://github.com/miekg/dns/compare/v1.1.56...v1.1.57)

---
updated-dependencies:
- dependency-name: github.com/miekg/dns
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-12 09:57:38 -05:00
dependabot[bot]
637dc18bf8 Bump the golang-x-dependencies group with 3 updates (#1059)
Bumps the golang-x-dependencies group with 3 updates: [golang.org/x/crypto](https://github.com/golang/crypto), [golang.org/x/net](https://github.com/golang/net) and [golang.org/x/sync](https://github.com/golang/sync).


Updates `golang.org/x/crypto` from 0.17.0 to 0.18.0
- [Commits](https://github.com/golang/crypto/compare/v0.17.0...v0.18.0)

Updates `golang.org/x/net` from 0.19.0 to 0.20.0
- [Commits](https://github.com/golang/net/compare/v0.19.0...v0.20.0)

Updates `golang.org/x/sync` from 0.5.0 to 0.6.0
- [Commits](https://github.com/golang/sync/compare/v0.5.0...v0.6.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/net
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/sync
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-12 09:55:41 -05:00
Wade Simmons
ea36949d8a v1.8.2 (#1058)
Update CHANGELOG for Nebula v1.8.2
2024-01-08 15:40:04 -05:00
Wade Simmons
0564d0a2cf when listen.port is zero, fix multiple routines (#1057)
This used to work correctly when the multiple routines work was first
added in #382, but an important step, discovering the listen port before
opening the other listeners on the same socket, was lost in this
PR: #653.

This change should fix the regression and allow multiple routines to
work correctly when listen.port is set to `0`.

Thanks to @rawdigits for tracking down and discovering this regression.
2024-01-08 13:49:44 -05:00
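A minimal sketch of the "discover the port first" step the commit restores (names illustrative): when listen.port is 0, bind once, read back the kernel-assigned port, and reuse that concrete port for the remaining routines instead of asking for port 0 again.

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	first, err := net.ListenUDP("udp4", &net.UDPAddr{Port: 0})
	if err != nil {
		panic(err)
	}
	defer first.Close()

	// The kernel picked a port for us; the remaining routines must reuse
	// this value (with SO_REUSEPORT in the real implementation) rather than
	// requesting port 0 again and ending up on different ports.
	port := first.LocalAddr().(*net.UDPAddr).Port
	fmt.Println("listening on port", port)
}
```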
nezu
b22ba6eb49 Update Arch Linux package link (#1024) 2023-12-27 10:38:24 -06:00
Wade Simmons
3a221812f6 test: build all non-main modules for mobile (#1036)
Ensure that we don't break the build for mobile by doing a `go build`
for all of the non-main modules in the repo. Should hopefully catch
issues like #1035 sooner.
2023-12-21 11:59:21 -05:00
dependabot[bot]
927ff4cc03 Bump github.com/flynn/noise from 1.0.0 to 1.0.1 (#1038)
Bumps [github.com/flynn/noise](https://github.com/flynn/noise) from 1.0.0 to 1.0.1.
- [Commits](https://github.com/flynn/noise/compare/v1.0.0...v1.0.1)

---
updated-dependencies:
- dependency-name: github.com/flynn/noise
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-21 11:57:53 -05:00
Wade Simmons
e5945a60aa v1.8.1 (#1049)
Update CHANGELOG for Nebula v1.8.1
2023-12-19 15:11:25 -05:00
Nate Brown
072edd56b3 Fix re-entrant GetOrHandshake issues (#1044) 2023-12-19 11:58:31 -06:00
dependabot[bot]
beb5f6bddc Bump golang.org/x/crypto from 0.16.0 to 0.17.0 (#1048)
Bumps [golang.org/x/crypto](https://github.com/golang/crypto) from 0.16.0 to 0.17.0.
- [Commits](https://github.com/golang/crypto/compare/v0.16.0...v0.17.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-19 10:57:09 -05:00
dependabot[bot]
8be9792059 Bump actions/setup-go from 4 to 5 (#1039)
Bumps [actions/setup-go](https://github.com/actions/setup-go) from 4 to 5.
- [Release notes](https://github.com/actions/setup-go/releases)
- [Commits](https://github.com/actions/setup-go/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/setup-go
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-13 22:45:09 -06:00
John Maguire
af2fc48378 Fix mobile builds (#1035) 2023-12-06 16:18:21 -05:00
Wade Simmons
1d2f95e718 v1.8.0 (#1017)
Update CHANGELOG for Nebula v1.8.0
2023-12-06 14:38:58 -05:00
Lars Lehtonen
3a8743d511 cmd/nebula-cert: fix clobbered error (#1032)
* cmd/nebula-cert: fix clobbered error

Signed-off-by: Lars Lehtonen <lars.lehtonen@gmail.com>

* apply suggestions from Nate

This makes it much clearer what is happening in the code

---------

Signed-off-by: Lars Lehtonen <lars.lehtonen@gmail.com>
Co-authored-by: Wade Simmons <wsimmons@slack-corp.com>
2023-12-06 13:20:49 -05:00
Dave Russell
0209402942 SIGHUP is only useful when config was loaded from a file (#1030)
Have (*config.C).CatchHUP() return early when there is no file
path available from which to reload.
This will allow wrapping services to manage their own signal
trapping (which is particularly important if they've used
config from a string).
2023-12-06 10:13:38 -05:00
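A compressed sketch of the behavior described, with hypothetical names: if the config did not come from a file there is nothing to reload, so the HUP catcher returns immediately and signal handling is left to the wrapping service.

```go
package main

import (
	"fmt"
	"os"
	"os/signal"
	"syscall"
)

type config struct{ path string }

// catchHUP reloads config on SIGHUP, but only when there is a file to reload.
func (c *config) catchHUP() {
	if c.path == "" {
		return // config came from a string; the embedder owns signal handling
	}
	ch := make(chan os.Signal, 1)
	signal.Notify(ch, syscall.SIGHUP)
	go func() {
		for range ch {
			fmt.Println("reloading", c.path)
		}
	}()
}

func main() {
	(&config{}).catchHUP()                   // no-op: nothing to reload
	(&config{path: "config.yml"}).catchHUP() // reloads on SIGHUP
}
```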
dependabot[bot]
fb55f5b762 Bump the golang-x-dependencies group with 3 updates (#1028)
Bumps the golang-x-dependencies group with 3 updates: [golang.org/x/crypto](https://github.com/golang/crypto), [golang.org/x/net](https://github.com/golang/net) and [golang.org/x/sync](https://github.com/golang/sync).


Updates `golang.org/x/crypto` from 0.14.0 to 0.16.0
- [Commits](https://github.com/golang/crypto/compare/v0.14.0...v0.16.0)

Updates `golang.org/x/net` from 0.17.0 to 0.19.0
- [Commits](https://github.com/golang/net/compare/v0.17.0...v0.19.0)

Updates `golang.org/x/sync` from 0.3.0 to 0.5.0
- [Commits](https://github.com/golang/sync/compare/v0.3.0...v0.5.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/net
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
- dependency-name: golang.org/x/sync
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-04 11:12:52 -05:00
Ben Ritcey
01cddb8013 Added firewall.rules.hash metric (#1010)
* Added firewall.rules.hash metric

Added a FNV-1 hash of the firewall rules as a Prometheus value.

* Switch FNV hash to int64, include both hashes in log messages

* Use a uint32 for the FNV hash

Let go-metrics cast the uint32 to an int64, so it won't be lossy
when it eventually emits a float64 Prometheus metric.
2023-11-28 11:56:47 -05:00
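A sketch of the metric described above, with illustrative names: hash the rendered rule set with FNV-1 (32-bit) and publish the value; widening the uint32 to an int64, as the commit notes, is lossless, unlike converting straight to a float64.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// rulesHash returns an FNV-1 (32-bit) hash of a rendered firewall rule set.
func rulesHash(rendered string) uint32 {
	h := fnv.New32() // fnv.New32 is FNV-1; New32a would be FNV-1a
	h.Write([]byte(rendered))
	return h.Sum32()
}

func main() {
	hash := rulesHash("outbound: any any allow\ninbound: icmp any allow")
	// In Nebula this value is emitted via go-metrics as firewall.rules.hash;
	// here we just print the int64 the gauge would carry.
	fmt.Println(int64(hash))
}
```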
Tristan Rice
1083279a45 add gvisor based service library (#965)
* add service/ library
2023-11-21 11:50:18 -05:00
Wade Simmons
fe16ea566d firewall reject packets: cleanup error cases (#957) 2023-11-13 12:43:51 -06:00
Nate Brown
3356e03d85 Default pki.disconnect_invalid to true and make it reloadable (#859) 2023-11-13 12:39:38 -06:00
dependabot[bot]
f41db52560 Bump the golang-x-dependencies group with 1 update (#1006)
Bumps the golang-x-dependencies group with 1 update: [golang.org/x/sys](https://github.com/golang/sys).

- [Commits](https://github.com/golang/sys/compare/v0.13.0...v0.14.0)

---
updated-dependencies:
- dependency-name: golang.org/x/sys
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-13 07:58:45 -08:00
Nate Brown
5181cb0474 Use generics for CIDRTrees to avoid casting issues (#1004) 2023-11-02 17:05:08 -05:00
Nate Brown
a44e1b8b05 Clean up a hostinfo to reduce memory usage (#955) 2023-11-02 16:53:59 -05:00
guangwu
276978377a chore: remove refs to deprecated io/ioutil (#987)
Signed-off-by: guoguangwu <guoguangwu@magic-shield.com>
2023-10-31 10:35:13 -04:00
dependabot[bot]
777eb96aea Bump github.com/prometheus/client_golang from 1.16.0 to 1.17.0 (#984)
Bumps [github.com/prometheus/client_golang](https://github.com/prometheus/client_golang) from 1.16.0 to 1.17.0.
- [Release notes](https://github.com/prometheus/client_golang/releases)
- [Changelog](https://github.com/prometheus/client_golang/blob/main/CHANGELOG.md)
- [Commits](https://github.com/prometheus/client_golang/compare/v1.16.0...v1.17.0)

---
updated-dependencies:
- dependency-name: github.com/prometheus/client_golang
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-10-31 10:33:04 -04:00
Wade Simmons
0912ef14f4 github actions smoke-test: run with data race detector (#988)
Run the github actions smoke tests with data race detector enabled, so
we can detect if a PR introduces a simple data race.
2023-10-31 10:32:39 -04:00
Lars Lehtonen
77a8ce1712 main: fix dropped error (#1002)
This isn't an actual issue because the current implementation of NewSSHServer never returns an error (https://github.com/slackhq/nebula/blob/v1.7.2/sshd/server.go#L56), but it's still good to fix so there are no surprises in the future.
2023-10-31 10:32:08 -04:00
John Maguire
87b628ba24 Fix truncated comment in config.yml (#999) 2023-10-27 08:39:34 -04:00
Nate Brown
50d6a1e8ca QueryServer needs to be done outside of the lock (#996) 2023-10-17 15:43:51 -05:00
dependabot[bot]
e78fe0b9ef Bump golang.org/x/net from 0.15.0 to 0.17.0 (#990)
Bumps [golang.org/x/net](https://github.com/golang/net) from 0.15.0 to 0.17.0.
- [Commits](https://github.com/golang/net/compare/v0.15.0...v0.17.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-10-16 13:28:59 -04:00
Nate Brown
5fccbb8676 Retry wintun creation (#985) 2023-10-16 10:06:43 -05:00
dependabot[bot]
c289c7a7ca Bump github.com/miekg/dns from 1.1.55 to 1.1.56 (#979)
Bumps [github.com/miekg/dns](https://github.com/miekg/dns) from 1.1.55 to 1.1.56.
- [Changelog](https://github.com/miekg/dns/blob/master/Makefile.release)
- [Commits](https://github.com/miekg/dns/compare/v1.1.55...v1.1.56)

---
updated-dependencies:
- dependency-name: github.com/miekg/dns
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-09-22 09:48:26 -04:00
dependabot[bot]
e3fbfbfd4d Bump golang.org/x/net from 0.14.0 to 0.15.0 (#977)
Bumps [golang.org/x/net](https://github.com/golang/net) from 0.14.0 to 0.15.0.
- [Commits](https://github.com/golang/net/compare/v0.14.0...v0.15.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-09-22 09:47:45 -04:00
dependabot[bot]
282ca4368e Bump golang.org/x/crypto from 0.12.0 to 0.13.0 (#976)
Bumps [golang.org/x/crypto](https://github.com/golang/crypto) from 0.12.0 to 0.13.0.
- [Commits](https://github.com/golang/crypto/compare/v0.12.0...v0.13.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-09-22 09:47:00 -04:00
Wade Simmons
280fa026ea smoke-test: don't assume docker needs sudo (#958)
Let the host deal with this detail if necessary
2023-09-07 13:57:41 -04:00
Lars Lehtonen
dbdb48f182 cert: fix dropped errors (#961) 2023-09-07 13:54:01 -04:00
Nate Brown
f7e392995a Fix rebind to not put the socket in blocking mode (#972) 2023-09-07 11:56:09 -05:00
dependabot[bot]
d271df8da8 Bump golang.org/x/term from 0.11.0 to 0.12.0 (#967)
Bumps [golang.org/x/term](https://github.com/golang/term) from 0.11.0 to 0.12.0.
- [Commits](https://github.com/golang/term/compare/v0.11.0...v0.12.0)

---
updated-dependencies:
- dependency-name: golang.org/x/term
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-09-05 12:47:55 -04:00
dependabot[bot]
eea5e6a5df Bump actions/checkout from 3 to 4 (#969)
Bumps [actions/checkout](https://github.com/actions/checkout) from 3 to 4.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v3...v4)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-09-05 11:43:56 -04:00
dependabot[bot]
790268a176 Bump golang.org/x/sys from 0.11.0 to 0.12.0 (#968)
Bumps [golang.org/x/sys](https://github.com/golang/sys) from 0.11.0 to 0.12.0.
- [Commits](https://github.com/golang/sys/compare/v0.11.0...v0.12.0)

---
updated-dependencies:
- dependency-name: golang.org/x/sys
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-09-05 11:42:08 -04:00
brad-defined
06b480e177 Fix relay migration (#964)
* Fix for a relay migration issue on rehandshake. On rehandshake, the relay tunnel doesn't migrate to the new hostinfo object correctly, due to an incorrect Nebula IP sent in the CreateRelayRequest message.
* Add a test for this case

---------

Co-authored-by: Nate Brown <nbrown.us@gmail.com>
2023-09-05 09:29:27 -04:00
Nate Brown
076ebc6c6e Simplify getting a hostinfo or starting a handshake with one (#954) 2023-08-21 18:51:45 -05:00
Nate Brown
7edcf620c0 We only need the certificate in ConnectionState (#953) 2023-08-21 14:11:06 -05:00
Nate Brown
5a131b2975 Combine ca, cert, and key handling (#952) 2023-08-14 21:32:40 -05:00
Nate Brown
223cc6e660 Limit how often a busy tunnel can requery the lighthouse (#940)
Co-authored-by: Wade Simmons <wadey@slack-corp.com>
2023-08-08 13:26:41 -05:00
Wade Simmons
5671c6607c dependabot: group together common deps (#950)
Group together deps that are often updated together.

- https://docs.github.com/en/code-security/dependabot/dependabot-version-updates/configuration-options-for-the-dependabot.yml-file#groups
2023-08-08 13:15:42 -04:00
dependabot[bot]
7ecafbe61d Bump golang.org/x/net from 0.13.0 to 0.14.0 (#947)
Bumps [golang.org/x/net](https://github.com/golang/net) from 0.13.0 to 0.14.0.
- [Commits](https://github.com/golang/net/compare/v0.13.0...v0.14.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-08 10:04:46 -05:00
dependabot[bot]
546eb3bfbc Bump golang.org/x/crypto from 0.11.0 to 0.12.0 (#949)
Bumps [golang.org/x/crypto](https://github.com/golang/crypto) from 0.11.0 to 0.12.0.
- [Commits](https://github.com/golang/crypto/compare/v0.11.0...v0.12.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-07 21:28:06 -05:00
dependabot[bot]
7364d99e34 Bump golang.org/x/term from 0.10.0 to 0.11.0 (#946)
Bumps [golang.org/x/term](https://github.com/golang/term) from 0.10.0 to 0.11.0.
- [Commits](https://github.com/golang/term/compare/v0.10.0...v0.11.0)

---
updated-dependencies:
- dependency-name: golang.org/x/term
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-07 21:07:30 -05:00
dependabot[bot]
83b6dc7b16 Bump golang.org/x/net from 0.12.0 to 0.13.0 (#943)
Bumps [golang.org/x/net](https://github.com/golang/net) from 0.12.0 to 0.13.0.
- [Commits](https://github.com/golang/net/compare/v0.12.0...v0.13.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-02 14:28:32 -04:00
Wade Simmons
3d0da7c859 update mergo to 1.0.0 (#941)
The mergo package has moved to a vanity URL. This causes fun issues with
dependabot. Update to the new release:

- https://github.com/darccio/mergo/releases/tag/v1.0.0
- https://github.com/darccio/mergo/compare/v0.3.15...v1.0.0
2023-08-02 14:00:20 -04:00
Caleb Jasik
ed00f5d530 Remove unused config code (last edited 4yrs ago) (#938) 2023-07-31 15:59:20 -05:00
dependabot[bot]
38e56a4858 Bump golang.org/x/net from 0.9.0 to 0.12.0 (#931) 2023-07-27 15:43:16 -05:00
dependabot[bot]
fce93ccb54 Bump google.golang.org/protobuf from 1.30.0 to 1.31.0 (#930) 2023-07-27 15:42:33 -05:00
dependabot[bot]
0d715effbc Bump Apple-Actions/import-codesign-certs from 1 to 2 (#923) 2023-07-27 15:31:36 -05:00
dependabot[bot]
0c003b64f1 Bump golang.org/x/term from 0.8.0 to 0.10.0 (#928) 2023-07-27 14:38:36 -05:00
Nate Brown
14d0106716 Send the lh update worker into its own routine instead of taking over the reload routine (#935) 2023-07-27 14:38:10 -05:00
dependabot[bot]
959b015b3b Bump github.com/sirupsen/logrus from 1.9.0 to 1.9.3 (#933) 2023-07-27 14:36:36 -05:00
Nate Brown
0bffa76b5e Build for openbsd (#812) 2023-07-27 14:27:35 -05:00
c0repwn3r
03e70210a5 Add support for NetBSD (#916) 2023-07-27 13:44:47 -05:00
Nate Brown
9c6592b159 Guard e2e udp and tun channels when closed (#934) 2023-07-26 12:52:14 -05:00
dependabot[bot]
e5af94e27a Bump github.com/prometheus/client_golang from 1.15.1 to 1.16.0 (#927)
Bumps [github.com/prometheus/client_golang](https://github.com/prometheus/client_golang) from 1.15.1 to 1.16.0.
- [Release notes](https://github.com/prometheus/client_golang/releases)
- [Changelog](https://github.com/prometheus/client_golang/blob/main/CHANGELOG.md)
- [Commits](https://github.com/prometheus/client_golang/compare/v1.15.1...v1.16.0)

---
updated-dependencies:
- dependency-name: github.com/prometheus/client_golang
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-24 13:56:09 -04:00
dependabot[bot]
96f51f78ea Bump golang.org/x/sys from 0.8.0 to 0.10.0 (#926)
Bumps [golang.org/x/sys](https://github.com/golang/sys) from 0.8.0 to 0.10.0.
- [Commits](https://github.com/golang/sys/compare/v0.8.0...v0.10.0)

---
updated-dependencies:
- dependency-name: golang.org/x/sys
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-24 13:53:39 -04:00
Nate Brown
a10baeee92 Pull hostmap and pending hostmap apart, remove unused functions (#843) 2023-07-24 12:37:52 -05:00
dependabot[bot]
52c9e360e7 Bump github.com/miekg/dns from 1.1.54 to 1.1.55 (#925)
Bumps [github.com/miekg/dns](https://github.com/miekg/dns) from 1.1.54 to 1.1.55.
- [Changelog](https://github.com/miekg/dns/blob/master/Makefile.release)
- [Commits](https://github.com/miekg/dns/compare/v1.1.54...v1.1.55)

---
updated-dependencies:
- dependency-name: github.com/miekg/dns
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-24 12:52:29 -04:00
dependabot[bot]
8caaff7109 Bump github.com/stretchr/testify from 1.8.2 to 1.8.4 (#924)
Bumps [github.com/stretchr/testify](https://github.com/stretchr/testify) from 1.8.2 to 1.8.4.
- [Release notes](https://github.com/stretchr/testify/releases)
- [Commits](https://github.com/stretchr/testify/compare/v1.8.2...v1.8.4)

---
updated-dependencies:
- dependency-name: github.com/stretchr/testify
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-24 12:51:31 -04:00
Nate Brown
1e3c155896 Attempt to notify systemd of service readiness on linux (#929) 2023-07-24 11:30:18 -05:00
Wade Simmons
f5db03c834 add dependabot config (#922)
This should give us PRs weekly with dependency updates, and also let us
manually check for updates when needed.

- https://docs.github.com/en/code-security/dependabot/dependabot-version-updates/configuration-options-for-the-dependabot.yml-file
2023-07-21 17:21:58 -04:00
Nate Brown
c5ce945852 Update README to include a link to go install docs (#919) 2023-07-20 21:30:38 -05:00
John Maguire
7e380bde7e Document new DNS config options (#879) 2023-07-10 15:19:05 -04:00
Nate Brown
a3e59a38ef Use registered io on Windows when possible (#905) 2023-07-10 12:43:48 -05:00
John Maguire
8ba5d64dbc Add support for naming FreeBSD tun devices (#903) 2023-06-22 12:13:31 -04:00
Nate Brown
3bbf5f4e67 Use an interface for udp conns (#901) 2023-06-14 10:48:52 -05:00
Wade Simmons
928731acfe fix up the release workflow (#891)
actions/create-release is deprecated; just switch to using the `gh` CLI.
This is actually much easier anyway!
2023-06-14 11:45:01 -04:00
Nate Brown
57eb80e9fb v1.7.2 (#887)
Update CHANGELOG for Nebula v1.7.2
2023-06-01 11:05:07 -04:00
brad-defined
96f4dcaab8 Fix reconfig freeze attempting to send to an unbuffered, unread channel (#886)
* Fixes a reconfig freeze where the reconfig attempts to send to an unbuffered channel with no readers.
Only create the stop channel when a DNS goroutine is created, and only send when the channel exists.
Buffer it to size 1 so that the stop message can be sent immediately, even if the goroutine is busy doing DNS lookups.
2023-05-31 16:05:46 -04:00
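A minimal sketch of the buffered stop-channel pattern described above; the names are illustrative, not the actual Nebula symbols:

    package dnsreload

    import "time"

    // dnsShutdown stays nil until the DNS goroutine is started; a buffer of 1
    // lets the reload path queue a stop message even while a lookup is running.
    var dnsShutdown chan struct{}

    func startDNS() {
        dnsShutdown = make(chan struct{}, 1)
        go func() {
            for {
                select {
                case <-dnsShutdown:
                    return
                default:
                    time.Sleep(100 * time.Millisecond) // stand-in for serving lookups
                }
            }
        }()
    }

    func stopDNS() {
        // Only send when the goroutine was actually created; the buffer of 1
        // means this send never blocks the reload path.
        if dnsShutdown != nil {
            dnsShutdown <- struct{}{}
        }
    }
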
Wade Simmons
6d8c5f437c GitHub actions update setup-go (#881)
This does caching for us, so we can remove our manual caching of modules.
2023-05-23 13:24:33 -04:00
John Maguire
165b671e70 v1.7.1 (#878)
Update CHANGELOG for Nebula v1.7.1
2023-05-18 15:39:24 -04:00
brad-defined
6be0bad68a Fix static_host_map DNS lookup Linux issue - put v4 addr into v6 slice (#877) 2023-05-18 14:13:32 -04:00
Wade Simmons
7ae3cd25f8 v1.7.0 (#870)
Update CHANGELOG for Nebula v1.7.0
2023-05-17 11:02:53 -04:00
Wade Simmons
9a7ed57a3f Cache cert verification methods (#871)
* cache cert verification

CheckSignature and Verify are expensive methods, and certificates are
static. Cache the results.

* use atomics

* make sure public key bytes match

* add VerifyWithCache and ResetCache

* cleanup

* use VerifyWithCache

* doc
2023-05-17 10:14:26 -04:00
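The idea is to memoize the expensive check on the certificate itself; a hypothetical sketch of that shape (not the real cert package API):

    package certcache

    import "sync/atomic"

    // cachedCert memoizes an expensive signature check on the certificate itself,
    // since certificates are static for the life of the process.
    type cachedCert struct {
        raw      []byte
        verified atomic.Bool // set once the signature check has passed
    }

    // verify runs the expensive check only the first time; later calls cost a
    // single atomic load.
    func (c *cachedCert) verify(check func([]byte) bool) bool {
        if c.verified.Load() {
            return true
        }
        if check(c.raw) {
            c.verified.Store(true)
            return true
        }
        return false
    }
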
Wade Simmons
eb9f22a8fa fix mismerge of P256 and encrypted private keys (#869)
The private key length is checked in a switch statement below these
lines; these lines should have been removed.
2023-05-09 14:05:55 -04:00
Nate Brown
54a8499c7b Fix go vet (#868) 2023-05-09 11:01:30 -05:00
Wade Simmons
419aaf2e36 issue templates: remove Report Security Vulnerability (#867)
This is redundant as Github automatically adds a section for this near the top.
2023-05-09 11:37:48 -04:00
Ilya Lukyanov
1701087035 Add destination CIDR checking (#507) 2023-05-09 10:37:23 -05:00
Nate Brown
a9cb2e06f4 Add ability to respect the system route table for unsafe route on linux (#839) 2023-05-09 10:36:55 -05:00
Wade Simmons
115b4b70b1 add SECURITY.md (#864)
* add SECURITY.md

Fixes: #699

* add Security mention to New issue template

* cleanup
2023-05-09 11:25:21 -04:00
Wade Simmons
0707caedb4 document P256 and BoringCrypto (#865)
* document P256 and BoringCrypto

Some basic descriptions of P256 and BoringCrypto were added to the bottom of
README.md so that their purpose is not a mystery.

* typo
2023-05-09 11:24:52 -04:00
brad-defined
bd9cc01d62 Dns static lookerupper (#796)
* Support lighthouse DNS names, and regularly resolve the name in a background goroutine to discover DNS updates.
2023-05-09 11:22:08 -04:00
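Roughly the shape of such a background resolver, with hypothetical names:

    package lhdns

    import (
        "net"
        "time"
    )

    // watchLighthouseDNS re-resolves a lighthouse hostname on an interval and
    // hands whatever it finds to the update callback.
    func watchLighthouseDNS(name string, every time.Duration, update func([]net.IP)) {
        t := time.NewTicker(every)
        defer t.Stop()
        for range t.C {
            ips, err := net.LookupIP(name)
            if err != nil {
                continue // keep the last known addresses on a failed lookup
            }
            update(ips)
        }
    }
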
Nate Brown
d1f786419c Try rehandshaking a main hostinfo after releasing hostmap locks (#863) 2023-05-08 14:43:03 -05:00
Wade Simmons
31ed9269d7 add test for GOEXPERIMENT=boringcrypto (#861)
* add test for GOEXPERIMENT=boringcrypto

* fix NebulaCertificate.Sign

Set the PublicKey field in a more compatible way for the tests. The
current method grabs the public key from the certificate, but the
correct thing to do is to derive it from the private key. Either way, it
doesn't really matter, as I don't think the Sign method actually even uses
the PublicKey field.

* assert boring

* cleanup tests
2023-05-08 13:27:01 -04:00
Nate Brown
48eb63899f Have lighthouses ack updates to reduce test packet traffic (#851) 2023-05-05 14:44:03 -05:00
Nate Brown
b26c13336f Fix test on master (#860) 2023-05-04 20:11:33 -05:00
Wade Simmons
e0185c4b01 Support NIST curve P256 (#769)
* Support NIST curve P256

This change adds support for NIST curve P256. When you use `nebula-cert ca`
or `nebula-cert keygen`, you can specify `-curve P256` to enable it. The
curve to use is based on the curve defined in your CA certificate.

Internally, we use ECDSA P256 to sign certificates, and ECDH P256 to do
Noise handshakes. P256 is not supported natively in Noise Protocol, so
we define `DHP256` in the `noiseutil` package to implement support for
it.

You cannot have a mixed network of Curve25519 and P256 certificates,
since the Noise protocol will only attempt to parse using the Curve
defined in the host's certificate.

* verify the curves match in VerifyPrivateKey

This would have failed anyway once we tried to actually use the bytes
in the private key, but it's better to detect the issue up front with
a clearer error message.

* add cert.Curve argument to Sign method

* fix mismerge

* use crypto/ecdh

This is the preferred method for doing ECDH functions now, and also has
a boringcrypto specific codepath.

* remove other ecdh uses of crypto/elliptic

use crypto/ecdh instead
2023-05-04 17:50:23 -04:00
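For reference, the `-curve` flag described above is used like this (names and output paths are illustrative):

    $ nebula-cert ca -name "My Org CA" -curve P256
    $ nebula-cert keygen -curve P256 -out-key host.key -out-pub host.pub
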
Nate Brown
702e1c59bd Always disconnect block listed hosts (#858) 2023-05-04 16:09:42 -05:00
Nate Brown
5fe8f45d05 Clear lighthouse cache for a vpn ip on a dead connection when its the final hostinfo (#857) 2023-05-04 15:42:12 -05:00
Nate Brown
03e4a7f988 Rehandshaking (#838)
Co-authored-by: Brad Higgins <brad@defined.net>
Co-authored-by: Wade Simmons <wadey@slack-corp.com>
2023-05-04 15:16:37 -05:00
Wade Simmons
0b67b19771 add boringcrypto Makefile targets (#856)
This adds a few build targets to compile with `GOEXPERIMENT=boringcrypto`:

- `bin-boringcrypto`
- `release-boringcrypto`

It also adds a field to the initial startup log indicating whether
boringcrypto is enabled in the binary.
2023-05-04 15:42:45 -04:00
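The new targets are invoked like any other build target:

    $ make bin-boringcrypto
    $ make release-boringcrypto
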
Wade Simmons
a0d3b93ae5 update dependencies: 2023-05 (#855)
Updates that end up in the final binaries (go version -m):

    Updated  github.com/imdario/mergo             https://github.com/imdario/mergo/compare/v0.3.13...v0.3.15
    Updated  github.com/miekg/dns                 https://github.com/miekg/dns/compare/v1.1.52...v1.1.54
    Updated  github.com/prometheus/client_golang  https://github.com/prometheus/client_golang/compare/v1.14.0...v1.15.1
    Updated  github.com/prometheus/client_model   https://github.com/prometheus/client_model/compare/v0.3.0...v0.4.0
    Updated  golang.org/x/crypto                  https://github.com/golang/crypto/compare/v0.7.0...v0.8.0
    Updated  golang.org/x/net                     https://github.com/golang/net/compare/v0.8.0...v0.9.0
    Updated  golang.org/x/sys                     https://github.com/golang/sys/compare/v0.6.0...v0.8.0
    Updated  golang.org/x/term                    https://github.com/golang/term/compare/v0.6.0...v0.8.0
    Updated  google.golang.org/protobuf           v1.29.0...v1.30.0
2023-05-04 15:42:15 -04:00
Wade Simmons
58ec1f7a7b build with go1.20 (#854)
* build with go1.20

This has been out for a bit and is up to go1.20.4. We have been using
go1.20 for the Slack builds and have seen no issues.

* need the quotes

* use go install
2023-05-04 11:35:03 -04:00
Nate Brown
397fe5f879 Add ability to skip installing unsafe routes on the os routing table (#831) 2023-04-10 12:32:37 -05:00
brad-defined
9b03053191 update EncReader and EncWriter interface function args to have concrete types (#844)
* Update LightHouseHandlerFunc to remove EncWriter param.
* Move EncWriter to interface
* EncReader, too
2023-04-07 14:28:37 -04:00
Nate Brown
3cb4e0ef57 Allow listen.host to contain names (#825) 2023-04-05 11:29:26 -05:00
Wade Simmons
e0553822b0 Use NewGCMTLS (when using experiment boringcrypto) (#803)
* Use NewGCMTLS (when using experiment boringcrypto)

This change only affects builds built using `GOEXPERIMENT=boringcrypto`.
When built with this experiment, we use the NewGCMTLS() method exposed by
goboring, which validates that the nonce is strictly monotonically increasing.
This is the TLS 1.2 specification for nonce generation (which also matches the
method used by the Noise Protocol)

- https://github.com/golang/go/blob/go1.19/src/crypto/tls/cipher_suites.go#L520-L522
- https://github.com/golang/go/blob/go1.19/src/crypto/internal/boring/aes.go#L235-L237
- https://github.com/golang/go/blob/go1.19/src/crypto/internal/boring/aes.go#L250
- ae223d6138/include/openssl/aead.h (L379-L381)
- ae223d6138/crypto/fipsmodule/cipher/e_aes.c (L1082-L1093)

* need to lock around EncryptDanger in SendVia

* fix link to test vector
2023-04-05 11:08:23 -04:00
Nate Brown
d3fe3efcb0 Fix handshake retry regression (#842) 2023-04-05 10:04:30 -05:00
Nate Brown
fd99ce9a71 Use fewer test packets (#840) 2023-04-04 13:42:24 -05:00
Wade Simmons
6685856b5d emit certificate.expiration_ttl_seconds metric (#782) 2023-04-03 20:18:16 -05:00
John Maguire
a56a97e5c3 Add ability to encrypt CA private key at rest (#386)
Fixes #8.

`nebula-cert ca` now supports encrypting the CA's private key with a
passphrase. Pass `-encrypt` in order to be prompted for a passphrase.
Encryption is performed using AES-256-GCM and Argon2id for KDF. KDF
parameters default to RFC recommendations, but can be overridden via CLI
flags `-argon-memory`, `-argon-parallelism`, and `-argon-iterations`.
2023-04-03 13:59:38 -04:00
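For example (the Argon2id values shown are illustrative overrides; the flags otherwise default to the RFC recommendations):

    $ nebula-cert ca -name "My Org CA" -encrypt
    $ nebula-cert ca -name "My Org CA" -encrypt -argon-memory 2097152 -argon-parallelism 4 -argon-iterations 2
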
Nate Brown
ee8e1348e9 Use connection manager to drive NAT maintenance (#835)
Co-authored-by: brad-defined <77982333+brad-defined@users.noreply.github.com>
2023-03-31 15:45:05 -05:00
Nate Brown
1a6c657451 Normalize logs (#837) 2023-03-30 15:07:31 -05:00
Nate Brown
6b3d42efa5 Use atomic.Pointer for certState (#833) 2023-03-30 13:04:09 -05:00
brad-defined
2801fb2286 Fix relay (#827)
Co-authored-by: Nate Brown <nbrown.us@gmail.com>
2023-03-30 11:09:20 -05:00
Ryan Huber
e28336c5db probes to the lh are not generally useful as recv_error should catch (#408) 2023-03-29 15:09:36 -05:00
Wade Simmons
3e5c7e6860 add punchy.respond_delay config option (#721) 2023-03-29 14:32:35 -05:00
Wade Simmons
8a82e0fb16 ssh: add save-mutex-profile (#737) 2023-03-29 14:30:28 -05:00
Nate Brown
f0ef80500d Remove dead code and re-order transit from pending to main hostmap on stage 2 (#828) 2023-03-17 15:36:24 -05:00
Wade Simmons
61b784d2bb Update dependencies 2023-03 (#824)
List of dependency updates that appear in the final binaries (others are
only used in tests, or don't actually get used by the modules we import):

    Updated	github.com/cespare/xxhash	https://github.com/cespare/xxhash/compare/v2.1.2...v2.2.0
    Updated	github.com/golang/protobuf	https://github.com/golang/protobuf/compare/v1.5.2...v1.5.3
    Updated	github.com/miekg/dns	https://github.com/miekg/dns/compare/v1.1.50...v1.1.52
    Updated	github.com/prometheus/common	https://github.com/prometheus/common/compare/v0.37.0...v0.42.0
    Updated	github.com/prometheus/procfs	https://github.com/prometheus/procfs/compare/v0.8.0...v0.9.0
    Updated	github.com/vishvananda/netns	https://github.com/vishvananda/netns/compare/v0.0.1...v0.0.4
    Updated	golang.org/x/crypto	https://github.com/golang/crypto/compare/v0.3.0...v0.7.0
    Updated	golang.org/x/net	https://github.com/golang/net/compare/v0.2.0...v0.8.0
    Updated	golang.org/x/sys	https://github.com/golang/sys/compare/v0.2.0...v0.6.0
    Updated	golang.org/x/term	https://github.com/golang/term/compare/v0.2.0...v0.6.0
    Updated	golang.zx2c4.com/wintun	415007cec224...0fa3db229ce2
    Updated	google.golang.org/protobuf	v1.28.1...v1.29.0
2023-03-13 15:37:32 -04:00
Caleb Jasik
5da79e2a4c Run make vet in CI (#693) 2023-03-13 15:35:12 -04:00
Wade Simmons
e1af37e46d add calculated_remotes (#759)
* add calculated_remotes

This setting allows us to "guess" what the remote might be for a host
while we wait for the lighthouse response. For networks that were designed
with this in mind, it can help speed up handshake performance, as well as
improve resiliency in the case that all lighthouses are down.

Example:

    lighthouse:
      # ...

      calculated_remotes:
        # For any Nebula IPs in 10.0.10.0/24, this will apply the mask and add
        # the calculated IP as an initial remote (while we wait for the response
        # from the lighthouse). Both CIDRs must have the same mask size.
        # For example, Nebula IP 10.0.10.123 will have a calculated remote of
        # 192.168.1.123

        10.0.10.0/24:
          - mask: 192.168.1.0/24
            port: 4242

* figure out what is up with this test

* add test

* better logic for sending handshakes

Keep track of the last list of hosts we sent handshakes to. Only log
handshake-sent messages if the list has changed.

Remove the test Test_NewHandshakeManagerTrigger because it is faulty and
makes no sense. It relies on the fact that no handshake packets actually
get sent, but with these changes packets would now be sent (as they
should be!)

* use atomic.Pointer

* cleanup to make it clearer

* fix typo in example
2023-03-13 15:09:08 -04:00
Wade Simmons
6e0ae4f9a3 firewall: add option to send REJECT replies (#738)
* firewall: add option to send REJECT replies

This change allows you to configure the firewall to send REJECT packets
when a packet is denied.

    firewall:
      # Action to take when a packet is not allowed by the firewall rules.
      # Can be one of:
      #   `drop` (default): silently drop the packet.
      #   `reject`: send a reject reply.
      #     - For TCP, this will be a RST "Connection Reset" packet.
      #     - For other protocols, this will be an ICMP port unreachable packet.
      outbound_action: drop
      inbound_action: drop

These packets are only sent to established tunnels, and only on the
overlay network (currently IPv4 only).

    $ ping -c1 192.168.100.3
    PING 192.168.100.3 (192.168.100.3) 56(84) bytes of data.
    From 192.168.100.3 icmp_seq=2 Destination Port Unreachable

    --- 192.168.100.3 ping statistics ---
    2 packets transmitted, 0 received, +1 errors, 100% packet loss, time 31ms

    $ nc -nzv 192.168.100.3 22
    (UNKNOWN) [192.168.100.3] 22 (?) : Connection refused

This change also modifies the smoke test to capture tcpdump pcaps from
both the inside and outside to inspect what is going on over the wire.
It also now does TCP and UDP packet tests using the Nmap version of
ncat.

* calculate seq and ack the same way as the kernel

The logic is a bit confusing, so we copy it straight from how the kernel
does iptables `--reject-with tcp-reset`:

- https://github.com/torvalds/linux/blob/v5.19/net/ipv4/netfilter/nf_reject_ipv4.c#L193-L221

* cleanup
2023-03-13 15:08:40 -04:00
Caleb Jasik
f0ac61c1f0 Add nebula.plist based on the homebrew nebula LaunchDaemon plist (#762) 2023-03-13 13:16:46 -05:00
Nate Brown
92cc32f844 Remove handshake race avoidance (#820)
Co-authored-by: Wade Simmons <wadey@slack-corp.com>
2023-03-13 12:35:14 -05:00
Nate Brown
2ea360e5e2 Render hostmaps as mermaid graphs in e2e tests (#815) 2023-02-16 13:23:33 -06:00
Caleb Jasik
469ae78748 Add homebrew install method to readme (#630) 2023-02-13 14:42:58 -06:00
Nate Brown
a06977bbd5 Track connections by local index id instead of vpn ip (#807) 2023-02-13 14:41:05 -06:00
John Maguire
5bd8712946 Immediately forward packets from self to self on FreeBSD (#808) 2023-01-23 15:51:54 -06:00
Tricia
0fc4d8192f log network as String to match the other log event in interface.go that emits network (#811)
Co-authored-by: Tricia Bogen <tbogen@slack-corp.com>
2023-01-23 14:05:35 -05:00
Nate Brown
5278b6f926 Generic timerwheel (#804) 2023-01-18 10:56:42 -06:00
Nate Brown
c177126ed0 Fix possible panic in the timerwheels (#802) 2023-01-11 19:35:19 -06:00
John Maguire
c44da3abee Make DNS queries case insensitive (#793) 2022-12-20 16:59:11 -05:00
John Maguire
b7e73da943 Add note indicating modes have usage text (#794) 2022-12-20 16:53:56 -05:00
John Maguire
ff54bfd9f3 Add nebula-cert.exe and cert files to .gitignore (#722) 2022-12-20 16:52:51 -05:00
John Maguire
b5a85a6eb8 Update example config with IPv6 note for allow lists (#742) 2022-12-20 16:50:02 -05:00
Fabio Alessandro Locati
3ae242fa5f Add nss-lookup to the systemd wants (#791)
* Add nss-lookup to the systemd wants to ensure DNS is running before starting nebula

* Add Ansible & example service scripts

* Fix #797

* Align Ansible scripts and examples

Co-authored-by: John Maguire <contact@johnmaguire.me>
2022-12-19 14:42:07 -05:00
Fabio Alessandro Locati
cb2ec861ea Nebula is now in Fedora official repositories (#719) 2022-12-19 14:40:53 -05:00
John Maguire
a3e6edf9c7 Use config.yml consistently (not config.yaml) (#789) 2022-12-19 11:45:15 -06:00
John Maguire
ad7222509d Add a link to mobile nebula in the new issue form (#790) 2022-12-19 11:28:49 -06:00
Caleb Jasik
12dbbd3dd3 Fix typos found by https://github.com/crate-ci/typos (#735) 2022-12-19 11:28:27 -06:00
John Maguire
ec48298fe8 Update config to show aes cipher instead of chacha (#788) 2022-12-07 11:38:56 -06:00
Ian VanSchooten
77769de1e6 Docs: Update doc links (#751)
* Update documentation links

* Update links
2022-11-29 11:32:43 -05:00
Alexander Averyanov
022ae83a4a Fix typo: my -> may (#758) 2022-11-28 13:59:57 -05:00
Wade Simmons
d4f9500ca5 Update dependencies (2022-11) (#780)
* update dependencies

Update to latest dependencies on Nov 21, 2022.

Here are the diffs for deps that actually end up in the binaries (based
on `go version -m`)

    Updated  github.com/imdario/mergo                          https://github.com/imdario/mergo/compare/v0.3.12...v0.3.13
    Updated  github.com/matttproud/golang_protobuf_extensions  https://github.com/matttproud/golang_protobuf_extensions/compare/v1.0.1...v1.0.4
    Updated  github.com/miekg/dns                              https://github.com/miekg/dns/compare/v1.1.48...v1.1.50
    Updated  github.com/prometheus/client_golang               https://github.com/prometheus/client_golang/compare/v1.12.1...v1.14.0
    Updated  github.com/prometheus/client_model                https://github.com/prometheus/client_model/compare/v0.2.0...v0.3.0
    Updated  github.com/prometheus/common                      https://github.com/prometheus/common/compare/v0.33.0...v0.37.0
    Updated  github.com/prometheus/procfs                      https://github.com/prometheus/procfs/compare/v0.7.3...v0.8.0
    Updated  github.com/sirupsen/logrus                        https://github.com/sirupsen/logrus/compare/v1.8.1...v1.9.0
    Updated  github.com/vishvananda/netns                      https://github.com/vishvananda/netns/compare/50045581ed74...v0.0.1
    Updated  golang.org/x/crypto                               https://github.com/golang/crypto/compare/ae2d96664a29...v0.3.0
    Updated  golang.org/x/net                                  https://github.com/golang/net/compare/749bd193bc2b...v0.2.0
    Updated  golang.org/x/sys                                  https://github.com/golang/sys/compare/289d7a0edf71...v0.2.0
    Updated  golang.org/x/term                                 https://github.com/golang/term/compare/03fcf44c2211...v0.2.0
    Updated  google.golang.org/protobuf                        v1.28.0...v1.28.1

* test that mergo merges like we expect
2022-11-23 10:46:41 -05:00
brad-defined
9a8892c526 Fix 756 SSH command line parsing error to write to user instead of stderr (#757) 2022-11-22 20:55:27 -06:00
brad-defined
813b64ffb1 Remove unused variables from connection manager (#677) 2022-11-15 20:33:09 -06:00
John Maguire
85f5849d0b Fix a hang when shutting down Android (#772) 2022-11-11 10:18:43 -06:00
Wade Simmons
9af242dc47 switch to new sync/atomic helpers in go1.19 (#728)
These new helpers make the code a lot cleaner. I confirmed that the
simple helpers like `atomic.Int64` don't add any extra overhead as they
get inlined by the compiler. `atomic.Pointer` adds an extra method call
as it no longer gets inlined, but we aren't using these on the hot path
so it is probably okay.
2022-10-31 13:37:41 -04:00
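A small before-and-after illustration of the helper types (not Nebula code):

    package counters

    import "sync/atomic"

    // Pre-go1.19 style: a bare int64 plus package-level helper functions.
    var legacyCounter int64

    func bumpLegacy() int64 { return atomic.AddInt64(&legacyCounter, 1) }

    // go1.19 style: the type carries the atomic methods, so the value can no
    // longer be read or written non-atomically by accident.
    var counter atomic.Int64

    func bump() int64 { return counter.Add(1) }
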
Wade Simmons
a800a48857 v1.6.1 (#752)
Update CHANGELOG for Nebula v1.6.1
2022-09-26 13:38:18 -04:00
Nate Brown
4c0ae3df5e Refuse to process double encrypted packets (#741) 2022-09-19 12:47:48 -05:00
Nate Brown
feb3e1317f Add a simple benchmark to e2e tests (#739) 2022-09-01 09:44:58 -05:00
Jon Rafkind
c2259f14a7 explicitly reload config from ssh command (#725) 2022-08-08 12:44:09 -05:00
Nate Brown
b1eeb5f3b8 Support unsafe_routes on mobile again (#729) 2022-08-05 09:58:10 -05:00
Nate Brown
2adf0ca1d1 Use issue templates to improve bug reports (#726) 2022-07-29 12:57:05 -05:00
Nate Brown
92dfccf01a v1.6.0 (#701)
Update CHANGELOG for Nebula v1.6.0

Co-authored-by: Wade Simmons <wsimmons@slack-corp.com>
Co-authored-by: brad-defined <77982333+brad-defined@users.noreply.github.com>
2022-06-30 16:15:18 -04:00
brad-defined
38e495e0d2 Remove EXPERIMENTAL text from routines example config. (#702) 2022-06-30 11:20:41 -04:00
brad-defined
78a0255c91 typeos (#700) 2022-06-29 11:19:20 -04:00
brad-defined
169cdbbd35 Immediately forward packets received on the nebula TUN device from self to self (#501)
* Immediately forward packets received on the nebula TUN device with a destination of our Nebula VPN IP right back out that same TUN device on macOS.
2022-06-27 14:36:10 -04:00
Nate Brown
0d1ee4214a Add relay e2e tests and output some mermaid sequence diagrams (#691) 2022-06-27 12:33:29 -05:00
Wade Simmons
7b9287709c add listen.send_recv_error config option (#670)
By default, Nebula replies to packets it has no tunnel for with a `recv_error` packet. This packet helps speed up re-connection
in the case that Nebula on either side did not shut down cleanly. This response can be abused as a way to discover if Nebula is running
on a host, though. This option lets you configure whether you want to send `recv_error` packets always, never, or only to private network remotes.
Valid values: always, never, private.

This setting is reloadable with SIGHUP.
2022-06-27 12:37:54 -04:00
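As a config snippet, the option looks like this; `always` matches the previous default behavior:

    listen:
      # valid values: always, never, private
      send_recv_error: always
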
Wade Simmons
85ec807b7e reserve NebulaHandshakeDetails fields for multiport (#674)
We are currently testing changes for multiport (related to #497) that
use fields 6 and 7 in the protobuf. Reserve these fields so that when
we eventually open the PR we are backwards compatible with any future
changes.
2022-06-27 12:07:05 -04:00
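In protobuf terms the reservation looks roughly like this (message body trimmed):

    message NebulaHandshakeDetails {
      // ... existing fields 1 through 5 ...
      reserved 6, 7;  // held for the multiport changes tracked in #497
    }
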
John Maguire
a0b280621d Remove firewall.conntrack.max_connections from examples (#684) 2022-06-23 10:29:54 -05:00
Caleb Jasik
527f953c2c Remove x509 config loading code (#685) 2022-06-23 10:27:34 -05:00
brad-defined
1a7c575011 Relay (#678)
Co-authored-by: Wade Simmons <wsimmons@slack-corp.com>
2022-06-21 13:35:23 -05:00
Don Stephan
332fa2b825 fix panic in handleInvalidCertificate (#675)
* fix panic in handleInvalidCertificate

When HandleMonitorTick fires, the hostmap can be nil, which causes a panic when trying to clean up the hostmap in handleInvalidCertificate. This fix simply stops the invalidation from continuing if the hostmap doesn't exist.

* removed conditional for disconnectInvalid in HandleDeletionTick
2022-05-16 13:29:57 -04:00
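The shape of the fix is an early return before touching the hostmap; a rough, hypothetical sketch rather than the real Nebula types:

    package invalidcert

    // hostMap is illustrative only; the real code operates on Nebula's
    // *HostMap and HostInfo types.
    type hostMap struct {
        hosts map[string]struct{}
    }

    func handleInvalidCertificate(hm *hostMap, vpnIP string) {
        if hm == nil {
            // HandleMonitorTick can fire before the hostmap exists; returning
            // early avoids dereferencing a nil hostmap during cleanup.
            return
        }
        delete(hm.hosts, vpnIP) // stand-in for the real tunnel teardown
    }
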
Wade Simmons
45d1d2b6c6 Update dependencies - 2022-04 (#664)
    Updated  github.com/kardianos/service         https://github.com/kardianos/service/compare/v1.2.0...v1.2.1
    Updated  github.com/miekg/dns                 https://github.com/miekg/dns/compare/v1.1.43...v1.1.48
    Updated  github.com/prometheus/client_golang  https://github.com/prometheus/client_golang/compare/v1.11.0...v1.12.1
    Updated  github.com/prometheus/common         https://github.com/prometheus/common/compare/v0.32.1...v0.33.0
    Updated  github.com/stretchr/testify          https://github.com/stretchr/testify/compare/v1.7.0...v1.7.1
    Updated  golang.org/x/crypto                  5770296d90...ae2d96664a
    Updated  golang.org/x/net                     69e39bad7d...749bd193bc
    Updated  golang.org/x/sys                     7861aae155...289d7a0edf
    Updated  golang.zx2c4.com/wireguard/windows   v0.5.1...v0.5.3
    Updated  google.golang.org/protobuf           v1.27.1...v1.28.0
2022-04-18 12:12:25 -04:00
Wade Simmons
3913062c43 build and test with go1.18 (#656)
- https://go.dev/doc/go1.18
2022-04-05 17:08:00 -04:00
Wade Simmons
b38bd36766 fix connection manager check when disconnect_invalid set (#658)
This restores the hostMap.QueryVpnIP block to how it looked before #370
was merged. I'm not sure why the patch from #370 wanted to continue on
if there was no match found in the hostmap, since there isn't anything
to do at that point (the tunnel has already been closed).

This was causing a crash because the handleInvalidCertificate check
expects the hostinfo to be passed in (but it is nil since there was no
hostinfo in the hostmap).

Fixes: #657
2022-04-04 13:38:36 -04:00
Nate Brown
d85e24f49f Allow for self reported ips to the lighthouse (#650) 2022-04-04 12:35:23 -05:00
bitshop
7672c7087a Add to build all windows-arm64 / bin-windows-arm64 build option (#638)
* Add to build all windows-arm64 / bin-winarm64 builds

* update release to build for windows-arm64

* cleanup

Co-authored-by: Wade Simmons <wsimmons@slack-corp.com>
2022-03-18 13:23:10 -04:00
Caleb Jasik
730a5c4a23 Update link to nebula docs (#655) 2022-03-18 11:15:16 -04:00
brad-defined
03498a0cb2 Make nebula advertise its dynamic port to lighthouses (#653) 2022-03-15 18:03:56 -05:00
Nate Brown
312a01dc09 Lighthouse reload support (#649)
Co-authored-by: John Maguire <contact@johnmaguire.me>
2022-03-14 12:35:13 -05:00
Nate Brown
bbe0a032bb Fix windows unsafe_routes regression (#648) 2022-03-09 13:23:29 -06:00
199 changed files with 16679 additions and 6575 deletions

.github/ISSUE_TEMPLATE/bug-report.yml (new file)

@@ -0,0 +1,69 @@
name: "\U0001F41B Bug Report"
description: Report an issue or possible bug
title: "\U0001F41B BUG:"
labels: []
assignees: []
body:
  - type: markdown
    attributes:
      value: |
        ### Thank you for taking the time to file a bug report!
        Please fill out this form as completely as possible.
  - type: input
    id: version
    attributes:
      label: What version of `nebula` are you using? (`nebula -version`)
      placeholder: 0.0.0
    validations:
      required: true
  - type: input
    id: os
    attributes:
      label: What operating system are you using?
      description: iOS and Android specific issues belong in the [mobile_nebula](https://github.com/DefinedNet/mobile_nebula) repo.
      placeholder: Linux, Mac, Windows
    validations:
      required: true
  - type: textarea
    id: description
    attributes:
      label: Describe the Bug
      description: A clear and concise description of what the bug is.
    validations:
      required: true
  - type: textarea
    id: logs
    attributes:
      label: Logs from affected hosts
      description: |
        Please provide logs from ALL affected hosts during the time of the issue. If you do not provide logs we will be unable to assist you!
        [Learn how to find Nebula logs here.](https://nebula.defined.net/docs/guides/viewing-nebula-logs/)
        Improve formatting by using <code>```</code> at the beginning and end of each log block.
      value: |
        ```
        ```
    validations:
      required: true
  - type: textarea
    id: configs
    attributes:
      label: Config files from affected hosts
      description: |
        Provide config files for all affected hosts.
        Improve formatting by using <code>```</code> at the beginning and end of each config file.
      value: |
        ```
        ```
    validations:
      required: true

.github/ISSUE_TEMPLATE/config.yml (new file)

@@ -0,0 +1,13 @@
blank_issues_enabled: true
contact_links:
  - name: 📘 Documentation
    url: https://nebula.defined.net/docs/
    about: Review documentation.
  - name: 💁 Support/Chat
    url: https://join.slack.com/t/nebulaoss/shared_invite/enQtOTA5MDI4NDg3MTg4LTkwY2EwNTI4NzQyMzc0M2ZlODBjNWI3NTY1MzhiOThiMmZlZjVkMTI0NGY4YTMyNjUwMWEyNzNkZTJmYzQxOGU
    about: 'This issue tracker is not for support questions. Join us on Slack for assistance!'
  - name: 📱 Mobile Nebula
    url: https://github.com/definednet/mobile_nebula
    about: 'This issue tracker is not for mobile support. Try the Mobile Nebula repo instead!'

.github/dependabot.yml (new file)

@@ -0,0 +1,22 @@
version: 2
updates:
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "weekly"
  - package-ecosystem: "gomod"
    directory: "/"
    schedule:
      interval: "weekly"
    groups:
      golang-x-dependencies:
        patterns:
          - "golang.org/x/*"
      zx2c4-dependencies:
        patterns:
          - "golang.zx2c4.com/*"
      protobuf-dependencies:
        patterns:
          - "github.com/golang/protobuf"
          - "google.golang.org/protobuf"


@@ -14,31 +14,21 @@ jobs:
     runs-on: ubuntu-latest
     steps:
-      - name: Set up Go 1.17
-        uses: actions/setup-go@v2
-        with:
-          go-version: 1.17
-        id: go
-      - name: Check out code into the Go module directory
-        uses: actions/checkout@v2
-      - uses: actions/cache@v2
+      - uses: actions/checkout@v4
+      - uses: actions/setup-go@v5
         with:
-          path: ~/go/pkg/mod
-          key: ${{ runner.os }}-gofmt1.17-${{ hashFiles('**/go.sum') }}
-          restore-keys: |
-            ${{ runner.os }}-gofmt1.17-
+          go-version: '1.22'
+          check-latest: true
       - name: Install goimports
         run: |
-          go get golang.org/x/tools/cmd/goimports
-          go build golang.org/x/tools/cmd/goimports
+          go install golang.org/x/tools/cmd/goimports@latest
       - name: gofmt
         run: |
-          if [ "$(find . -iname '*.go' | grep -v '\.pb\.go$' | xargs ./goimports -l)" ]
+          if [ "$(find . -iname '*.go' | grep -v '\.pb\.go$' | xargs goimports -l)" ]
           then
-            find . -iname '*.go' | grep -v '\.pb\.go$' | xargs ./goimports -d
+            find . -iname '*.go' | grep -v '\.pb\.go$' | xargs goimports -d
             exit 1
           fi


@@ -7,51 +7,55 @@ name: Create release and upload binaries
 jobs:
   build-linux:
-    name: Build Linux All
+    name: Build Linux/BSD All
     runs-on: ubuntu-latest
     steps:
-      - name: Set up Go 1.17
-        uses: actions/setup-go@v2
-        with:
-          go-version: 1.17
-      - name: Checkout code
-        uses: actions/checkout@v2
+      - uses: actions/checkout@v4
+      - uses: actions/setup-go@v5
+        with:
+          go-version: '1.22'
+          check-latest: true
       - name: Build
         run: |
-          make BUILD_NUMBER="${GITHUB_REF#refs/tags/v}" release-linux release-freebsd
+          make BUILD_NUMBER="${GITHUB_REF#refs/tags/v}" release-linux release-freebsd release-openbsd release-netbsd
           mkdir release
           mv build/*.tar.gz release
       - name: Upload artifacts
-        uses: actions/upload-artifact@v2
+        uses: actions/upload-artifact@v4
         with:
           name: linux-latest
           path: release
   build-windows:
-    name: Build Windows amd64
+    name: Build Windows
     runs-on: windows-latest
     steps:
-      - name: Set up Go 1.17
-        uses: actions/setup-go@v2
-        with:
-          go-version: 1.17
-      - name: Checkout code
-        uses: actions/checkout@v2
+      - uses: actions/checkout@v4
+      - uses: actions/setup-go@v5
+        with:
+          go-version: '1.22'
+          check-latest: true
       - name: Build
         run: |
           echo $Env:GITHUB_REF.Substring(11)
-          go build -trimpath -ldflags "-X main.Build=$($Env:GITHUB_REF.Substring(11))" -o build\nebula.exe ./cmd/nebula-service
-          go build -trimpath -ldflags "-X main.Build=$($Env:GITHUB_REF.Substring(11))" -o build\nebula-cert.exe ./cmd/nebula-cert
+          mkdir build\windows-amd64
+          $Env:GOARCH = "amd64"
+          go build -trimpath -ldflags "-X main.Build=$($Env:GITHUB_REF.Substring(11))" -o build\windows-amd64\nebula.exe ./cmd/nebula-service
+          go build -trimpath -ldflags "-X main.Build=$($Env:GITHUB_REF.Substring(11))" -o build\windows-amd64\nebula-cert.exe ./cmd/nebula-cert
+          mkdir build\windows-arm64
+          $Env:GOARCH = "arm64"
+          go build -trimpath -ldflags "-X main.Build=$($Env:GITHUB_REF.Substring(11))" -o build\windows-arm64\nebula.exe ./cmd/nebula-service
+          go build -trimpath -ldflags "-X main.Build=$($Env:GITHUB_REF.Substring(11))" -o build\windows-arm64\nebula-cert.exe ./cmd/nebula-cert
           mkdir build\dist\windows
           mv dist\windows\wintun build\dist\windows\
       - name: Upload artifacts
-        uses: actions/upload-artifact@v2
+        uses: actions/upload-artifact@v4
         with:
           name: windows-latest
           path: build
@@ -60,19 +64,18 @@ jobs:
     name: Build Universal Darwin
     env:
       HAS_SIGNING_CREDS: ${{ secrets.AC_USERNAME != '' }}
-    runs-on: macos-11
+    runs-on: macos-latest
     steps:
-      - name: Set up Go 1.17
-        uses: actions/setup-go@v2
-        with:
-          go-version: 1.17
-      - name: Checkout code
-        uses: actions/checkout@v2
+      - uses: actions/checkout@v4
+      - uses: actions/setup-go@v5
+        with:
+          go-version: '1.22'
+          check-latest: true
       - name: Import certificates
         if: env.HAS_SIGNING_CREDS == 'true'
-        uses: Apple-Actions/import-codesign-certs@v1
+        uses: Apple-Actions/import-codesign-certs@v3
         with:
           p12-file-base64: ${{ secrets.APPLE_DEVELOPER_CERTIFICATE_P12_BASE64 }}
           p12-password: ${{ secrets.APPLE_DEVELOPER_CERTIFICATE_PASSWORD }}
@@ -101,35 +104,92 @@
           fi
       - name: Upload artifacts
-        uses: actions/upload-artifact@v2
+        uses: actions/upload-artifact@v4
         with:
           name: darwin-latest
           path: ./release/*
+
+  build-docker:
+    name: Create and Upload Docker Images
+    # Technically we only need build-linux to succeed, but if any platforms fail we'll
+    # want to investigate and restart the build
+    needs: [build-linux, build-darwin, build-windows]
+    runs-on: ubuntu-latest
+    env:
+      HAS_DOCKER_CREDS: ${{ vars.DOCKERHUB_USERNAME != '' && secrets.DOCKERHUB_TOKEN != '' }}
+      # XXX It's not possible to write a conditional here, so instead we do it on every step
+      #if: ${{ env.HAS_DOCKER_CREDS == 'true' }}
+    steps:
+      # Be sure to checkout the code before downloading artifacts, or they will
+      # be overwritten
+      - name: Checkout code
+        if: ${{ env.HAS_DOCKER_CREDS == 'true' }}
+        uses: actions/checkout@v4
+      - name: Download artifacts
+        if: ${{ env.HAS_DOCKER_CREDS == 'true' }}
+        uses: actions/download-artifact@v4
+        with:
+          name: linux-latest
+          path: artifacts
+      - name: Login to Docker Hub
+        if: ${{ env.HAS_DOCKER_CREDS == 'true' }}
+        uses: docker/login-action@v3
+        with:
+          username: ${{ vars.DOCKERHUB_USERNAME }}
+          password: ${{ secrets.DOCKERHUB_TOKEN }}
+      - name: Set up Docker Buildx
+        if: ${{ env.HAS_DOCKER_CREDS == 'true' }}
+        uses: docker/setup-buildx-action@v3
+      - name: Build and push images
+        if: ${{ env.HAS_DOCKER_CREDS == 'true' }}
+        env:
+          DOCKER_IMAGE_REPO: ${{ vars.DOCKER_IMAGE_REPO || 'nebulaoss/nebula' }}
+          DOCKER_IMAGE_TAG: ${{ vars.DOCKER_IMAGE_TAG || 'latest' }}
+        run: |
+          mkdir -p build/linux-{amd64,arm64}
+          tar -zxvf artifacts/nebula-linux-amd64.tar.gz -C build/linux-amd64/
+          tar -zxvf artifacts/nebula-linux-arm64.tar.gz -C build/linux-arm64/
+          docker buildx build . --push -f docker/Dockerfile --platform linux/amd64,linux/arm64 --tag "${DOCKER_IMAGE_REPO}:${DOCKER_IMAGE_TAG}" --tag "${DOCKER_IMAGE_REPO}:${GITHUB_REF#refs/tags/v}"
+
   release:
     name: Create and Upload Release
     needs: [build-linux, build-darwin, build-windows]
     runs-on: ubuntu-latest
     steps:
+      - uses: actions/checkout@v4
       - name: Download artifacts
-        uses: actions/download-artifact@v2
+        uses: actions/download-artifact@v4
+        with:
+          path: artifacts
       - name: Zip Windows
         run: |
-          cd windows-latest
+          cd artifacts/windows-latest
+          cp windows-amd64/* .
           zip -r nebula-windows-amd64.zip nebula.exe nebula-cert.exe dist
+          cp windows-arm64/* .
+          zip -r nebula-windows-arm64.zip nebula.exe nebula-cert.exe dist
       - name: Create sha256sum
         run: |
+          cd artifacts
           for dir in linux-latest darwin-latest windows-latest
           do
            (
            cd $dir
            if [ "$dir" = windows-latest ]
            then
-              sha256sum <nebula.exe | sed 's=-$=nebula-windows-amd64.zip/nebula.exe='
-              sha256sum <nebula-cert.exe | sed 's=-$=nebula-windows-amd64.zip/nebula-cert.exe='
+              sha256sum <windows-amd64/nebula.exe | sed 's=-$=nebula-windows-amd64.zip/nebula.exe='
+              sha256sum <windows-amd64/nebula-cert.exe | sed 's=-$=nebula-windows-amd64.zip/nebula-cert.exe='
+              sha256sum <windows-arm64/nebula.exe | sed 's=-$=nebula-windows-arm64.zip/nebula.exe='
+              sha256sum <windows-arm64/nebula-cert.exe | sed 's=-$=nebula-windows-arm64.zip/nebula-cert.exe='
               sha256sum nebula-windows-amd64.zip
+              sha256sum nebula-windows-arm64.zip
            elif [ "$dir" = darwin-latest ]
            then
              sha256sum <nebula-darwin.zip | sed 's=-$=nebula-darwin.zip='
@@ -147,185 +207,12 @@ jobs:
       - name: Create Release
         id: create_release
-        uses: actions/create-release@v1
         env:
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
-        with:
-          tag_name: ${{ github.ref }}
-          release_name: Release ${{ github.ref }}
-          draft: false
-          prerelease: false
+        run: |
+          cd artifacts
+          gh release create \
+            --verify-tag \
+            --title "Release ${{ github.ref_name }}" \
+            "${{ github.ref_name }}" \
+            *-latest/*.zip *-latest/*.tar.gz
-
-      ## SHASUM256.txt
-      ## Upload assets (I wish we could just upload the whole folder at once...
-      ##
- name: Upload SHASUM256.txt
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./SHASUM256.txt
asset_name: SHASUM256.txt
asset_content_type: text/plain
- name: Upload darwin zip
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./darwin-latest/nebula-darwin.zip
asset_name: nebula-darwin.zip
asset_content_type: application/zip
- name: Upload windows-amd64
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./windows-latest/nebula-windows-amd64.zip
asset_name: nebula-windows-amd64.zip
asset_content_type: application/zip
- name: Upload linux-amd64
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./linux-latest/nebula-linux-amd64.tar.gz
asset_name: nebula-linux-amd64.tar.gz
asset_content_type: application/gzip
- name: Upload linux-386
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./linux-latest/nebula-linux-386.tar.gz
asset_name: nebula-linux-386.tar.gz
asset_content_type: application/gzip
- name: Upload linux-ppc64le
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./linux-latest/nebula-linux-ppc64le.tar.gz
asset_name: nebula-linux-ppc64le.tar.gz
asset_content_type: application/gzip
- name: Upload linux-arm-5
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./linux-latest/nebula-linux-arm-5.tar.gz
asset_name: nebula-linux-arm-5.tar.gz
asset_content_type: application/gzip
- name: Upload linux-arm-6
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./linux-latest/nebula-linux-arm-6.tar.gz
asset_name: nebula-linux-arm-6.tar.gz
asset_content_type: application/gzip
- name: Upload linux-arm-7
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./linux-latest/nebula-linux-arm-7.tar.gz
asset_name: nebula-linux-arm-7.tar.gz
asset_content_type: application/gzip
- name: Upload linux-arm64
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./linux-latest/nebula-linux-arm64.tar.gz
asset_name: nebula-linux-arm64.tar.gz
asset_content_type: application/gzip
- name: Upload linux-mips
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./linux-latest/nebula-linux-mips.tar.gz
asset_name: nebula-linux-mips.tar.gz
asset_content_type: application/gzip
- name: Upload linux-mipsle
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./linux-latest/nebula-linux-mipsle.tar.gz
asset_name: nebula-linux-mipsle.tar.gz
asset_content_type: application/gzip
- name: Upload linux-mips64
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./linux-latest/nebula-linux-mips64.tar.gz
asset_name: nebula-linux-mips64.tar.gz
asset_content_type: application/gzip
- name: Upload linux-mips64le
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./linux-latest/nebula-linux-mips64le.tar.gz
asset_name: nebula-linux-mips64le.tar.gz
asset_content_type: application/gzip
- name: Upload linux-mips-softfloat
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./linux-latest/nebula-linux-mips-softfloat.tar.gz
asset_name: nebula-linux-mips-softfloat.tar.gz
asset_content_type: application/gzip
- name: Upload linux-riscv64
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./linux-latest/nebula-linux-riscv64.tar.gz
asset_name: nebula-linux-riscv64.tar.gz
asset_content_type: application/gzip
- name: Upload freebsd-amd64
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./linux-latest/nebula-freebsd-amd64.tar.gz
asset_name: nebula-freebsd-amd64.tar.gz
asset_content_type: application/gzip

.github/workflows/smoke-extra.yml (new file)

@@ -0,0 +1,48 @@
name: smoke-extra
on:
  push:
    branches:
      - master
  pull_request:
    types: [opened, synchronize, labeled, reopened]
    paths:
      - '.github/workflows/smoke**'
      - '**Makefile'
      - '**.go'
      - '**.proto'
      - 'go.mod'
      - 'go.sum'

jobs:
  smoke-extra:
    if: github.ref == 'refs/heads/master' || contains(github.event.pull_request.labels.*.name, 'smoke-test-extra')
    name: Run extra smoke tests
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version-file: 'go.mod'
          check-latest: true
      - name: install vagrant
        run: sudo apt-get update && sudo apt-get install -y vagrant virtualbox
      - name: freebsd-amd64
        run: make smoke-vagrant/freebsd-amd64
      - name: openbsd-amd64
        run: make smoke-vagrant/openbsd-amd64
      - name: netbsd-amd64
        run: make smoke-vagrant/netbsd-amd64
      - name: linux-386
        run: make smoke-vagrant/linux-386
      - name: linux-amd64-ipv6disable
        run: make smoke-vagrant/linux-amd64-ipv6disable
    timeout-minutes: 30


@@ -18,24 +18,15 @@ jobs:
     runs-on: ubuntu-latest
     steps:
-      - name: Set up Go 1.17
-        uses: actions/setup-go@v2
-        with:
-          go-version: 1.17
-        id: go
-      - name: Check out code into the Go module directory
-        uses: actions/checkout@v2
-      - uses: actions/cache@v2
+      - uses: actions/checkout@v4
+      - uses: actions/setup-go@v5
         with:
-          path: ~/go/pkg/mod
-          key: ${{ runner.os }}-go1.17-${{ hashFiles('**/go.sum') }}
-          restore-keys: |
-            ${{ runner.os }}-go1.17-
+          go-version: '1.22'
+          check-latest: true
       - name: build
-        run: make bin-docker
+        run: make bin-docker CGO_ENABLED=1 BUILD_ARGS=-race
       - name: setup docker image
         working-directory: ./.github/workflows/smoke
@@ -45,4 +36,20 @@ jobs:
         working-directory: ./.github/workflows/smoke
         run: ./smoke.sh
+      - name: setup relay docker image
+        working-directory: ./.github/workflows/smoke
+        run: ./build-relay.sh
+      - name: run smoke relay
+        working-directory: ./.github/workflows/smoke
+        run: ./smoke-relay.sh
+      - name: setup docker image for P256
+        working-directory: ./.github/workflows/smoke
+        run: NAME="smoke-p256" CURVE=P256 ./build.sh
+      - name: run smoke-p256
+        working-directory: ./.github/workflows/smoke
+        run: NAME="smoke-p256" ./smoke.sh
       timeout-minutes: 10


@@ -1,4 +1,6 @@
-FROM debian:buster
+FROM ubuntu:jammy
+
+RUN apt-get update && apt-get install -y iputils-ping ncat tcpdump
 ADD ./build /nebula

.github/workflows/smoke/build-relay.sh (new executable file)

@@ -0,0 +1,44 @@
#!/bin/sh
set -e -x
rm -rf ./build
mkdir ./build
(
cd build
cp ../../../../build/linux-amd64/nebula .
cp ../../../../build/linux-amd64/nebula-cert .
HOST="lighthouse1" AM_LIGHTHOUSE=true ../genconfig.sh >lighthouse1.yml <<EOF
relay:
am_relay: true
EOF
export LIGHTHOUSES="192.168.100.1 172.17.0.2:4242"
export REMOTE_ALLOW_LIST='{"172.17.0.4/32": false, "172.17.0.5/32": false}'
HOST="host2" ../genconfig.sh >host2.yml <<EOF
relay:
relays:
- 192.168.100.1
EOF
export REMOTE_ALLOW_LIST='{"172.17.0.3/32": false}'
HOST="host3" ../genconfig.sh >host3.yml
HOST="host4" ../genconfig.sh >host4.yml <<EOF
relay:
use_relays: false
EOF
../../../../nebula-cert ca -name "Smoke Test"
../../../../nebula-cert sign -name "lighthouse1" -groups "lighthouse,lighthouse1" -ip "192.168.100.1/24"
../../../../nebula-cert sign -name "host2" -groups "host,host2" -ip "192.168.100.2/24"
../../../../nebula-cert sign -name "host3" -groups "host,host3" -ip "192.168.100.3/24"
../../../../nebula-cert sign -name "host4" -groups "host,host4" -ip "192.168.100.4/24"
)
docker build -t nebula:smoke-relay .


@@ -11,6 +11,11 @@ mkdir ./build
 cp ../../../../build/linux-amd64/nebula .
 cp ../../../../build/linux-amd64/nebula-cert .
+if [ "$1" ]
+then
+  cp "../../../../build/$1/nebula" "$1-nebula"
+fi
+
 HOST="lighthouse1" \
   AM_LIGHTHOUSE=true \
   ../genconfig.sh >lighthouse1.yml
@@ -29,11 +34,11 @@ mkdir ./build
   OUTBOUND='[{"port": "any", "proto": "icmp", "group": "lighthouse"}]' \
   ../genconfig.sh >host4.yml
-../../../../nebula-cert ca -name "Smoke Test"
+../../../../nebula-cert ca -curve "${CURVE:-25519}" -name "Smoke Test"
 ../../../../nebula-cert sign -name "lighthouse1" -groups "lighthouse,lighthouse1" -ip "192.168.100.1/24"
 ../../../../nebula-cert sign -name "host2" -groups "host,host2" -ip "192.168.100.2/24"
 ../../../../nebula-cert sign -name "host3" -groups "host,host3" -ip "192.168.100.3/24"
 ../../../../nebula-cert sign -name "host4" -groups "host,host4" -ip "192.168.100.4/24"
 )
-sudo docker build -t nebula:smoke .
+docker build -t "nebula:${NAME:-smoke}" .


@@ -40,15 +40,20 @@ pki:
 lighthouse:
   am_lighthouse: ${AM_LIGHTHOUSE:-false}
   hosts: $(lighthouse_hosts)
+  remote_allow_list: ${REMOTE_ALLOW_LIST}
 listen:
   host: 0.0.0.0
   port: ${LISTEN_PORT:-4242}
 tun:
-  dev: ${TUN_DEV:-nebula1}
+  dev: ${TUN_DEV:-tun0}
 firewall:
+  inbound_action: reject
+  outbound_action: reject
   outbound: ${OUTBOUND:-$FIREWALL_ALL}
   inbound: ${INBOUND:-$FIREWALL_ALL}
+
+$(test -t 0 || cat)
 EOF
EOF EOF

.github/workflows/smoke/smoke-relay.sh (new executable file)

@@ -0,0 +1,85 @@
#!/bin/bash
set -e -x
set -o pipefail
mkdir -p logs
cleanup() {
echo
echo " *** cleanup"
echo
set +e
if [ "$(jobs -r)" ]
then
docker kill lighthouse1 host2 host3 host4
fi
}
trap cleanup EXIT
docker run --name lighthouse1 --rm nebula:smoke-relay -config lighthouse1.yml -test
docker run --name host2 --rm nebula:smoke-relay -config host2.yml -test
docker run --name host3 --rm nebula:smoke-relay -config host3.yml -test
docker run --name host4 --rm nebula:smoke-relay -config host4.yml -test
docker run --name lighthouse1 --device /dev/net/tun:/dev/net/tun --cap-add NET_ADMIN --rm nebula:smoke-relay -config lighthouse1.yml 2>&1 | tee logs/lighthouse1 | sed -u 's/^/ [lighthouse1] /' &
sleep 1
docker run --name host2 --device /dev/net/tun:/dev/net/tun --cap-add NET_ADMIN --rm nebula:smoke-relay -config host2.yml 2>&1 | tee logs/host2 | sed -u 's/^/ [host2] /' &
sleep 1
docker run --name host3 --device /dev/net/tun:/dev/net/tun --cap-add NET_ADMIN --rm nebula:smoke-relay -config host3.yml 2>&1 | tee logs/host3 | sed -u 's/^/ [host3] /' &
sleep 1
docker run --name host4 --device /dev/net/tun:/dev/net/tun --cap-add NET_ADMIN --rm nebula:smoke-relay -config host4.yml 2>&1 | tee logs/host4 | sed -u 's/^/ [host4] /' &
sleep 1
set +x
echo
echo " *** Testing ping from lighthouse1"
echo
set -x
docker exec lighthouse1 ping -c1 192.168.100.2
docker exec lighthouse1 ping -c1 192.168.100.3
docker exec lighthouse1 ping -c1 192.168.100.4
set +x
echo
echo " *** Testing ping from host2"
echo
set -x
docker exec host2 ping -c1 192.168.100.1
# Should fail because no relay configured in this direction
! docker exec host2 ping -c1 192.168.100.3 -w5 || exit 1
! docker exec host2 ping -c1 192.168.100.4 -w5 || exit 1
set +x
echo
echo " *** Testing ping from host3"
echo
set -x
docker exec host3 ping -c1 192.168.100.1
docker exec host3 ping -c1 192.168.100.2
docker exec host3 ping -c1 192.168.100.4
set +x
echo
echo " *** Testing ping from host4"
echo
set -x
docker exec host4 ping -c1 192.168.100.1
# Should fail because relays not allowed
! docker exec host4 ping -c1 192.168.100.2 -w5 || exit 1
docker exec host4 ping -c1 192.168.100.3
docker exec host4 sh -c 'kill 1'
docker exec host3 sh -c 'kill 1'
docker exec host2 sh -c 'kill 1'
docker exec lighthouse1 sh -c 'kill 1'
sleep 5
if [ "$(jobs -r)" ]
then
echo "nebula still running after SIGTERM sent" >&2
exit 1
fi

.github/workflows/smoke/smoke-vagrant.sh (new executable file)

@@ -0,0 +1,105 @@
#!/bin/bash
set -e -x
set -o pipefail
export VAGRANT_CWD="$PWD/vagrant-$1"
mkdir -p logs
cleanup() {
echo
echo " *** cleanup"
echo
set +e
if [ "$(jobs -r)" ]
then
docker kill lighthouse1 host2
fi
vagrant destroy -f
}
trap cleanup EXIT
CONTAINER="nebula:${NAME:-smoke}"
docker run --name lighthouse1 --rm "$CONTAINER" -config lighthouse1.yml -test
docker run --name host2 --rm "$CONTAINER" -config host2.yml -test
vagrant up
vagrant ssh -c "cd /nebula && /nebula/$1-nebula -config host3.yml -test"
docker run --name lighthouse1 --device /dev/net/tun:/dev/net/tun --cap-add NET_ADMIN --rm "$CONTAINER" -config lighthouse1.yml 2>&1 | tee logs/lighthouse1 | sed -u 's/^/ [lighthouse1] /' &
sleep 1
docker run --name host2 --device /dev/net/tun:/dev/net/tun --cap-add NET_ADMIN --rm "$CONTAINER" -config host2.yml 2>&1 | tee logs/host2 | sed -u 's/^/ [host2] /' &
sleep 1
vagrant ssh -c "cd /nebula && sudo sh -c 'echo \$\$ >/nebula/pid && exec /nebula/$1-nebula -config host3.yml'" &
sleep 15
# grab tcpdump pcaps for debugging
docker exec lighthouse1 tcpdump -i nebula1 -q -w - -U 2>logs/lighthouse1.inside.log >logs/lighthouse1.inside.pcap &
docker exec lighthouse1 tcpdump -i eth0 -q -w - -U 2>logs/lighthouse1.outside.log >logs/lighthouse1.outside.pcap &
docker exec host2 tcpdump -i nebula1 -q -w - -U 2>logs/host2.inside.log >logs/host2.inside.pcap &
docker exec host2 tcpdump -i eth0 -q -w - -U 2>logs/host2.outside.log >logs/host2.outside.pcap &
# vagrant ssh -c "tcpdump -i nebula1 -q -w - -U" 2>logs/host3.inside.log >logs/host3.inside.pcap &
# vagrant ssh -c "tcpdump -i eth0 -q -w - -U" 2>logs/host3.outside.log >logs/host3.outside.pcap &
docker exec host2 ncat -nklv 0.0.0.0 2000 &
vagrant ssh -c "ncat -nklv 0.0.0.0 2000" &
#docker exec host2 ncat -e '/usr/bin/echo host2' -nkluv 0.0.0.0 3000 &
#vagrant ssh -c "ncat -e '/usr/bin/echo host3' -nkluv 0.0.0.0 3000" &
set +x
echo
echo " *** Testing ping from lighthouse1"
echo
set -x
docker exec lighthouse1 ping -c1 192.168.100.2
docker exec lighthouse1 ping -c1 192.168.100.3
set +x
echo
echo " *** Testing ping from host2"
echo
set -x
docker exec host2 ping -c1 192.168.100.1
# Should fail because not allowed by host3 inbound firewall
! docker exec host2 ping -c1 192.168.100.3 -w5 || exit 1
set +x
echo
echo " *** Testing ncat from host2"
echo
set -x
# Should fail because not allowed by host3 inbound firewall
#! docker exec host2 ncat -nzv -w5 192.168.100.3 2000 || exit 1
#! docker exec host2 ncat -nzuv -w5 192.168.100.3 3000 | grep -q host3 || exit 1
set +x
echo
echo " *** Testing ping from host3"
echo
set -x
vagrant ssh -c "ping -c1 192.168.100.1"
vagrant ssh -c "ping -c1 192.168.100.2"
set +x
echo
echo " *** Testing ncat from host3"
echo
set -x
#vagrant ssh -c "ncat -nzv -w5 192.168.100.2 2000"
#vagrant ssh -c "ncat -nzuv -w5 192.168.100.2 3000" | grep -q host2
vagrant ssh -c "sudo xargs kill </nebula/pid"
docker exec host2 sh -c 'kill 1'
docker exec lighthouse1 sh -c 'kill 1'
sleep 1
if [ "$(jobs -r)" ]
then
echo "nebula still running after SIGTERM sent" >&2
exit 1
fi


@@ -7,63 +7,112 @@ set -o pipefail
mkdir -p logs mkdir -p logs
cleanup() { cleanup() {
echo
echo " *** cleanup"
echo
set +e set +e
if [ "$(jobs -r)" ] if [ "$(jobs -r)" ]
then then
sudo docker kill lighthouse1 host2 host3 host4 docker kill lighthouse1 host2 host3 host4
fi fi
} }
trap cleanup EXIT trap cleanup EXIT
sudo docker run --name lighthouse1 --rm nebula:smoke -config lighthouse1.yml -test CONTAINER="nebula:${NAME:-smoke}"
sudo docker run --name host2 --rm nebula:smoke -config host2.yml -test
sudo docker run --name host3 --rm nebula:smoke -config host3.yml -test
sudo docker run --name host4 --rm nebula:smoke -config host4.yml -test
sudo docker run --name lighthouse1 --device /dev/net/tun:/dev/net/tun --cap-add NET_ADMIN --rm nebula:smoke -config lighthouse1.yml 2>&1 | tee logs/lighthouse1 & docker run --name lighthouse1 --rm "$CONTAINER" -config lighthouse1.yml -test
docker run --name host2 --rm "$CONTAINER" -config host2.yml -test
docker run --name host3 --rm "$CONTAINER" -config host3.yml -test
docker run --name host4 --rm "$CONTAINER" -config host4.yml -test
docker run --name lighthouse1 --device /dev/net/tun:/dev/net/tun --cap-add NET_ADMIN --rm "$CONTAINER" -config lighthouse1.yml 2>&1 | tee logs/lighthouse1 | sed -u 's/^/ [lighthouse1] /' &
sleep 1 sleep 1
sudo docker run --name host2 --device /dev/net/tun:/dev/net/tun --cap-add NET_ADMIN --rm nebula:smoke -config host2.yml 2>&1 | tee logs/host2 & docker run --name host2 --device /dev/net/tun:/dev/net/tun --cap-add NET_ADMIN --rm "$CONTAINER" -config host2.yml 2>&1 | tee logs/host2 | sed -u 's/^/ [host2] /' &
sleep 1 sleep 1
sudo docker run --name host3 --device /dev/net/tun:/dev/net/tun --cap-add NET_ADMIN --rm nebula:smoke -config host3.yml 2>&1 | tee logs/host3 & docker run --name host3 --device /dev/net/tun:/dev/net/tun --cap-add NET_ADMIN --rm "$CONTAINER" -config host3.yml 2>&1 | tee logs/host3 | sed -u 's/^/ [host3] /' &
sleep 1 sleep 1
sudo docker run --name host4 --device /dev/net/tun:/dev/net/tun --cap-add NET_ADMIN --rm nebula:smoke -config host4.yml 2>&1 | tee logs/host4 & docker run --name host4 --device /dev/net/tun:/dev/net/tun --cap-add NET_ADMIN --rm "$CONTAINER" -config host4.yml 2>&1 | tee logs/host4 | sed -u 's/^/ [host4] /' &
sleep 1 sleep 1
# grab tcpdump pcaps for debugging
docker exec lighthouse1 tcpdump -i nebula1 -q -w - -U 2>logs/lighthouse1.inside.log >logs/lighthouse1.inside.pcap &
docker exec lighthouse1 tcpdump -i eth0 -q -w - -U 2>logs/lighthouse1.outside.log >logs/lighthouse1.outside.pcap &
docker exec host2 tcpdump -i nebula1 -q -w - -U 2>logs/host2.inside.log >logs/host2.inside.pcap &
docker exec host2 tcpdump -i eth0 -q -w - -U 2>logs/host2.outside.log >logs/host2.outside.pcap &
docker exec host3 tcpdump -i nebula1 -q -w - -U 2>logs/host3.inside.log >logs/host3.inside.pcap &
docker exec host3 tcpdump -i eth0 -q -w - -U 2>logs/host3.outside.log >logs/host3.outside.pcap &
docker exec host4 tcpdump -i nebula1 -q -w - -U 2>logs/host4.inside.log >logs/host4.inside.pcap &
docker exec host4 tcpdump -i eth0 -q -w - -U 2>logs/host4.outside.log >logs/host4.outside.pcap &
docker exec host2 ncat -nklv 0.0.0.0 2000 &
docker exec host3 ncat -nklv 0.0.0.0 2000 &
docker exec host2 ncat -e '/usr/bin/echo host2' -nkluv 0.0.0.0 3000 &
docker exec host3 ncat -e '/usr/bin/echo host3' -nkluv 0.0.0.0 3000 &
set +x set +x
echo echo
echo " *** Testing ping from lighthouse1" echo " *** Testing ping from lighthouse1"
echo echo
set -x set -x
sudo docker exec lighthouse1 ping -c1 192.168.100.2 docker exec lighthouse1 ping -c1 192.168.100.2
sudo docker exec lighthouse1 ping -c1 192.168.100.3 docker exec lighthouse1 ping -c1 192.168.100.3
set +x set +x
echo echo
echo " *** Testing ping from host2" echo " *** Testing ping from host2"
echo echo
set -x set -x
sudo docker exec host2 ping -c1 192.168.100.1 docker exec host2 ping -c1 192.168.100.1
# Should fail because not allowed by host3 inbound firewall # Should fail because not allowed by host3 inbound firewall
! sudo docker exec host2 ping -c1 192.168.100.3 -w5 || exit 1 ! docker exec host2 ping -c1 192.168.100.3 -w5 || exit 1
set +x
echo
echo " *** Testing ncat from host2"
echo
set -x
# Should fail because not allowed by host3 inbound firewall
! docker exec host2 ncat -nzv -w5 192.168.100.3 2000 || exit 1
! docker exec host2 ncat -nzuv -w5 192.168.100.3 3000 | grep -q host3 || exit 1
set +x set +x
echo echo
echo " *** Testing ping from host3" echo " *** Testing ping from host3"
echo echo
set -x set -x
sudo docker exec host3 ping -c1 192.168.100.1 docker exec host3 ping -c1 192.168.100.1
sudo docker exec host3 ping -c1 192.168.100.2 docker exec host3 ping -c1 192.168.100.2
set +x
echo
echo " *** Testing ncat from host3"
echo
set -x
docker exec host3 ncat -nzv -w5 192.168.100.2 2000
docker exec host3 ncat -nzuv -w5 192.168.100.2 3000 | grep -q host2
set +x set +x
echo echo
echo " *** Testing ping from host4" echo " *** Testing ping from host4"
echo echo
set -x set -x
sudo docker exec host4 ping -c1 192.168.100.1 docker exec host4 ping -c1 192.168.100.1
# Should fail because not allowed by host4 outbound firewall # Should fail because not allowed by host4 outbound firewall
! sudo docker exec host4 ping -c1 192.168.100.2 -w5 || exit 1 ! docker exec host4 ping -c1 192.168.100.2 -w5 || exit 1
! sudo docker exec host4 ping -c1 192.168.100.3 -w5 || exit 1 ! docker exec host4 ping -c1 192.168.100.3 -w5 || exit 1
set +x
echo
echo " *** Testing ncat from host4"
echo
set -x
# Should fail because not allowed by host4 outbound firewall
! docker exec host4 ncat -nzv -w5 192.168.100.2 2000 || exit 1
! docker exec host4 ncat -nzv -w5 192.168.100.3 2000 || exit 1
! docker exec host4 ncat -nzuv -w5 192.168.100.2 3000 | grep -q host2 || exit 1
! docker exec host4 ncat -nzuv -w5 192.168.100.3 3000 | grep -q host3 || exit 1
set +x set +x
echo echo
@@ -71,13 +120,19 @@ echo " *** Testing conntrack"
echo echo
set -x set -x
# host2 can ping host3 now that host3 pinged it first # host2 can ping host3 now that host3 pinged it first
sudo docker exec host2 ping -c1 192.168.100.3 docker exec host2 ping -c1 192.168.100.3
# host4 can ping host2 once conntrack established # host4 can ping host2 once conntrack established
sudo docker exec host2 ping -c1 192.168.100.4 docker exec host2 ping -c1 192.168.100.4
sudo docker exec host4 ping -c1 192.168.100.2 docker exec host4 ping -c1 192.168.100.2
sudo docker exec host4 sh -c 'kill 1' docker exec host4 sh -c 'kill 1'
sudo docker exec host3 sh -c 'kill 1' docker exec host3 sh -c 'kill 1'
sudo docker exec host2 sh -c 'kill 1' docker exec host2 sh -c 'kill 1'
sudo docker exec lighthouse1 sh -c 'kill 1' docker exec lighthouse1 sh -c 'kill 1'
sleep 1 sleep 5
if [ "$(jobs -r)" ]
then
echo "nebula still running after SIGTERM sent" >&2
exit 1
fi


@@ -0,0 +1,7 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
config.vm.box = "generic/freebsd14"
config.vm.synced_folder "../build", "/nebula", type: "rsync"
end


@@ -0,0 +1,7 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
config.vm.box = "ubuntu/xenial32"
config.vm.synced_folder "../build", "/nebula"
end


@@ -0,0 +1,16 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
config.vm.box = "ubuntu/jammy64"
config.vm.synced_folder "../build", "/nebula"
config.vm.provision :shell do |shell|
shell.inline = <<-EOF
sed -i 's/GRUB_CMDLINE_LINUX=""/GRUB_CMDLINE_LINUX="ipv6.disable=1"/' /etc/default/grub
update-grub
EOF
shell.privileged = true
shell.reboot = true
end
end


@@ -0,0 +1,7 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
config.vm.box = "generic/netbsd9"
config.vm.synced_folder "../build", "/nebula", type: "rsync"
end


@@ -0,0 +1,7 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
config.vm.box = "generic/openbsd7"
config.vm.synced_folder "../build", "/nebula", type: "rsync"
end


@@ -18,54 +18,69 @@ jobs:
runs-on: ubuntu-latest runs-on: ubuntu-latest
steps: steps:
- name: Set up Go 1.17 - uses: actions/checkout@v4
uses: actions/setup-go@v2
with:
go-version: 1.17
id: go
- name: Check out code into the Go module directory - uses: actions/setup-go@v5
uses: actions/checkout@v2
- uses: actions/cache@v2
with: with:
path: ~/go/pkg/mod go-version: '1.22'
key: ${{ runner.os }}-go1.17-${{ hashFiles('**/go.sum') }} check-latest: true
restore-keys: |
${{ runner.os }}-go1.17-
- name: Build - name: Build
run: make all run: make all
- name: Vet
run: make vet
- name: Test - name: Test
run: make test run: make test
- name: End 2 end - name: End 2 end
run: make e2evv run: make e2evv
- name: Build test mobile
run: make build-test-mobile
- uses: actions/upload-artifact@v4
with:
name: e2e packet flow linux-latest
path: e2e/mermaid/linux-latest
if-no-files-found: warn
test-linux-boringcrypto:
name: Build and test on linux with boringcrypto
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-go@v5
with:
go-version: '1.22'
check-latest: true
- name: Build
run: make bin-boringcrypto
- name: Test
run: make test-boringcrypto
- name: End 2 end
run: make e2evv GOEXPERIMENT=boringcrypto CGO_ENABLED=1
test: test:
name: Build and test on ${{ matrix.os }} name: Build and test on ${{ matrix.os }}
runs-on: ${{ matrix.os }} runs-on: ${{ matrix.os }}
strategy: strategy:
matrix: matrix:
os: [windows-latest, macos-11] os: [windows-latest, macos-latest]
steps: steps:
- name: Set up Go 1.17 - uses: actions/checkout@v4
uses: actions/setup-go@v2
with:
go-version: 1.17
id: go
- name: Check out code into the Go module directory - uses: actions/setup-go@v5
uses: actions/checkout@v2
- uses: actions/cache@v2
with: with:
path: ~/go/pkg/mod go-version: '1.22'
key: ${{ runner.os }}-go1.17-${{ hashFiles('**/go.sum') }} check-latest: true
restore-keys: |
${{ runner.os }}-go1.17-
- name: Build nebula - name: Build nebula
run: go build ./cmd/nebula run: go build ./cmd/nebula
@@ -73,8 +88,17 @@ jobs:
- name: Build nebula-cert - name: Build nebula-cert
run: go build ./cmd/nebula-cert run: go build ./cmd/nebula-cert
- name: Vet
run: make vet
- name: Test - name: Test
run: go test -v ./... run: make test
- name: End 2 end - name: End 2 end
run: make e2evv run: make e2evv
- uses: actions/upload-artifact@v4
with:
name: e2e packet flow ${{ matrix.os }}
path: e2e/mermaid/${{ matrix.os }}
if-no-files-found: warn

.gitignore vendored

@@ -4,9 +4,14 @@
/nebula-arm6
/nebula-darwin
/nebula.exe
-/cert/*.crt
-/cert/*.key
+/nebula-cert.exe
/coverage.out
/cpu.pprof
/build
/*.tar.gz
+/e2e/mermaid/
+**.crt
+**.key
+**.pem
+!/examples/quickstart-vagrant/ansible/roles/nebula/files/vagrant-test-ca.key
+!/examples/quickstart-vagrant/ansible/roles/nebula/files/vagrant-test-ca.crt


@@ -7,6 +7,364 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
## [Unreleased]
### Changed
- Various dependency updates.
## [1.9.7] - 2025-10-10
### Security
- Fix an issue where Nebula could incorrectly accept and process a packet from an erroneous source IP when the sender's
certificate is configured with unsafe_routes (cert v1/v2) or multiple IPs (cert v2). (#1494)
### Changed
- Disable sending `recv_error` messages when a packet is received outside the allowable counter window. (#1459)
- Improve error messages and remove some unnecessary fatal conditions in the Windows and generic udp listener. (#1543)
## [1.9.6] - 2025-07-15
### Added
- Support dropping inactive tunnels. This is disabled by default in this release but can be enabled with `tunnels.drop_inactive`. See example config for more details. (#1413)
### Fixed
- Fix Darwin freeze due to presence of some Network Extensions (#1426)
- Ensure the same relay tunnel is always used when multiple relay tunnels are present (#1422)
- Fix Windows freeze due to ICMP error handling (#1412)
- Fix relay migration panic (#1403)
## [1.9.5] - 2024-12-05
### Added
- Gracefully ignore v2 certificates. (#1282)
### Fixed
- Fix relays that refuse to re-establish after one of the remote tunnel pairs breaks. (#1277)
## [1.9.4] - 2024-09-09
### Added
- Support UDP dialing with gVisor. (#1181)
### Changed
- Make some Nebula state programmatically available via control object. (#1188)
- Switch internal representation of IPs to netip, to prepare for IPv6 support
in the overlay. (#1173)
- Minor build and cleanup changes. (#1171, #1164, #1162)
- Various dependency updates. (#1195, #1190, #1174, #1168, #1167, #1161, #1147, #1146)
### Fixed
- Fix a bug on big endian hosts, like mips. (#1194)
- Fix a rare panic if a local index collision happens. (#1191)
- Fix integer wraparound in the calculation of handshake timeouts on 32-bit targets. (#1185)
## [1.9.3] - 2024-06-06
### Fixed
- Initialize messageCounter to 2 instead of verifying later. (#1156)
## [1.9.2] - 2024-06-03
### Fixed
- Ensure messageCounter is set before handshake is complete. (#1154)
## [1.9.1] - 2024-05-29
### Fixed
- Fixed a potential deadlock in GetOrHandshake. (#1151)
## [1.9.0] - 2024-05-07
### Deprecated
- This release adds a new setting `default_local_cidr_any` that defaults to
true to match previous behavior, but will default to false in the next
release (1.10). When set to false, `local_cidr` is matched correctly for
firewall rules on hosts acting as unsafe routers, and should be set for any
firewall rules you want to allow unsafe route hosts to access. See the issue
and example config for more details. (#1071, #1099)
### Added
- Nebula now has an official Docker image `nebulaoss/nebula` that is
distroless and contains just the `nebula` and `nebula-cert` binaries. You
can find it here: https://hub.docker.com/r/nebulaoss/nebula (#1037)
- Experimental binaries for `loong64` are now provided. (#1003)
- Added example service script for OpenRC. (#711)
- The SSH daemon now supports inlined host keys. (#1054)
- The SSH daemon now supports certificates with `sshd.trusted_cas`. (#1098)
### Changed
- Config setting `tun.unsafe_routes` is now reloadable. (#1083)
- Small documentation and internal improvements. (#1065, #1067, #1069, #1108,
#1109, #1111, #1135)
- Various dependency updates. (#1139, #1138, #1134, #1133, #1126, #1123, #1110,
#1094, #1092, #1087, #1086, #1085, #1072, #1063, #1059, #1055, #1053, #1047,
#1046, #1034, #1022)
### Removed
- Support for the deprecated `local_range` option has been removed. Please
change to `preferred_ranges` (which is also now reloadable). (#1043)
- We are now building with go1.22, which means that for Windows you need at
least Windows 10 or Windows Server 2016. This is because support for earlier
versions was removed in Go 1.21. See https://go.dev/doc/go1.21#windows (#981)
- Removed vagrant example, as it was unmaintained. (#1129)
- Removed Fedora and Arch nebula.service files, as they are maintained in the
upstream repos. (#1128, #1132)
- Remove the TCP round trip tracking metrics, as they never had correct data
and were an experiment to begin with. (#1114)
### Fixed
- Fixed a potential deadlock introduced in 1.8.1. (#1112)
- Fixed support for Linux when IPv6 has been disabled at the OS level. (#787)
- DNS will return NXDOMAIN now when there are no results. (#845)
- Allow `::` in `lighthouse.dns.host`. (#1115)
- Capitalization of `NotAfter` fixed in DNS TXT response. (#1127)
- Don't log invalid certificates. It is untrusted data and can cause a large
volume of logs. (#1116)
## [1.8.2] - 2024-01-08
### Fixed
- Fix multiple routines when listen.port is zero. This was a regression
introduced in v1.6.0. (#1057)
### Changed
- Small dependency update for Noise. (#1038)
## [1.8.1] - 2023-12-19
### Security
- Update `golang.org/x/crypto`, which includes a fix for CVE-2023-48795. (#1048)
### Fixed
- Fix a deadlock introduced in v1.8.0 that could occur during handshakes. (#1044)
- Fix mobile builds. (#1035)
## [1.8.0] - 2023-12-06
### Deprecated
- The next minor release of Nebula, 1.9.0, will require at least Windows 10 or
Windows Server 2016. This is because support for earlier versions was removed
in Go 1.21. See https://go.dev/doc/go1.21#windows
### Added
- Linux: Notify systemd of service readiness. This should resolve timing issues
with services that depend on Nebula being active. For an example of how to
enable this, see: `examples/service_scripts/nebula.service`. (#929)
- Windows: Use Registered IO (RIO) when possible. Testing on a Windows 11
machine shows ~50x improvement in throughput. (#905)
- NetBSD, OpenBSD: Added rudimentary support. (#916, #812)
- FreeBSD: Add support for naming tun devices. (#903)
### Changed
- `pki.disconnect_invalid` will now default to true. This means that once a
certificate expires, the tunnel will be disconnected. If you use SIGHUP to
reload certificates without restarting Nebula, you should ensure all of your
clients are on 1.7.0 or newer before you enable this feature. (#859)
- Limit how often a busy tunnel can requery the lighthouse. The new config
option `timers.requery_wait_duration` defaults to `60s`. (#940)
- The internal structures for hostmaps were refactored to reduce memory usage
and the potential for subtle bugs. (#843, #938, #953, #954, #955)
- Lots of dependency updates.
### Fixed
- Windows: Retry wintun device creation if it fails the first time. (#985)
- Fix issues with firewall reject packets that could cause panics. (#957)
- Fix relay migration during re-handshakes. (#964)
- Various other refactors and fixes. (#935, #952, #972, #961, #996, #1002,
#987, #1004, #1030, #1032, ...)
## [1.7.2] - 2023-06-01
### Fixed
- Fix a freeze during config reload if the `static_host_map` config was changed. (#886)
## [1.7.1] - 2023-05-18
### Fixed
- Fix IPv4 addresses returned by `static_host_map` DNS lookup queries being
treated as IPv6 addresses. (#877)
## [1.7.0] - 2023-05-17
### Added
- `nebula-cert ca` now supports encrypting the CA's private key with a
passphrase. Pass `-encrypt` in order to be prompted for a passphrase.
Encryption is performed using AES-256-GCM and Argon2id for KDF. KDF
parameters default to RFC recommendations, but can be overridden via CLI
flags `-argon-memory`, `-argon-parallelism`, and `-argon-iterations`. (#386)
- Support for curve P256 and BoringCrypto has been added. See README section
"Curve P256 and BoringCrypto" for more details. (#865, #861, #769, #856, #803)
- New firewall rule `local_cidr`. This could be used to filter destinations
when using `unsafe_routes`. (#507)
- Add `unsafe_route` option `install`. This controls whether the route is
installed in the systems routing table. (#831)
- Add `tun.use_system_route_table` option. Set to true to manage unsafe routes
directly on the system route table with gateway routes instead of in Nebula
configuration files. This is only supported on Linux. (#839)
- The metric `certificate.ttl_seconds` is now exposed via stats. (#782)
- Add `punchy.respond_delay` option. This allows you to change the delay
before attempting punchy.respond. Default is 5 seconds. (#721)
- Added SSH commands to allow the capture of a mutex profile. (#737)
- You can now set `lighthouse.calculated_remotes` to make it possible to do
handshakes without a lighthouse in certain configurations. (#759)
- The firewall can be configured to send REJECT replies instead of the default
DROP behavior. (#738)
- For macOS, an example launchd configuration file is now provided. (#762)
### Changed
- Lighthouses and other `static_host_map` entries that use DNS names will now
be automatically refreshed to detect when the IP address changes. (#796)
- Lighthouses send ACK replies back to clients so that they do not fall into
connection testing as often by clients. (#851, #408)
- Allow the `listen.host` option to contain a hostname. (#825)
- When Nebula switches to a new certificate (such as via SIGHUP), we now
rehandshake with all existing tunnels. This allows firewall groups to be
updated and `pki.disconnect_invalid` to know about the new certificate
expiration time. (#838, #857, #842, #840, #835, #828, #820, #807)
### Fixed
- Always disconnect blocklisted hosts, even if `pki.disconnect_invalid` is
not set. (#858)
- Dependencies updated and go1.20 required. (#780, #824, #855, #854)
- Fix possible race condition with relays. (#827)
- FreeBSD: Fix connection to the localhost's own Nebula IP. (#808)
- Normalize and document some common log field values. (#837, #811)
- Fix crash if you set unlucky values for the firewall timeout configuration
options. (#802)
- Make DNS queries case insensitive. (#793)
- Update example systemd configurations to want `nss-lookup`. (#791)
- Errors with SSH commands now go to the SSH tunnel instead of stderr. (#757)
- Fix a hang when shutting down Android. (#772)
## [1.6.1] - 2022-09-26
### Fixed
- Refuse to process underlay packets received from overlay IPs. This prevents
confusion on hosts that have unsafe routes configured. (#741)
- The ssh `reload` command did not work on Windows, since it relied on sending
a SIGHUP signal internally. This has been fixed. (#725)
- A regression in v1.5.2 that broke unsafe routes on Mobile clients has been
fixed. (#729)
## [1.6.0] - 2022-06-30
### Added
- Experimental: nebula clients can be configured to act as relays for other nebula clients.
Primarily useful when stubborn NATs make a direct tunnel impossible. (#678)
- Configuration option to report manually specified `ip:port`s to lighthouses. (#650)
- Windows arm64 build. (#638)
- `punchy` and most `lighthouse` config options now support hot reloading. (#649)
### Changed
- Build against go 1.18. (#656)
- Promoted `routines` config from experimental to supported feature. (#702)
- Dependencies updated. (#664)
### Fixed
- Packets destined for the same host that sent it will be returned on MacOS.
This matches the default behavior of other operating systems. (#501)
- `unsafe_route` configuration will no longer crash on Windows. (#648)
- A few panics that were introduced in 1.5.x. (#657, #658, #675)
### Security
- You can set `listen.send_recv_error` to control the conditions in which
`recv_error` messages are sent. Sending these messages can expose the fact
that Nebula is running on a host, but it speeds up re-handshaking. (#670)
### Removed
- `x509` config stanza support has been removed. (#685)
## [1.5.2] - 2021-12-14
### Added
@@ -345,7 +703,23 @@ created.)
- Initial public release.
[Unreleased]: https://github.com/slackhq/nebula/compare/v1.9.7...HEAD
[1.9.7]: https://github.com/slackhq/nebula/releases/tag/v1.9.7
[1.9.6]: https://github.com/slackhq/nebula/releases/tag/v1.9.6
[1.9.5]: https://github.com/slackhq/nebula/releases/tag/v1.9.5
[1.9.4]: https://github.com/slackhq/nebula/releases/tag/v1.9.4
[1.9.3]: https://github.com/slackhq/nebula/releases/tag/v1.9.3
[1.9.2]: https://github.com/slackhq/nebula/releases/tag/v1.9.2
[1.9.1]: https://github.com/slackhq/nebula/releases/tag/v1.9.1
[1.9.0]: https://github.com/slackhq/nebula/releases/tag/v1.9.0
[1.8.2]: https://github.com/slackhq/nebula/releases/tag/v1.8.2
[1.8.1]: https://github.com/slackhq/nebula/releases/tag/v1.8.1
[1.8.0]: https://github.com/slackhq/nebula/releases/tag/v1.8.0
[1.7.2]: https://github.com/slackhq/nebula/releases/tag/v1.7.2
[1.7.1]: https://github.com/slackhq/nebula/releases/tag/v1.7.1
[1.7.0]: https://github.com/slackhq/nebula/releases/tag/v1.7.0
[1.6.1]: https://github.com/slackhq/nebula/releases/tag/v1.6.1
[1.6.0]: https://github.com/slackhq/nebula/releases/tag/v1.6.0
[1.5.2]: https://github.com/slackhq/nebula/releases/tag/v1.5.2
[1.5.0]: https://github.com/slackhq/nebula/releases/tag/v1.5.0
[1.4.0]: https://github.com/slackhq/nebula/releases/tag/v1.4.0

LOGGING.md Normal file

@@ -0,0 +1,37 @@
### Logging conventions
A log message (the string/format passed to `Info`, `Error`, `Debug`, etc., as well as their `Sprintf` counterparts) should
be a descriptive message about the event and may contain specific identifying characteristics. Regardless of the
level of detail in the message, identifying characteristics should always be included via `WithField`, `WithFields`, or
`WithError`.
If an error is being logged, use `l.WithError(err)` so that there is better discoverability about the event as well
as the specific error condition.
#### Common fields
- `cert` - a `cert.NebulaCertificate` object; do not `.String()` this manually, `logrus` will marshal objects properly
for the formatter it is using.
- `fingerprint` - a single `NebulaCertificate` hex encoded fingerprint
- `fingerprints` - an array of `NebulaCertificate` hex encoded fingerprints
- `fwPacket` - a FirewallPacket object
- `handshake` - an object containing:
- `stage` - the current stage counter
- `style` - noise handshake style `ix_psk0`, `xx`, etc
- `header` - a nebula header object
- `udpAddr` - a `net.UDPAddr` object
- `udpIp` - a UDP IP address
- `vpnIp` - the VPN IP of the host (remote or local)
- `relay` - the vpnIp of the relay host that is or should be handling the relay packet
- `relayFrom` - The vpnIp of the initial sender of the relayed packet
- `relayTo` - The vpnIp of the final destination of a relayed packet
#### Example:
```
l.WithError(err).
WithField("vpnIp", IntIp(hostinfo.hostId)).
WithField("udpAddr", addr).
WithField("handshake", m{"stage": 1, "style": "ix"}).
Info("Invalid certificate from host")
```
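A minimal, self-contained sketch of the same convention using plain `logrus` and `WithFields`; the `vpnIp`, `udpAddr`, and `handshake` values below are hypothetical stand-ins rather than Nebula's internal types:
```
package main

import (
    "errors"

    "github.com/sirupsen/logrus"
)

func main() {
    l := logrus.New()

    // Hypothetical error standing in for a real handshake failure.
    err := errors.New("certificate expired")

    // Identifying characteristics go in fields, not in the message string,
    // so the formatter can marshal them consistently.
    l.WithError(err).
        WithFields(logrus.Fields{
            "vpnIp":     "192.168.100.2",
            "udpAddr":   "10.1.2.3:4242",
            "handshake": map[string]any{"stage": 1, "style": "ix_psk0"},
        }).
        Info("Invalid certificate from host")
}
```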


@@ -1,20 +1,14 @@
GOMINVERSION = 1.17
NEBULA_CMD_PATH = "./cmd/nebula" NEBULA_CMD_PATH = "./cmd/nebula"
GO111MODULE = on
export GO111MODULE
CGO_ENABLED = 0 CGO_ENABLED = 0
export CGO_ENABLED export CGO_ENABLED
# Set up OS specific bits # Set up OS specific bits
ifeq ($(OS),Windows_NT) ifeq ($(OS),Windows_NT)
#TODO: we should be able to ditch awk as well
GOVERSION := $(shell go version | awk "{print substr($$3, 3)}")
GOISMIN := $(shell IF "$(GOVERSION)" GEQ "$(GOMINVERSION)" ECHO 1)
NEBULA_CMD_SUFFIX = .exe NEBULA_CMD_SUFFIX = .exe
NULL_FILE = nul NULL_FILE = nul
# RIO on windows does pointer stuff that makes go vet angry
VET_FLAGS = -unsafeptr=false
else else
GOVERSION := $(shell go version | awk '{print substr($$3, 3)}')
GOISMIN := $(shell expr "$(GOVERSION)" ">=" "$(GOMINVERSION)")
NEBULA_CMD_SUFFIX = NEBULA_CMD_SUFFIX =
NULL_FILE = /dev/null NULL_FILE = /dev/null
endif endif
@@ -28,6 +22,9 @@ ifndef BUILD_NUMBER
endif endif
endif endif
DOCKER_IMAGE_REPO ?= nebulaoss/nebula
DOCKER_IMAGE_TAG ?= latest
LDFLAGS = -X main.Build=$(BUILD_NUMBER) LDFLAGS = -X main.Build=$(BUILD_NUMBER)
ALL_LINUX = linux-amd64 \ ALL_LINUX = linux-amd64 \
@@ -42,13 +39,26 @@ ALL_LINUX = linux-amd64 \
linux-mips64 \ linux-mips64 \
linux-mips64le \ linux-mips64le \
linux-mips-softfloat \ linux-mips-softfloat \
linux-riscv64 linux-riscv64 \
linux-loong64
ALL_FREEBSD = freebsd-amd64 \
freebsd-arm64
ALL_OPENBSD = openbsd-amd64 \
openbsd-arm64
ALL_NETBSD = netbsd-amd64 \
netbsd-arm64
ALL = $(ALL_LINUX) \ ALL = $(ALL_LINUX) \
$(ALL_FREEBSD) \
$(ALL_OPENBSD) \
$(ALL_NETBSD) \
darwin-amd64 \ darwin-amd64 \
darwin-arm64 \ darwin-arm64 \
freebsd-amd64 \ windows-amd64 \
windows-amd64 windows-arm64
e2e: e2e:
$(TEST_ENV) go test -tags=e2e_testing -count=1 $(TEST_FLAGS) ./e2e $(TEST_ENV) go test -tags=e2e_testing -count=1 $(TEST_FLAGS) ./e2e
@@ -65,25 +75,47 @@ e2evvv: e2ev
e2evvvv: TEST_ENV += TEST_LOGS=3 e2evvvv: TEST_ENV += TEST_LOGS=3
e2evvvv: e2ev e2evvvv: e2ev
e2e-bench: TEST_FLAGS = -bench=. -benchmem -run=^$
e2e-bench: e2e
DOCKER_BIN = build/linux-amd64/nebula build/linux-amd64/nebula-cert
all: $(ALL:%=build/%/nebula) $(ALL:%=build/%/nebula-cert) all: $(ALL:%=build/%/nebula) $(ALL:%=build/%/nebula-cert)
docker: docker/linux-$(shell go env GOARCH)
release: $(ALL:%=build/nebula-%.tar.gz) release: $(ALL:%=build/nebula-%.tar.gz)
release-linux: $(ALL_LINUX:%=build/nebula-%.tar.gz) release-linux: $(ALL_LINUX:%=build/nebula-%.tar.gz)
release-freebsd: build/nebula-freebsd-amd64.tar.gz release-freebsd: $(ALL_FREEBSD:%=build/nebula-%.tar.gz)
release-openbsd: $(ALL_OPENBSD:%=build/nebula-%.tar.gz)
release-netbsd: $(ALL_NETBSD:%=build/nebula-%.tar.gz)
release-boringcrypto: build/nebula-linux-$(shell go env GOARCH)-boringcrypto.tar.gz
BUILD_ARGS = -trimpath BUILD_ARGS = -trimpath
bin-windows: build/windows-amd64/nebula.exe build/windows-amd64/nebula-cert.exe bin-windows: build/windows-amd64/nebula.exe build/windows-amd64/nebula-cert.exe
mv $? . mv $? .
bin-windows-arm64: build/windows-arm64/nebula.exe build/windows-arm64/nebula-cert.exe
mv $? .
bin-darwin: build/darwin-amd64/nebula build/darwin-amd64/nebula-cert bin-darwin: build/darwin-amd64/nebula build/darwin-amd64/nebula-cert
mv $? . mv $? .
bin-freebsd: build/freebsd-amd64/nebula build/freebsd-amd64/nebula-cert bin-freebsd: build/freebsd-amd64/nebula build/freebsd-amd64/nebula-cert
mv $? . mv $? .
bin-freebsd-arm64: build/freebsd-arm64/nebula build/freebsd-arm64/nebula-cert
mv $? .
bin-boringcrypto: build/linux-$(shell go env GOARCH)-boringcrypto/nebula build/linux-$(shell go env GOARCH)-boringcrypto/nebula-cert
mv $? .
bin: bin:
go build $(BUILD_ARGS) -ldflags "$(LDFLAGS)" -o ./nebula${NEBULA_CMD_SUFFIX} ${NEBULA_CMD_PATH} go build $(BUILD_ARGS) -ldflags "$(LDFLAGS)" -o ./nebula${NEBULA_CMD_SUFFIX} ${NEBULA_CMD_PATH}
go build $(BUILD_ARGS) -ldflags "$(LDFLAGS)" -o ./nebula-cert${NEBULA_CMD_SUFFIX} ./cmd/nebula-cert go build $(BUILD_ARGS) -ldflags "$(LDFLAGS)" -o ./nebula-cert${NEBULA_CMD_SUFFIX} ./cmd/nebula-cert
@@ -98,6 +130,12 @@ build/linux-mips-%: GOENV += GOMIPS=$(word 3, $(subst -, ,$*))
# Build an extra small binary for mips-softfloat # Build an extra small binary for mips-softfloat
build/linux-mips-softfloat/%: LDFLAGS += -s -w build/linux-mips-softfloat/%: LDFLAGS += -s -w
# boringcrypto
build/linux-amd64-boringcrypto/%: GOENV += GOEXPERIMENT=boringcrypto CGO_ENABLED=1
build/linux-arm64-boringcrypto/%: GOENV += GOEXPERIMENT=boringcrypto CGO_ENABLED=1
build/linux-amd64-boringcrypto/%: LDFLAGS += -checklinkname=0
build/linux-arm64-boringcrypto/%: LDFLAGS += -checklinkname=0
build/%/nebula: .FORCE build/%/nebula: .FORCE
GOOS=$(firstword $(subst -, , $*)) \ GOOS=$(firstword $(subst -, , $*)) \
GOARCH=$(word 2, $(subst -, ,$*)) $(GOENV) \ GOARCH=$(word 2, $(subst -, ,$*)) $(GOENV) \
@@ -120,16 +158,28 @@ build/nebula-%.tar.gz: build/%/nebula build/%/nebula-cert
build/nebula-%.zip: build/%/nebula.exe build/%/nebula-cert.exe build/nebula-%.zip: build/%/nebula.exe build/%/nebula-cert.exe
cd build/$* && zip ../nebula-$*.zip nebula.exe nebula-cert.exe cd build/$* && zip ../nebula-$*.zip nebula.exe nebula-cert.exe
docker/%: build/%/nebula build/%/nebula-cert
docker build . $(DOCKER_BUILD_ARGS) -f docker/Dockerfile --platform "$(subst -,/,$*)" --tag "${DOCKER_IMAGE_REPO}:${DOCKER_IMAGE_TAG}" --tag "${DOCKER_IMAGE_REPO}:$(BUILD_NUMBER)"
vet: vet:
go vet -v ./... go vet $(VET_FLAGS) -v ./...
test: test:
go test -v ./... go test -v ./...
test-boringcrypto:
GOEXPERIMENT=boringcrypto CGO_ENABLED=1 go test -ldflags "-checklinkname=0" -v ./...
test-cov-html: test-cov-html:
go test -coverprofile=coverage.out go test -coverprofile=coverage.out
go tool cover -html=coverage.out go tool cover -html=coverage.out
build-test-mobile:
GOARCH=amd64 GOOS=ios go build $(shell go list ./... | grep -v '/cmd/\|/examples/')
GOARCH=arm64 GOOS=ios go build $(shell go list ./... | grep -v '/cmd/\|/examples/')
GOARCH=amd64 GOOS=android go build $(shell go list ./... | grep -v '/cmd/\|/examples/')
GOARCH=arm64 GOOS=android go build $(shell go list ./... | grep -v '/cmd/\|/examples/')
bench: bench:
go test -bench=. go test -bench=.
@@ -163,10 +213,21 @@ bin-docker: bin build/linux-amd64/nebula build/linux-amd64/nebula-cert
smoke-docker: bin-docker smoke-docker: bin-docker
cd .github/workflows/smoke/ && ./build.sh cd .github/workflows/smoke/ && ./build.sh
cd .github/workflows/smoke/ && ./smoke.sh cd .github/workflows/smoke/ && ./smoke.sh
cd .github/workflows/smoke/ && NAME="smoke-p256" CURVE="P256" ./build.sh
cd .github/workflows/smoke/ && NAME="smoke-p256" ./smoke.sh
smoke-relay-docker: bin-docker
cd .github/workflows/smoke/ && ./build-relay.sh
cd .github/workflows/smoke/ && ./smoke-relay.sh
smoke-docker-race: BUILD_ARGS = -race smoke-docker-race: BUILD_ARGS = -race
smoke-docker-race: CGO_ENABLED = 1
smoke-docker-race: smoke-docker smoke-docker-race: smoke-docker
smoke-vagrant/%: bin-docker build/%/nebula
cd .github/workflows/smoke/ && ./build.sh $*
cd .github/workflows/smoke/ && ./smoke-vagrant.sh $*
.FORCE: .FORCE:
.PHONY: e2e e2ev e2evv e2evvv e2evvvv test test-cov-html bench bench-cpu bench-cpu-long bin proto release service smoke-docker smoke-docker-race .PHONY: bench bench-cpu bench-cpu-long bin build-test-mobile e2e e2ev e2evv e2evvv e2evvvv proto release service smoke-docker smoke-docker-race test test-cov-html smoke-vagrant/%
.DEFAULT_GOAL := bin .DEFAULT_GOAL := bin


@@ -8,7 +8,7 @@ and tunneling, and each of those individual pieces existed before Nebula in vari
What makes Nebula different to existing offerings is that it brings all of these ideas together,
resulting in a sum that is greater than its individual parts.
-Further documentation can be found [here](https://www.defined.net/nebula/introduction/).
+Further documentation can be found [here](https://nebula.defined.net/docs/).
You can read more about Nebula [here](https://medium.com/p/884110a5579).
@@ -27,16 +27,36 @@ Check the [releases](https://github.com/slackhq/nebula/releases/latest) page for
#### Distribution Packages
-- [Arch Linux](https://archlinux.org/packages/community/x86_64/nebula/)
+- [Arch Linux](https://archlinux.org/packages/extra/x86_64/nebula/)
```
$ sudo pacman -S nebula
```
-- [Fedora Linux](https://copr.fedorainfracloud.org/coprs/jdoss/nebula/)
+- [Fedora Linux](https://src.fedoraproject.org/rpms/nebula)
```
-$ sudo dnf copr enable jdoss/nebula
$ sudo dnf install nebula
```
+- [Debian Linux](https://packages.debian.org/source/stable/nebula)
+```
+$ sudo apt install nebula
+```
+- [Alpine Linux](https://pkgs.alpinelinux.org/packages?name=nebula)
+```
+$ sudo apk add nebula
+```
+- [macOS Homebrew](https://github.com/Homebrew/homebrew-core/blob/HEAD/Formula/nebula.rb)
+```
+$ brew install nebula
+```
+- [Docker](https://hub.docker.com/r/nebulaoss/nebula)
+```
+$ docker pull nebulaoss/nebula
+```
#### Mobile
- [iOS](https://apps.apple.com/us/app/mobile-nebula/id1509587936?itsct=apps_box&amp;itscg=30200)
@@ -93,18 +113,18 @@ Download a copy of the nebula [example configuration](https://github.com/slackhq
#### 6. Copy nebula credentials, configuration, and binaries to each host
-For each host, copy the nebula binary to the host, along with `config.yaml` from step 5, and the files `ca.crt`, `{host}.crt`, and `{host}.key` from step 4.
+For each host, copy the nebula binary to the host, along with `config.yml` from step 5, and the files `ca.crt`, `{host}.crt`, and `{host}.key` from step 4.
**DO NOT COPY `ca.key` TO INDIVIDUAL NODES.**
#### 7. Run nebula on each host
```
-./nebula -config /path/to/config.yaml
+./nebula -config /path/to/config.yml
```
## Building Nebula from source
-Download go and clone this repo. Change to the nebula directory.
+Make sure you have [go](https://go.dev/doc/install) installed and clone this repo. Change to the nebula directory.
To build nebula for all platforms:
`make all`
@@ -114,6 +134,17 @@ To build nebula for a specific platform (ex, Windows):
See the [Makefile](Makefile) for more details on build targets
+## Curve P256 and BoringCrypto
+The default curve used for cryptographic handshakes and signatures is Curve25519. This is the recommended setting for most users. If your deployment has certain compliance requirements, you have the option of creating your CA using `nebula-cert ca -curve P256` to use NIST Curve P256. The CA will then sign certificates using ECDSA P256, and any hosts using these certificates will use P256 for ECDH handshakes.
+In addition, Nebula can be built using the [BoringCrypto GOEXPERIMENT](https://github.com/golang/go/blob/go1.20/src/crypto/internal/boring/README.md) by running either of the following make targets:
+    make bin-boringcrypto
+    make release-boringcrypto
+This is not the recommended default deployment, but may be useful based on your compliance requirements.
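As a rough runtime sanity check, a BoringCrypto build can confirm that the module is actually enabled. This is a minimal sketch (not part of the Nebula codebase) that mirrors the `boring.go` file added later in this compare; it only compiles under the `boringcrypto` build tag, e.g. `GOEXPERIMENT=boringcrypto CGO_ENABLED=1 go build`:
```
//go:build boringcrypto

package main

import (
    "crypto/boring"
    "fmt"
    "os"
)

func main() {
    // boring.Enabled reports whether the BoringCrypto module is in use.
    if !boring.Enabled() {
        fmt.Fprintln(os.Stderr, "expected a BoringCrypto build, but it is not enabled")
        os.Exit(1)
    }
    fmt.Println("BoringCrypto is active")
}
```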
## Credits
Nebula was created at Slack Technologies, Inc by Nate Brown and Ryan Huber, with contributions from Oliver Fross, Alan Lam, Wade Simmons, and Lining Wang.

SECURITY.md Normal file

@@ -0,0 +1,12 @@
Security Policy
===============
Reporting a Vulnerability
-------------------------
If you believe you have found a security vulnerability with Nebula, please let
us know right away. We will investigate all reports and do our best to quickly
fix valid issues.
You can submit your report on [HackerOne](https://hackerone.com/slack) and our
security team will respond as soon as possible.


@@ -2,17 +2,16 @@ package nebula
import ( import (
"fmt" "fmt"
"net" "net/netip"
"regexp" "regexp"
"github.com/slackhq/nebula/cidr" "github.com/gaissmai/bart"
"github.com/slackhq/nebula/config" "github.com/slackhq/nebula/config"
"github.com/slackhq/nebula/iputil"
) )
type AllowList struct { type AllowList struct {
// The values of this cidrTree are `bool`, signifying allow/deny // The values of this cidrTree are `bool`, signifying allow/deny
cidrTree *cidr.Tree6 cidrTree *bart.Table[bool]
} }
type RemoteAllowList struct { type RemoteAllowList struct {
@@ -20,7 +19,7 @@ type RemoteAllowList struct {
// Inside Range Specific, keys of this tree are inside CIDRs and values // Inside Range Specific, keys of this tree are inside CIDRs and values
// are *AllowList // are *AllowList
insideAllowLists *cidr.Tree6 insideAllowLists *bart.Table[*AllowList]
} }
type LocalAllowList struct { type LocalAllowList struct {
@@ -88,7 +87,7 @@ func newAllowList(k string, raw interface{}, handleKey func(key string, value in
return nil, fmt.Errorf("config `%s` has invalid type: %T", k, raw) return nil, fmt.Errorf("config `%s` has invalid type: %T", k, raw)
} }
tree := cidr.NewTree6() tree := new(bart.Table[bool])
// Keep track of the rules we have added for both ipv4 and ipv6 // Keep track of the rules we have added for both ipv4 and ipv6
type allowListRules struct { type allowListRules struct {
@@ -122,18 +121,20 @@ func newAllowList(k string, raw interface{}, handleKey func(key string, value in
return nil, fmt.Errorf("config `%s` has invalid value (type %T): %v", k, rawValue, rawValue) return nil, fmt.Errorf("config `%s` has invalid value (type %T): %v", k, rawValue, rawValue)
} }
_, ipNet, err := net.ParseCIDR(rawCIDR) ipNet, err := netip.ParsePrefix(rawCIDR)
if err != nil { if err != nil {
return nil, fmt.Errorf("config `%s` has invalid CIDR: %s", k, rawCIDR) return nil, fmt.Errorf("config `%s` has invalid CIDR: %s. %w", k, rawCIDR, err)
} }
// TODO: should we error on duplicate CIDRs in the config? ipNet = netip.PrefixFrom(ipNet.Addr().Unmap(), ipNet.Bits())
tree.AddCIDR(ipNet, value)
maskBits, maskSize := ipNet.Mask.Size() // TODO: should we error on duplicate CIDRs in the config?
tree.Insert(ipNet, value)
maskBits := ipNet.Bits()
var rules *allowListRules var rules *allowListRules
if maskSize == 32 { if ipNet.Addr().Is4() {
rules = &rules4 rules = &rules4
} else { } else {
rules = &rules6 rules = &rules6
@@ -156,8 +157,7 @@ func newAllowList(k string, raw interface{}, handleKey func(key string, value in
if !rules4.defaultSet { if !rules4.defaultSet {
if rules4.allValuesMatch { if rules4.allValuesMatch {
_, zeroCIDR, _ := net.ParseCIDR("0.0.0.0/0") tree.Insert(netip.PrefixFrom(netip.IPv4Unspecified(), 0), !rules4.allValues)
tree.AddCIDR(zeroCIDR, !rules4.allValues)
} else { } else {
return nil, fmt.Errorf("config `%s` contains both true and false rules, but no default set for 0.0.0.0/0", k) return nil, fmt.Errorf("config `%s` contains both true and false rules, but no default set for 0.0.0.0/0", k)
} }
@@ -165,8 +165,7 @@ func newAllowList(k string, raw interface{}, handleKey func(key string, value in
if !rules6.defaultSet { if !rules6.defaultSet {
if rules6.allValuesMatch { if rules6.allValuesMatch {
_, zeroCIDR, _ := net.ParseCIDR("::/0") tree.Insert(netip.PrefixFrom(netip.IPv6Unspecified(), 0), !rules6.allValues)
tree.AddCIDR(zeroCIDR, !rules6.allValues)
} else { } else {
return nil, fmt.Errorf("config `%s` contains both true and false rules, but no default set for ::/0", k) return nil, fmt.Errorf("config `%s` contains both true and false rules, but no default set for ::/0", k)
} }
@@ -218,13 +217,13 @@ func getAllowListInterfaces(k string, v interface{}) ([]AllowListNameRule, error
return nameRules, nil return nameRules, nil
} }
func getRemoteAllowRanges(c *config.C, k string) (*cidr.Tree6, error) { func getRemoteAllowRanges(c *config.C, k string) (*bart.Table[*AllowList], error) {
value := c.Get(k) value := c.Get(k)
if value == nil { if value == nil {
return nil, nil return nil, nil
} }
remoteAllowRanges := cidr.NewTree6() remoteAllowRanges := new(bart.Table[*AllowList])
rawMap, ok := value.(map[interface{}]interface{}) rawMap, ok := value.(map[interface{}]interface{})
if !ok { if !ok {
@@ -241,60 +240,27 @@ func getRemoteAllowRanges(c *config.C, k string) (*cidr.Tree6, error) {
return nil, err return nil, err
} }
_, ipNet, err := net.ParseCIDR(rawCIDR) ipNet, err := netip.ParsePrefix(rawCIDR)
if err != nil { if err != nil {
return nil, fmt.Errorf("config `%s` has invalid CIDR: %s", k, rawCIDR) return nil, fmt.Errorf("config `%s` has invalid CIDR: %s. %w", k, rawCIDR, err)
} }
remoteAllowRanges.AddCIDR(ipNet, allowList) remoteAllowRanges.Insert(netip.PrefixFrom(ipNet.Addr().Unmap(), ipNet.Bits()), allowList)
} }
return remoteAllowRanges, nil return remoteAllowRanges, nil
} }
func (al *AllowList) Allow(ip net.IP) bool { func (al *AllowList) Allow(ip netip.Addr) bool {
if al == nil { if al == nil {
return true return true
} }
result := al.cidrTree.MostSpecificContains(ip) result, _ := al.cidrTree.Lookup(ip)
switch v := result.(type) { return result
case bool:
return v
default:
panic(fmt.Errorf("invalid state, allowlist returned: %T %v", result, result))
}
} }
func (al *AllowList) AllowIpV4(ip iputil.VpnIp) bool { func (al *LocalAllowList) Allow(ip netip.Addr) bool {
if al == nil {
return true
}
result := al.cidrTree.MostSpecificContainsIpV4(ip)
switch v := result.(type) {
case bool:
return v
default:
panic(fmt.Errorf("invalid state, allowlist returned: %T %v", result, result))
}
}
func (al *AllowList) AllowIpV6(hi, lo uint64) bool {
if al == nil {
return true
}
result := al.cidrTree.MostSpecificContainsIpV6(hi, lo)
switch v := result.(type) {
case bool:
return v
default:
panic(fmt.Errorf("invalid state, allowlist returned: %T %v", result, result))
}
}
func (al *LocalAllowList) Allow(ip net.IP) bool {
if al == nil { if al == nil {
return true return true
} }
@@ -316,45 +282,25 @@ func (al *LocalAllowList) AllowName(name string) bool {
return !al.nameRules[0].Allow return !al.nameRules[0].Allow
} }
func (al *RemoteAllowList) AllowUnknownVpnIp(ip net.IP) bool { func (al *RemoteAllowList) AllowUnknownVpnIp(ip netip.Addr) bool {
if al == nil { if al == nil {
return true return true
} }
return al.AllowList.Allow(ip) return al.AllowList.Allow(ip)
} }
func (al *RemoteAllowList) Allow(vpnIp iputil.VpnIp, ip net.IP) bool { func (al *RemoteAllowList) Allow(vpnIp netip.Addr, ip netip.Addr) bool {
if !al.getInsideAllowList(vpnIp).Allow(ip) { if !al.getInsideAllowList(vpnIp).Allow(ip) {
return false return false
} }
return al.AllowList.Allow(ip) return al.AllowList.Allow(ip)
} }
func (al *RemoteAllowList) AllowIpV4(vpnIp iputil.VpnIp, ip iputil.VpnIp) bool { func (al *RemoteAllowList) getInsideAllowList(vpnIp netip.Addr) *AllowList {
if al == nil {
return true
}
if !al.getInsideAllowList(vpnIp).AllowIpV4(ip) {
return false
}
return al.AllowList.AllowIpV4(ip)
}
func (al *RemoteAllowList) AllowIpV6(vpnIp iputil.VpnIp, hi, lo uint64) bool {
if al == nil {
return true
}
if !al.getInsideAllowList(vpnIp).AllowIpV6(hi, lo) {
return false
}
return al.AllowList.AllowIpV6(hi, lo)
}
func (al *RemoteAllowList) getInsideAllowList(vpnIp iputil.VpnIp) *AllowList {
if al.insideAllowLists != nil { if al.insideAllowLists != nil {
inside := al.insideAllowLists.MostSpecificContainsIpV4(vpnIp) inside, ok := al.insideAllowLists.Lookup(vpnIp)
if inside != nil { if ok {
return inside.(*AllowList) return inside
} }
} }
return nil return nil


@@ -1,11 +1,11 @@
package nebula package nebula
import ( import (
"net" "net/netip"
"regexp" "regexp"
"testing" "testing"
"github.com/slackhq/nebula/cidr" "github.com/gaissmai/bart"
"github.com/slackhq/nebula/config" "github.com/slackhq/nebula/config"
"github.com/slackhq/nebula/test" "github.com/slackhq/nebula/test"
"github.com/stretchr/testify/assert" "github.com/stretchr/testify/assert"
@@ -18,7 +18,7 @@ func TestNewAllowListFromConfig(t *testing.T) {
"192.168.0.0": true, "192.168.0.0": true,
} }
r, err := newAllowListFromConfig(c, "allowlist", nil) r, err := newAllowListFromConfig(c, "allowlist", nil)
assert.EqualError(t, err, "config `allowlist` has invalid CIDR: 192.168.0.0") assert.EqualError(t, err, "config `allowlist` has invalid CIDR: 192.168.0.0. netip.ParsePrefix(\"192.168.0.0\"): no '/'")
assert.Nil(t, r) assert.Nil(t, r)
c.Settings["allowlist"] = map[interface{}]interface{}{ c.Settings["allowlist"] = map[interface{}]interface{}{
@@ -98,26 +98,26 @@ func TestNewAllowListFromConfig(t *testing.T) {
} }
func TestAllowList_Allow(t *testing.T) { func TestAllowList_Allow(t *testing.T) {
assert.Equal(t, true, ((*AllowList)(nil)).Allow(net.ParseIP("1.1.1.1"))) assert.Equal(t, true, ((*AllowList)(nil)).Allow(netip.MustParseAddr("1.1.1.1")))
tree := cidr.NewTree6() tree := new(bart.Table[bool])
tree.AddCIDR(cidr.Parse("0.0.0.0/0"), true) tree.Insert(netip.MustParsePrefix("0.0.0.0/0"), true)
tree.AddCIDR(cidr.Parse("10.0.0.0/8"), false) tree.Insert(netip.MustParsePrefix("10.0.0.0/8"), false)
tree.AddCIDR(cidr.Parse("10.42.42.42/32"), true) tree.Insert(netip.MustParsePrefix("10.42.42.42/32"), true)
tree.AddCIDR(cidr.Parse("10.42.0.0/16"), true) tree.Insert(netip.MustParsePrefix("10.42.0.0/16"), true)
tree.AddCIDR(cidr.Parse("10.42.42.0/24"), true) tree.Insert(netip.MustParsePrefix("10.42.42.0/24"), true)
tree.AddCIDR(cidr.Parse("10.42.42.0/24"), false) tree.Insert(netip.MustParsePrefix("10.42.42.0/24"), false)
tree.AddCIDR(cidr.Parse("::1/128"), true) tree.Insert(netip.MustParsePrefix("::1/128"), true)
tree.AddCIDR(cidr.Parse("::2/128"), false) tree.Insert(netip.MustParsePrefix("::2/128"), false)
al := &AllowList{cidrTree: tree} al := &AllowList{cidrTree: tree}
assert.Equal(t, true, al.Allow(net.ParseIP("1.1.1.1"))) assert.Equal(t, true, al.Allow(netip.MustParseAddr("1.1.1.1")))
assert.Equal(t, false, al.Allow(net.ParseIP("10.0.0.4"))) assert.Equal(t, false, al.Allow(netip.MustParseAddr("10.0.0.4")))
assert.Equal(t, true, al.Allow(net.ParseIP("10.42.42.42"))) assert.Equal(t, true, al.Allow(netip.MustParseAddr("10.42.42.42")))
assert.Equal(t, false, al.Allow(net.ParseIP("10.42.42.41"))) assert.Equal(t, false, al.Allow(netip.MustParseAddr("10.42.42.41")))
assert.Equal(t, true, al.Allow(net.ParseIP("10.42.0.1"))) assert.Equal(t, true, al.Allow(netip.MustParseAddr("10.42.0.1")))
assert.Equal(t, true, al.Allow(net.ParseIP("::1"))) assert.Equal(t, true, al.Allow(netip.MustParseAddr("::1")))
assert.Equal(t, false, al.Allow(net.ParseIP("::2"))) assert.Equal(t, false, al.Allow(netip.MustParseAddr("::2")))
} }
func TestLocalAllowList_AllowName(t *testing.T) { func TestLocalAllowList_AllowName(t *testing.T) {

boring.go Normal file

@@ -0,0 +1,8 @@
//go:build boringcrypto
// +build boringcrypto
package nebula
import "crypto/boring"
var boringEnabled = boring.Enabled

calculated_remote.go Normal file

@@ -0,0 +1,159 @@
package nebula
import (
"encoding/binary"
"fmt"
"math"
"net"
"net/netip"
"strconv"
"github.com/gaissmai/bart"
"github.com/slackhq/nebula/config"
)
// This allows us to "guess" what the remote might be for a host while we wait
// for the lighthouse response. See "lighthouse.calculated_remotes" in the
// example config file.
type calculatedRemote struct {
ipNet netip.Prefix
mask netip.Prefix
port uint32
}
func newCalculatedRemote(maskCidr netip.Prefix, port int) (*calculatedRemote, error) {
masked := maskCidr.Masked()
if port < 0 || port > math.MaxUint16 {
return nil, fmt.Errorf("invalid port: %d", port)
}
return &calculatedRemote{
ipNet: maskCidr,
mask: masked,
port: uint32(port),
}, nil
}
func (c *calculatedRemote) String() string {
return fmt.Sprintf("CalculatedRemote(mask=%v port=%d)", c.ipNet, c.port)
}
func (c *calculatedRemote) Apply(ip netip.Addr) *Ip4AndPort {
// Combine the masked bytes of the "mask" IP with the unmasked bytes
// of the overlay IP
if c.ipNet.Addr().Is4() {
return c.apply4(ip)
}
return c.apply6(ip)
}
func (c *calculatedRemote) apply4(ip netip.Addr) *Ip4AndPort {
//TODO: IPV6-WORK this can be less crappy
maskb := net.CIDRMask(c.mask.Bits(), c.mask.Addr().BitLen())
mask := binary.BigEndian.Uint32(maskb[:])
b := c.mask.Addr().As4()
maskIp := binary.BigEndian.Uint32(b[:])
b = ip.As4()
intIp := binary.BigEndian.Uint32(b[:])
return &Ip4AndPort{(maskIp & mask) | (intIp & ^mask), c.port}
}
func (c *calculatedRemote) apply6(ip netip.Addr) *Ip4AndPort {
//TODO: IPV6-WORK
panic("Can not calculate ipv6 remote addresses")
}
func NewCalculatedRemotesFromConfig(c *config.C, k string) (*bart.Table[[]*calculatedRemote], error) {
value := c.Get(k)
if value == nil {
return nil, nil
}
calculatedRemotes := new(bart.Table[[]*calculatedRemote])
rawMap, ok := value.(map[any]any)
if !ok {
return nil, fmt.Errorf("config `%s` has invalid type: %T", k, value)
}
for rawKey, rawValue := range rawMap {
rawCIDR, ok := rawKey.(string)
if !ok {
return nil, fmt.Errorf("config `%s` has invalid key (type %T): %v", k, rawKey, rawKey)
}
cidr, err := netip.ParsePrefix(rawCIDR)
if err != nil {
return nil, fmt.Errorf("config `%s` has invalid CIDR: %s", k, rawCIDR)
}
//TODO: IPV6-WORK this does not verify that rawValue contains the same bits as cidr here
entry, err := newCalculatedRemotesListFromConfig(rawValue)
if err != nil {
return nil, fmt.Errorf("config '%s.%s': %w", k, rawCIDR, err)
}
calculatedRemotes.Insert(cidr, entry)
}
return calculatedRemotes, nil
}
func newCalculatedRemotesListFromConfig(raw any) ([]*calculatedRemote, error) {
rawList, ok := raw.([]any)
if !ok {
return nil, fmt.Errorf("calculated_remotes entry has invalid type: %T", raw)
}
var l []*calculatedRemote
for _, e := range rawList {
c, err := newCalculatedRemotesEntryFromConfig(e)
if err != nil {
return nil, fmt.Errorf("calculated_remotes entry: %w", err)
}
l = append(l, c)
}
return l, nil
}
func newCalculatedRemotesEntryFromConfig(raw any) (*calculatedRemote, error) {
rawMap, ok := raw.(map[any]any)
if !ok {
return nil, fmt.Errorf("invalid type: %T", raw)
}
rawValue := rawMap["mask"]
if rawValue == nil {
return nil, fmt.Errorf("missing mask: %v", rawMap)
}
rawMask, ok := rawValue.(string)
if !ok {
return nil, fmt.Errorf("invalid mask (type %T): %v", rawValue, rawValue)
}
maskCidr, err := netip.ParsePrefix(rawMask)
if err != nil {
return nil, fmt.Errorf("invalid mask: %s", rawMask)
}
var port int
rawValue = rawMap["port"]
if rawValue == nil {
return nil, fmt.Errorf("missing port: %v", rawMap)
}
switch v := rawValue.(type) {
case int:
port = v
case string:
port, err = strconv.Atoi(v)
if err != nil {
return nil, fmt.Errorf("invalid port: %s: %w", v, err)
}
default:
return nil, fmt.Errorf("invalid port (type %T): %v", rawValue, rawValue)
}
return newCalculatedRemote(maskCidr, port)
}
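A standalone restatement of the `apply4` arithmetic above, using a hypothetical `calculate` helper: the network bits come from the configured mask prefix and the host bits from the overlay IP, so overlay 10.0.10.182 with mask 192.168.1.0/24 is guessed to be reachable at 192.168.1.182 (the same case exercised by the test that follows):
```
package main

import (
    "encoding/binary"
    "fmt"
    "net"
    "net/netip"
)

// calculate keeps the network bits of maskCidr and the host bits of overlay.
func calculate(maskCidr netip.Prefix, overlay netip.Addr) netip.Addr {
    mask := binary.BigEndian.Uint32(net.CIDRMask(maskCidr.Bits(), 32))

    b := maskCidr.Masked().Addr().As4()
    maskIp := binary.BigEndian.Uint32(b[:])

    b = overlay.As4()
    intIp := binary.BigEndian.Uint32(b[:])

    var out [4]byte
    binary.BigEndian.PutUint32(out[:], (maskIp&mask)|(intIp & ^mask))
    return netip.AddrFrom4(out)
}

func main() {
    mask := netip.MustParsePrefix("192.168.1.0/24")
    fmt.Println(calculate(mask, netip.MustParseAddr("10.0.10.182"))) // 192.168.1.182
}
```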

calculated_remote_test.go Normal file

@@ -0,0 +1,25 @@
package nebula
import (
"net/netip"
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestCalculatedRemoteApply(t *testing.T) {
ipNet, err := netip.ParsePrefix("192.168.1.0/24")
require.NoError(t, err)
c, err := newCalculatedRemote(ipNet, 4242)
require.NoError(t, err)
input, err := netip.ParseAddr("10.0.10.182")
assert.NoError(t, err)
expected, err := netip.ParseAddr("192.168.1.182")
assert.NoError(t, err)
assert.Equal(t, NewIp4AndPortFromNetIP(expected, 4242), c.Apply(input))
}

cert.go

@@ -1,173 +0,0 @@
package nebula
import (
"errors"
"fmt"
"io/ioutil"
"strings"
"time"
"github.com/sirupsen/logrus"
"github.com/slackhq/nebula/cert"
"github.com/slackhq/nebula/config"
)
type CertState struct {
certificate *cert.NebulaCertificate
rawCertificate []byte
rawCertificateNoKey []byte
publicKey []byte
privateKey []byte
}
func NewCertState(certificate *cert.NebulaCertificate, privateKey []byte) (*CertState, error) {
// Marshal the certificate to ensure it is valid
rawCertificate, err := certificate.Marshal()
if err != nil {
return nil, fmt.Errorf("invalid nebula certificate on interface: %s", err)
}
publicKey := certificate.Details.PublicKey
cs := &CertState{
rawCertificate: rawCertificate,
certificate: certificate, // PublicKey has been set to nil above
privateKey: privateKey,
publicKey: publicKey,
}
cs.certificate.Details.PublicKey = nil
rawCertNoKey, err := cs.certificate.Marshal()
if err != nil {
return nil, fmt.Errorf("error marshalling certificate no key: %s", err)
}
cs.rawCertificateNoKey = rawCertNoKey
// put public key back
cs.certificate.Details.PublicKey = cs.publicKey
return cs, nil
}
func NewCertStateFromConfig(c *config.C) (*CertState, error) {
var pemPrivateKey []byte
var err error
privPathOrPEM := c.GetString("pki.key", "")
if privPathOrPEM == "" {
// Support backwards compat with the old x509
//TODO: remove after this is rolled out everywhere - NB 2018/02/23
privPathOrPEM = c.GetString("x509.key", "")
}
if privPathOrPEM == "" {
return nil, errors.New("no pki.key path or PEM data provided")
}
if strings.Contains(privPathOrPEM, "-----BEGIN") {
pemPrivateKey = []byte(privPathOrPEM)
privPathOrPEM = "<inline>"
} else {
pemPrivateKey, err = ioutil.ReadFile(privPathOrPEM)
if err != nil {
return nil, fmt.Errorf("unable to read pki.key file %s: %s", privPathOrPEM, err)
}
}
rawKey, _, err := cert.UnmarshalX25519PrivateKey(pemPrivateKey)
if err != nil {
return nil, fmt.Errorf("error while unmarshaling pki.key %s: %s", privPathOrPEM, err)
}
var rawCert []byte
pubPathOrPEM := c.GetString("pki.cert", "")
if pubPathOrPEM == "" {
// Support backwards compat with the old x509
//TODO: remove after this is rolled out everywhere - NB 2018/02/23
pubPathOrPEM = c.GetString("x509.cert", "")
}
if pubPathOrPEM == "" {
return nil, errors.New("no pki.cert path or PEM data provided")
}
if strings.Contains(pubPathOrPEM, "-----BEGIN") {
rawCert = []byte(pubPathOrPEM)
pubPathOrPEM = "<inline>"
} else {
rawCert, err = ioutil.ReadFile(pubPathOrPEM)
if err != nil {
return nil, fmt.Errorf("unable to read pki.cert file %s: %s", pubPathOrPEM, err)
}
}
nebulaCert, _, err := cert.UnmarshalNebulaCertificateFromPEM(rawCert)
if err != nil {
return nil, fmt.Errorf("error while unmarshaling pki.cert %s: %s", pubPathOrPEM, err)
}
if nebulaCert.Expired(time.Now()) {
return nil, fmt.Errorf("nebula certificate for this host is expired")
}
if len(nebulaCert.Details.Ips) == 0 {
return nil, fmt.Errorf("no IPs encoded in certificate")
}
if err = nebulaCert.VerifyPrivateKey(rawKey); err != nil {
return nil, fmt.Errorf("private key is not a pair with public key in nebula cert")
}
return NewCertState(nebulaCert, rawKey)
}
func loadCAFromConfig(l *logrus.Logger, c *config.C) (*cert.NebulaCAPool, error) {
var rawCA []byte
var err error
caPathOrPEM := c.GetString("pki.ca", "")
if caPathOrPEM == "" {
return nil, errors.New("no pki.ca path or PEM data provided")
}
if strings.Contains(caPathOrPEM, "-----BEGIN") {
rawCA = []byte(caPathOrPEM)
} else {
rawCA, err = ioutil.ReadFile(caPathOrPEM)
if err != nil {
return nil, fmt.Errorf("unable to read pki.ca file %s: %s", caPathOrPEM, err)
}
}
CAs, err := cert.NewCAPoolFromBytes(rawCA)
if errors.Is(err, cert.ErrExpired) {
var expired int
for _, cert := range CAs.CAs {
if cert.Expired(time.Now()) {
expired++
l.WithField("cert", cert).Warn("expired certificate present in CA pool")
}
}
if expired >= len(CAs.CAs) {
return nil, errors.New("no valid CA certificates present")
}
} else if err != nil {
return nil, fmt.Errorf("error while adding CA certificate to CA trust store: %s", err)
}
for _, fp := range c.GetStringSlice("pki.blocklist", []string{}) {
l.WithField("fingerprint", fp).Infof("Blocklisting cert")
CAs.BlocklistFingerprint(fp)
}
// Support deprecated config for at least one minor release to allow for migrations
//TODO: remove in 2022 or later
for _, fp := range c.GetStringSlice("pki.blacklist", []string{}) {
l.WithField("fingerprint", fp).Infof("Blocklisting cert")
l.Warn("pki.blacklist is deprecated and will not be supported in a future release. Please migrate your config to use pki.blocklist")
CAs.BlocklistFingerprint(fp)
}
return CAs, nil
}


@@ -24,31 +24,39 @@ func NewCAPool() *NebulaCAPool {
// NewCAPoolFromBytes will create a new CA pool from the provided // NewCAPoolFromBytes will create a new CA pool from the provided
// input bytes, which must be a PEM-encoded set of nebula certificates. // input bytes, which must be a PEM-encoded set of nebula certificates.
// If the pool contains unsupported certificates, they will generate warnings
// in the []error return arg.
// If the pool contains any expired certificates, an ErrExpired will be // If the pool contains any expired certificates, an ErrExpired will be
// returned along with the pool. The caller must handle any such errors. // returned along with the pool. The caller must handle any such errors.
func NewCAPoolFromBytes(caPEMs []byte) (*NebulaCAPool, error) { func NewCAPoolFromBytes(caPEMs []byte) (*NebulaCAPool, []error, error) {
pool := NewCAPool() pool := NewCAPool()
var err error var err error
var expired bool var warnings []error
good := 0
for { for {
caPEMs, err = pool.AddCACertificate(caPEMs) caPEMs, err = pool.AddCACertificate(caPEMs)
if errors.Is(err, ErrExpired) { if errors.Is(err, ErrExpired) {
expired = true warnings = append(warnings, err)
err = nil } else if errors.Is(err, ErrInvalidPEMCertificateUnsupported) {
} warnings = append(warnings, err)
if err != nil { } else if err != nil {
return nil, err return nil, warnings, err
} else {
// Only count a certificate as good if no errors were present

good++
} }
if len(caPEMs) == 0 || strings.TrimSpace(string(caPEMs)) == "" { if len(caPEMs) == 0 || strings.TrimSpace(string(caPEMs)) == "" {
break break
} }
} }
if expired { if good == 0 {
return pool, ErrExpired return nil, warnings, errors.New("no valid CA certificates present")
} }
return pool, nil return pool, warnings, nil
} }
// AddCACertificate verifies a Nebula CA certificate and adds it to the pool // AddCACertificate verifies a Nebula CA certificate and adds it to the pool
@@ -91,9 +99,15 @@ func (ncp *NebulaCAPool) ResetCertBlocklist() {
ncp.certBlocklist = make(map[string]struct{}) ncp.certBlocklist = make(map[string]struct{})
} }
// IsBlocklisted returns true if the fingerprint fails to generate or has been explicitly blocklisted // NOTE: This uses an internal cache for Sha256Sum() that will not be invalidated
// automatically if you manually change any fields in the NebulaCertificate.
func (ncp *NebulaCAPool) IsBlocklisted(c *NebulaCertificate) bool { func (ncp *NebulaCAPool) IsBlocklisted(c *NebulaCertificate) bool {
h, err := c.Sha256Sum() return ncp.isBlocklistedWithCache(c, false)
}
// IsBlocklisted returns true if the fingerprint fails to generate or has been explicitly blocklisted
func (ncp *NebulaCAPool) isBlocklistedWithCache(c *NebulaCertificate, useCache bool) bool {
h, err := c.sha256SumWithCache(useCache)
if err != nil { if err != nil {
return true return true
} }


@@ -2,35 +2,56 @@ package cert
import ( import (
"bytes" "bytes"
"crypto" "crypto/ecdh"
"crypto/ecdsa"
"crypto/ed25519"
"crypto/elliptic"
"crypto/rand" "crypto/rand"
"crypto/sha256" "crypto/sha256"
"encoding/binary" "encoding/binary"
"encoding/hex" "encoding/hex"
"encoding/json" "encoding/json"
"encoding/pem" "encoding/pem"
"errors"
"fmt" "fmt"
"math"
"math/big"
"net" "net"
"sync/atomic"
"time" "time"
"github.com/golang/protobuf/proto"
"golang.org/x/crypto/curve25519" "golang.org/x/crypto/curve25519"
"golang.org/x/crypto/ed25519" "google.golang.org/protobuf/proto"
) )
const publicKeyLen = 32 const publicKeyLen = 32
const ( const (
CertBanner = "NEBULA CERTIFICATE" CertBanner = "NEBULA CERTIFICATE"
CertificateV2Banner = "NEBULA CERTIFICATE V2"
X25519PrivateKeyBanner = "NEBULA X25519 PRIVATE KEY" X25519PrivateKeyBanner = "NEBULA X25519 PRIVATE KEY"
X25519PublicKeyBanner = "NEBULA X25519 PUBLIC KEY" X25519PublicKeyBanner = "NEBULA X25519 PUBLIC KEY"
EncryptedEd25519PrivateKeyBanner = "NEBULA ED25519 ENCRYPTED PRIVATE KEY"
Ed25519PrivateKeyBanner = "NEBULA ED25519 PRIVATE KEY" Ed25519PrivateKeyBanner = "NEBULA ED25519 PRIVATE KEY"
Ed25519PublicKeyBanner = "NEBULA ED25519 PUBLIC KEY" Ed25519PublicKeyBanner = "NEBULA ED25519 PUBLIC KEY"
P256PrivateKeyBanner = "NEBULA P256 PRIVATE KEY"
P256PublicKeyBanner = "NEBULA P256 PUBLIC KEY"
EncryptedECDSAP256PrivateKeyBanner = "NEBULA ECDSA P256 ENCRYPTED PRIVATE KEY"
ECDSAP256PrivateKeyBanner = "NEBULA ECDSA P256 PRIVATE KEY"
) )
type NebulaCertificate struct { type NebulaCertificate struct {
Details NebulaCertificateDetails Details NebulaCertificateDetails
Signature []byte Signature []byte
// the cached hex string of the calculated sha256sum
// for VerifyWithCache
sha256sum atomic.Pointer[string]
// the cached public key bytes if they were verified as the signer
// for VerifyWithCache
signatureVerified atomic.Pointer[[]byte]
} }
type NebulaCertificateDetails struct { type NebulaCertificateDetails struct {
@@ -46,10 +67,25 @@ type NebulaCertificateDetails struct {
// Map of groups for faster lookup // Map of groups for faster lookup
InvertedGroups map[string]struct{} InvertedGroups map[string]struct{}
Curve Curve
}
type NebulaEncryptedData struct {
EncryptionMetadata NebulaEncryptionMetadata
Ciphertext []byte
}
type NebulaEncryptionMetadata struct {
EncryptionAlgorithm string
Argon2Parameters Argon2Parameters
} }
type m map[string]interface{} type m map[string]interface{}
// Returned if we try to unmarshal an encrypted private key without a passphrase
var ErrPrivateKeyEncrypted = errors.New("private key must be decrypted")
// UnmarshalNebulaCertificate will unmarshal a protobuf byte representation of a nebula cert // UnmarshalNebulaCertificate will unmarshal a protobuf byte representation of a nebula cert
func UnmarshalNebulaCertificate(b []byte) (*NebulaCertificate, error) { func UnmarshalNebulaCertificate(b []byte) (*NebulaCertificate, error) {
if len(b) == 0 { if len(b) == 0 {
@@ -84,6 +120,7 @@ func UnmarshalNebulaCertificate(b []byte) (*NebulaCertificate, error) {
PublicKey: make([]byte, len(rc.Details.PublicKey)), PublicKey: make([]byte, len(rc.Details.PublicKey)),
IsCA: rc.Details.IsCA, IsCA: rc.Details.IsCA,
InvertedGroups: make(map[string]struct{}), InvertedGroups: make(map[string]struct{}),
Curve: rc.Details.Curve,
}, },
Signature: make([]byte, len(rc.Signature)), Signature: make([]byte, len(rc.Signature)),
} }
@@ -127,6 +164,9 @@ func UnmarshalNebulaCertificateFromPEM(b []byte) (*NebulaCertificate, []byte, er
if p == nil { if p == nil {
return nil, r, fmt.Errorf("input did not contain a valid PEM encoded block") return nil, r, fmt.Errorf("input did not contain a valid PEM encoded block")
} }
if p.Type == CertificateV2Banner {
return nil, r, fmt.Errorf("%w: %s", ErrInvalidPEMCertificateUnsupported, p.Type)
}
if p.Type != CertBanner { if p.Type != CertBanner {
return nil, r, fmt.Errorf("bytes did not contain a proper nebula certificate banner") return nil, r, fmt.Errorf("bytes did not contain a proper nebula certificate banner")
} }
@@ -134,6 +174,28 @@ func UnmarshalNebulaCertificateFromPEM(b []byte) (*NebulaCertificate, []byte, er
return nc, r, err return nc, r, err
} }
func MarshalPrivateKey(curve Curve, b []byte) []byte {
switch curve {
case Curve_CURVE25519:
return pem.EncodeToMemory(&pem.Block{Type: X25519PrivateKeyBanner, Bytes: b})
case Curve_P256:
return pem.EncodeToMemory(&pem.Block{Type: P256PrivateKeyBanner, Bytes: b})
default:
return nil
}
}
func MarshalSigningPrivateKey(curve Curve, b []byte) []byte {
switch curve {
case Curve_CURVE25519:
return pem.EncodeToMemory(&pem.Block{Type: Ed25519PrivateKeyBanner, Bytes: b})
case Curve_P256:
return pem.EncodeToMemory(&pem.Block{Type: ECDSAP256PrivateKeyBanner, Bytes: b})
default:
return nil
}
}
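A short sketch of how the two new helpers differ (nodeKey and caKey are placeholder byte slices): MarshalPrivateKey is for the node's tunnel (Diffie-Hellman) key, MarshalSigningPrivateKey for a CA signing key, and each selects the PEM banner from the curve.

nodePEM := cert.MarshalPrivateKey(cert.Curve_P256, nodeKey)        // "NEBULA P256 PRIVATE KEY"
caPEM := cert.MarshalSigningPrivateKey(cert.Curve_P256, caKey)     // "NEBULA ECDSA P256 PRIVATE KEY"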
// MarshalX25519PrivateKey is a simple helper to PEM encode an X25519 private key // MarshalX25519PrivateKey is a simple helper to PEM encode an X25519 private key
func MarshalX25519PrivateKey(b []byte) []byte { func MarshalX25519PrivateKey(b []byte) []byte {
return pem.EncodeToMemory(&pem.Block{Type: X25519PrivateKeyBanner, Bytes: b}) return pem.EncodeToMemory(&pem.Block{Type: X25519PrivateKeyBanner, Bytes: b})
@@ -144,6 +206,90 @@ func MarshalEd25519PrivateKey(key ed25519.PrivateKey) []byte {
return pem.EncodeToMemory(&pem.Block{Type: Ed25519PrivateKeyBanner, Bytes: key}) return pem.EncodeToMemory(&pem.Block{Type: Ed25519PrivateKeyBanner, Bytes: key})
} }
func UnmarshalPrivateKey(b []byte) ([]byte, []byte, Curve, error) {
k, r := pem.Decode(b)
if k == nil {
return nil, r, 0, fmt.Errorf("input did not contain a valid PEM encoded block")
}
var expectedLen int
var curve Curve
switch k.Type {
case X25519PrivateKeyBanner:
expectedLen = 32
curve = Curve_CURVE25519
case P256PrivateKeyBanner:
expectedLen = 32
curve = Curve_P256
default:
return nil, r, 0, fmt.Errorf("bytes did not contain a proper nebula private key banner")
}
if len(k.Bytes) != expectedLen {
return nil, r, 0, fmt.Errorf("key was not %d bytes, is invalid %s private key", expectedLen, curve)
}
return k.Bytes, r, curve, nil
}
func UnmarshalSigningPrivateKey(b []byte) ([]byte, []byte, Curve, error) {
k, r := pem.Decode(b)
if k == nil {
return nil, r, 0, fmt.Errorf("input did not contain a valid PEM encoded block")
}
var curve Curve
switch k.Type {
case EncryptedEd25519PrivateKeyBanner:
return nil, nil, Curve_CURVE25519, ErrPrivateKeyEncrypted
case EncryptedECDSAP256PrivateKeyBanner:
return nil, nil, Curve_P256, ErrPrivateKeyEncrypted
case Ed25519PrivateKeyBanner:
curve = Curve_CURVE25519
if len(k.Bytes) != ed25519.PrivateKeySize {
return nil, r, 0, fmt.Errorf("key was not %d bytes, is invalid Ed25519 private key", ed25519.PrivateKeySize)
}
case ECDSAP256PrivateKeyBanner:
curve = Curve_P256
if len(k.Bytes) != 32 {
return nil, r, 0, fmt.Errorf("key was not 32 bytes, is invalid ECDSA P256 private key")
}
default:
return nil, r, 0, fmt.Errorf("bytes did not contain a proper nebula Ed25519/ECDSA private key banner")
}
return k.Bytes, r, curve, nil
}
// EncryptAndMarshalSigningPrivateKey is a simple helper to encrypt and PEM encode a private key
func EncryptAndMarshalSigningPrivateKey(curve Curve, b []byte, passphrase []byte, kdfParams *Argon2Parameters) ([]byte, error) {
ciphertext, err := aes256Encrypt(passphrase, kdfParams, b)
if err != nil {
return nil, err
}
b, err = proto.Marshal(&RawNebulaEncryptedData{
EncryptionMetadata: &RawNebulaEncryptionMetadata{
EncryptionAlgorithm: "AES-256-GCM",
Argon2Parameters: &RawNebulaArgon2Parameters{
Version: kdfParams.version,
Memory: kdfParams.Memory,
Parallelism: uint32(kdfParams.Parallelism),
Iterations: kdfParams.Iterations,
Salt: kdfParams.salt,
},
},
Ciphertext: ciphertext,
})
if err != nil {
return nil, err
}
switch curve {
case Curve_CURVE25519:
return pem.EncodeToMemory(&pem.Block{Type: EncryptedEd25519PrivateKeyBanner, Bytes: b}), nil
case Curve_P256:
return pem.EncodeToMemory(&pem.Block{Type: EncryptedECDSAP256PrivateKeyBanner, Bytes: b}), nil
default:
return nil, fmt.Errorf("invalid curve: %v", curve)
}
}
// UnmarshalX25519PrivateKey will try to pem decode an X25519 private key, returning any other bytes b // UnmarshalX25519PrivateKey will try to pem decode an X25519 private key, returning any other bytes b
// or an error on failure // or an error on failure
func UnmarshalX25519PrivateKey(b []byte) ([]byte, []byte, error) { func UnmarshalX25519PrivateKey(b []byte) ([]byte, []byte, error) {
@@ -168,9 +314,13 @@ func UnmarshalEd25519PrivateKey(b []byte) (ed25519.PrivateKey, []byte, error) {
if k == nil { if k == nil {
return nil, r, fmt.Errorf("input did not contain a valid PEM encoded block") return nil, r, fmt.Errorf("input did not contain a valid PEM encoded block")
} }
if k.Type != Ed25519PrivateKeyBanner {
if k.Type == EncryptedEd25519PrivateKeyBanner {
return nil, r, ErrPrivateKeyEncrypted
} else if k.Type != Ed25519PrivateKeyBanner {
return nil, r, fmt.Errorf("bytes did not contain a proper nebula Ed25519 private key banner") return nil, r, fmt.Errorf("bytes did not contain a proper nebula Ed25519 private key banner")
} }
if len(k.Bytes) != ed25519.PrivateKeySize { if len(k.Bytes) != ed25519.PrivateKeySize {
return nil, r, fmt.Errorf("key was not 64 bytes, is invalid ed25519 private key") return nil, r, fmt.Errorf("key was not 64 bytes, is invalid ed25519 private key")
} }
@@ -178,6 +328,126 @@ func UnmarshalEd25519PrivateKey(b []byte) (ed25519.PrivateKey, []byte, error) {
return k.Bytes, r, nil return k.Bytes, r, nil
} }
// UnmarshalNebulaEncryptedData will unmarshal a protobuf byte representation of a nebula cert into its
// protobuf-generated struct.
func UnmarshalNebulaEncryptedData(b []byte) (*NebulaEncryptedData, error) {
if len(b) == 0 {
return nil, fmt.Errorf("nil byte array")
}
var rned RawNebulaEncryptedData
err := proto.Unmarshal(b, &rned)
if err != nil {
return nil, err
}
if rned.EncryptionMetadata == nil {
return nil, fmt.Errorf("encoded EncryptionMetadata was nil")
}
if rned.EncryptionMetadata.Argon2Parameters == nil {
return nil, fmt.Errorf("encoded Argon2Parameters was nil")
}
params, err := unmarshalArgon2Parameters(rned.EncryptionMetadata.Argon2Parameters)
if err != nil {
return nil, err
}
ned := NebulaEncryptedData{
EncryptionMetadata: NebulaEncryptionMetadata{
EncryptionAlgorithm: rned.EncryptionMetadata.EncryptionAlgorithm,
Argon2Parameters: *params,
},
Ciphertext: rned.Ciphertext,
}
return &ned, nil
}
func unmarshalArgon2Parameters(params *RawNebulaArgon2Parameters) (*Argon2Parameters, error) {
if params.Version < math.MinInt32 || params.Version > math.MaxInt32 {
return nil, fmt.Errorf("Argon2Parameters Version must be at least %d and no more than %d", math.MinInt32, math.MaxInt32)
}
if params.Memory <= 0 || params.Memory > math.MaxUint32 {
return nil, fmt.Errorf("Argon2Parameters Memory must be be greater than 0 and no more than %d KiB", uint32(math.MaxUint32))
}
if params.Parallelism <= 0 || params.Parallelism > math.MaxUint8 {
return nil, fmt.Errorf("Argon2Parameters Parallelism must be be greater than 0 and no more than %d", math.MaxUint8)
}
if params.Iterations <= 0 || params.Iterations > math.MaxUint32 {
return nil, fmt.Errorf("-argon-iterations must be be greater than 0 and no more than %d", uint32(math.MaxUint32))
}
return &Argon2Parameters{
version: rune(params.Version),
Memory: uint32(params.Memory),
Parallelism: uint8(params.Parallelism),
Iterations: uint32(params.Iterations),
salt: params.Salt,
}, nil
}
// DecryptAndUnmarshalSigningPrivateKey will try to pem decode and decrypt an Ed25519/ECDSA private key with
// the given passphrase, returning any other bytes b or an error on failure
func DecryptAndUnmarshalSigningPrivateKey(passphrase, b []byte) (Curve, []byte, []byte, error) {
var curve Curve
k, r := pem.Decode(b)
if k == nil {
return curve, nil, r, fmt.Errorf("input did not contain a valid PEM encoded block")
}
switch k.Type {
case EncryptedEd25519PrivateKeyBanner:
curve = Curve_CURVE25519
case EncryptedECDSAP256PrivateKeyBanner:
curve = Curve_P256
default:
return curve, nil, r, fmt.Errorf("bytes did not contain a proper nebula encrypted Ed25519/ECDSA private key banner")
}
ned, err := UnmarshalNebulaEncryptedData(k.Bytes)
if err != nil {
return curve, nil, r, err
}
var bytes []byte
switch ned.EncryptionMetadata.EncryptionAlgorithm {
case "AES-256-GCM":
bytes, err = aes256Decrypt(passphrase, &ned.EncryptionMetadata.Argon2Parameters, ned.Ciphertext)
if err != nil {
return curve, nil, r, err
}
default:
return curve, nil, r, fmt.Errorf("unsupported encryption algorithm: %s", ned.EncryptionMetadata.EncryptionAlgorithm)
}
switch curve {
case Curve_CURVE25519:
if len(bytes) != ed25519.PrivateKeySize {
return curve, nil, r, fmt.Errorf("key was not %d bytes, is invalid ed25519 private key", ed25519.PrivateKeySize)
}
case Curve_P256:
if len(bytes) != 32 {
return curve, nil, r, fmt.Errorf("key was not 32 bytes, is invalid ECDSA P256 private key")
}
}
return curve, bytes, r, nil
}
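A hedged sketch of the intended call pattern (pemBytes and passphrase are placeholders): try the plain path first, then fall back to the passphrase-protected path when ErrPrivateKeyEncrypted is returned. Note the two functions order their return values differently.

func loadSigningKey(pemBytes, passphrase []byte) (cert.Curve, []byte, error) {
	rawKey, _, curve, err := cert.UnmarshalSigningPrivateKey(pemBytes)
	if errors.Is(err, cert.ErrPrivateKeyEncrypted) {
		// the PEM block wraps RawNebulaEncryptedData; decrypt it with the passphrase
		curve, rawKey, _, err = cert.DecryptAndUnmarshalSigningPrivateKey(passphrase, pemBytes)
	}
	if err != nil {
		return curve, nil, err
	}
	// curve tells the caller whether to sign with Ed25519 (CURVE25519) or ECDSA (P256)
	return curve, rawKey, nil
}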
func MarshalPublicKey(curve Curve, b []byte) []byte {
switch curve {
case Curve_CURVE25519:
return pem.EncodeToMemory(&pem.Block{Type: X25519PublicKeyBanner, Bytes: b})
case Curve_P256:
return pem.EncodeToMemory(&pem.Block{Type: P256PublicKeyBanner, Bytes: b})
default:
return nil
}
}
// MarshalX25519PublicKey is a simple helper to PEM encode an X25519 public key // MarshalX25519PublicKey is a simple helper to PEM encode an X25519 public key
func MarshalX25519PublicKey(b []byte) []byte { func MarshalX25519PublicKey(b []byte) []byte {
return pem.EncodeToMemory(&pem.Block{Type: X25519PublicKeyBanner, Bytes: b}) return pem.EncodeToMemory(&pem.Block{Type: X25519PublicKeyBanner, Bytes: b})
@@ -188,6 +458,30 @@ func MarshalEd25519PublicKey(key ed25519.PublicKey) []byte {
return pem.EncodeToMemory(&pem.Block{Type: Ed25519PublicKeyBanner, Bytes: key}) return pem.EncodeToMemory(&pem.Block{Type: Ed25519PublicKeyBanner, Bytes: key})
} }
func UnmarshalPublicKey(b []byte) ([]byte, []byte, Curve, error) {
k, r := pem.Decode(b)
if k == nil {
return nil, r, 0, fmt.Errorf("input did not contain a valid PEM encoded block")
}
var expectedLen int
var curve Curve
switch k.Type {
case X25519PublicKeyBanner:
expectedLen = 32
curve = Curve_CURVE25519
case P256PublicKeyBanner:
// Uncompressed
expectedLen = 65
curve = Curve_P256
default:
return nil, r, 0, fmt.Errorf("bytes did not contain a proper nebula public key banner")
}
if len(k.Bytes) != expectedLen {
return nil, r, 0, fmt.Errorf("key was not %d bytes, is invalid %s public key", expectedLen, curve)
}
return k.Bytes, r, curve, nil
}
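Round-tripping through the curve-tagged public key helpers, as a sketch (pub is assumed to be a 32-byte X25519 key or a 65-byte uncompressed P-256 point):

pemPub := cert.MarshalPublicKey(cert.Curve_P256, pub)
got, _, curve, err := cert.UnmarshalPublicKey(pemPub)
// when err == nil, got equals pub and curve == cert.Curve_P256;
// err is non-nil if the banner is unknown or the length does not match the curve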
// UnmarshalX25519PublicKey will try to pem decode an X25519 public key, returning any other bytes b // UnmarshalX25519PublicKey will try to pem decode an X25519 public key, returning any other bytes b
// or an error on failure // or an error on failure
func UnmarshalX25519PublicKey(b []byte) ([]byte, []byte, error) { func UnmarshalX25519PublicKey(b []byte) ([]byte, []byte, error) {
@@ -223,27 +517,86 @@ func UnmarshalEd25519PublicKey(b []byte) (ed25519.PublicKey, []byte, error) {
} }
// Sign signs a nebula cert with the provided private key // Sign signs a nebula cert with the provided private key
func (nc *NebulaCertificate) Sign(key ed25519.PrivateKey) error { func (nc *NebulaCertificate) Sign(curve Curve, key []byte) error {
if curve != nc.Details.Curve {
return fmt.Errorf("curve in cert and private key supplied don't match")
}
b, err := proto.Marshal(nc.getRawDetails()) b, err := proto.Marshal(nc.getRawDetails())
if err != nil { if err != nil {
return err return err
} }
sig, err := key.Sign(rand.Reader, b, crypto.Hash(0)) var sig []byte
switch curve {
case Curve_CURVE25519:
signer := ed25519.PrivateKey(key)
sig = ed25519.Sign(signer, b)
case Curve_P256:
signer := &ecdsa.PrivateKey{
PublicKey: ecdsa.PublicKey{
Curve: elliptic.P256(),
},
// ref: https://github.com/golang/go/blob/go1.19/src/crypto/x509/sec1.go#L95
D: new(big.Int).SetBytes(key),
}
// ref: https://github.com/golang/go/blob/go1.19/src/crypto/x509/sec1.go#L119
signer.X, signer.Y = signer.Curve.ScalarBaseMult(key)
// We need to hash first for ECDSA
// - https://pkg.go.dev/crypto/ecdsa#SignASN1
hashed := sha256.Sum256(b)
sig, err = ecdsa.SignASN1(rand.Reader, signer, hashed[:])
if err != nil { if err != nil {
return err return err
} }
default:
return fmt.Errorf("invalid curve: %s", nc.Details.Curve)
}
nc.Signature = sig nc.Signature = sig
return nil return nil
} }
// CheckSignature verifies the signature against the provided public key // CheckSignature verifies the signature against the provided public key
func (nc *NebulaCertificate) CheckSignature(key ed25519.PublicKey) bool { func (nc *NebulaCertificate) CheckSignature(key []byte) bool {
b, err := proto.Marshal(nc.getRawDetails()) b, err := proto.Marshal(nc.getRawDetails())
if err != nil { if err != nil {
return false return false
} }
return ed25519.Verify(key, b, nc.Signature) switch nc.Details.Curve {
case Curve_CURVE25519:
return ed25519.Verify(ed25519.PublicKey(key), b, nc.Signature)
case Curve_P256:
x, y := elliptic.Unmarshal(elliptic.P256(), key)
pubKey := &ecdsa.PublicKey{Curve: elliptic.P256(), X: x, Y: y}
hashed := sha256.Sum256(b)
return ecdsa.VerifyASN1(pubKey, hashed[:], nc.Signature)
default:
return false
}
}
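Sign now takes the curve explicitly and refuses to sign when it disagrees with Details.Curve. A brief sketch, with p256Scalar standing in for the 32-byte big-endian private scalar and p256Pub for the 65-byte uncompressed public point (TestNebulaCertificate_SignP256 further down exercises the same path):

nc.Details.Curve = cert.Curve_P256
// wrong curve fails fast: "curve in cert and private key supplied don't match"
err := nc.Sign(cert.Curve_CURVE25519, ed25519Key)
// matching curve: the details are SHA-256 hashed and signed with ECDSA (ASN.1 signature)
err = nc.Sign(cert.Curve_P256, p256Scalar)
ok := nc.CheckSignature(p256Pub) // verifies against the uncompressed P-256 public key
_, _ = ok, err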
// NOTE: This uses an internal cache that will not be invalidated automatically
// if you manually change any fields in the NebulaCertificate.
func (nc *NebulaCertificate) checkSignatureWithCache(key []byte, useCache bool) bool {
if !useCache {
return nc.CheckSignature(key)
}
if v := nc.signatureVerified.Load(); v != nil {
return bytes.Equal(*v, key)
}
verified := nc.CheckSignature(key)
if verified {
keyCopy := make([]byte, len(key))
copy(keyCopy, key)
nc.signatureVerified.Store(&keyCopy)
}
return verified
} }
// Expired will return true if the nebula cert is too young or too old compared to the provided time, otherwise false // Expired will return true if the nebula cert is too young or too old compared to the provided time, otherwise false
@@ -253,8 +606,27 @@ func (nc *NebulaCertificate) Expired(t time.Time) bool {
// Verify will ensure a certificate is good in all respects (expiry, group membership, signature, cert blocklist, etc) // Verify will ensure a certificate is good in all respects (expiry, group membership, signature, cert blocklist, etc)
func (nc *NebulaCertificate) Verify(t time.Time, ncp *NebulaCAPool) (bool, error) { func (nc *NebulaCertificate) Verify(t time.Time, ncp *NebulaCAPool) (bool, error) {
if ncp.IsBlocklisted(nc) { return nc.verify(t, ncp, false)
return false, fmt.Errorf("certificate has been blocked") }
// VerifyWithCache will ensure a certificate is good in all respects (expiry, group membership, signature, cert blocklist, etc)
//
// NOTE: This uses an internal cache that will not be invalidated automatically
// if you manually change any fields in the NebulaCertificate.
func (nc *NebulaCertificate) VerifyWithCache(t time.Time, ncp *NebulaCAPool) (bool, error) {
return nc.verify(t, ncp, true)
}
// ResetCache resets the cache used by VerifyWithCache.
func (nc *NebulaCertificate) ResetCache() {
nc.sha256sum.Store(nil)
nc.signatureVerified.Store(nil)
}
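The cached verification path is meant for hot paths that re-verify the same certificate repeatedly; a hedged sketch of the contract, with hostCert and caPool as placeholders:

ok, err := hostCert.VerifyWithCache(time.Now(), caPool) // first call computes and caches the fingerprint and signature check
ok, err = hostCert.VerifyWithCache(time.Now(), caPool)  // later calls reuse the cached results

// after mutating fields in place, reset the cache by hand or the stale result is reused
hostCert.Details.Groups = append(hostCert.Details.Groups, "extra")
hostCert.ResetCache()
ok, err = hostCert.VerifyWithCache(time.Now(), caPool)
_, _ = ok, err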
// Verify will ensure a certificate is good in all respects (expiry, group membership, signature, cert blocklist, etc)
func (nc *NebulaCertificate) verify(t time.Time, ncp *NebulaCAPool, useCache bool) (bool, error) {
if ncp.isBlocklistedWithCache(nc, useCache) {
return false, ErrBlockListed
} }
signer, err := ncp.GetCAForCert(nc) signer, err := ncp.GetCAForCert(nc)
@@ -263,15 +635,15 @@ func (nc *NebulaCertificate) Verify(t time.Time, ncp *NebulaCAPool) (bool, error
} }
if signer.Expired(t) { if signer.Expired(t) {
return false, fmt.Errorf("root certificate is expired") return false, ErrRootExpired
} }
if nc.Expired(t) { if nc.Expired(t) {
return false, fmt.Errorf("certificate is expired") return false, ErrExpired
} }
if !nc.CheckSignature(signer.Details.PublicKey) { if !nc.checkSignatureWithCache(signer.Details.PublicKey, useCache) {
return false, fmt.Errorf("certificate signature did not match") return false, ErrSignatureMismatch
} }
if err := nc.CheckRootConstrains(signer); err != nil { if err := nc.CheckRootConstrains(signer); err != nil {
@@ -324,8 +696,13 @@ func (nc *NebulaCertificate) CheckRootConstrains(signer *NebulaCertificate) erro
} }
// VerifyPrivateKey checks that the public key in the Nebula certificate and a supplied private key match // VerifyPrivateKey checks that the public key in the Nebula certificate and a supplied private key match
func (nc *NebulaCertificate) VerifyPrivateKey(key []byte) error { func (nc *NebulaCertificate) VerifyPrivateKey(curve Curve, key []byte) error {
if curve != nc.Details.Curve {
return fmt.Errorf("curve in cert and private key supplied don't match")
}
if nc.Details.IsCA { if nc.Details.IsCA {
switch curve {
case Curve_CURVE25519:
// the call to PublicKey below will panic slice bounds out of range otherwise // the call to PublicKey below will panic slice bounds out of range otherwise
if len(key) != ed25519.PrivateKeySize { if len(key) != ed25519.PrivateKeySize {
return fmt.Errorf("key was not 64 bytes, is invalid ed25519 private key") return fmt.Errorf("key was not 64 bytes, is invalid ed25519 private key")
@@ -334,13 +711,38 @@ func (nc *NebulaCertificate) VerifyPrivateKey(key []byte) error {
if !ed25519.PublicKey(nc.Details.PublicKey).Equal(ed25519.PrivateKey(key).Public()) { if !ed25519.PublicKey(nc.Details.PublicKey).Equal(ed25519.PrivateKey(key).Public()) {
return fmt.Errorf("public key in cert and private key supplied don't match") return fmt.Errorf("public key in cert and private key supplied don't match")
} }
case Curve_P256:
privkey, err := ecdh.P256().NewPrivateKey(key)
if err != nil {
return fmt.Errorf("cannot parse private key as P256")
}
pub := privkey.PublicKey().Bytes()
if !bytes.Equal(pub, nc.Details.PublicKey) {
return fmt.Errorf("public key in cert and private key supplied don't match")
}
default:
return fmt.Errorf("invalid curve: %s", curve)
}
return nil return nil
} }
pub, err := curve25519.X25519(key, curve25519.Basepoint) var pub []byte
switch curve {
case Curve_CURVE25519:
var err error
pub, err = curve25519.X25519(key, curve25519.Basepoint)
if err != nil { if err != nil {
return err return err
} }
case Curve_P256:
privkey, err := ecdh.P256().NewPrivateKey(key)
if err != nil {
return err
}
pub = privkey.PublicKey().Bytes()
default:
return fmt.Errorf("invalid curve: %s", curve)
}
if !bytes.Equal(pub, nc.Details.PublicKey) { if !bytes.Equal(pub, nc.Details.PublicKey) {
return fmt.Errorf("public key in cert and private key supplied don't match") return fmt.Errorf("public key in cert and private key supplied don't match")
} }
@@ -393,6 +795,7 @@ func (nc *NebulaCertificate) String() string {
s += fmt.Sprintf("\t\tIs CA: %v\n", nc.Details.IsCA) s += fmt.Sprintf("\t\tIs CA: %v\n", nc.Details.IsCA)
s += fmt.Sprintf("\t\tIssuer: %s\n", nc.Details.Issuer) s += fmt.Sprintf("\t\tIssuer: %s\n", nc.Details.Issuer)
s += fmt.Sprintf("\t\tPublic key: %x\n", nc.Details.PublicKey) s += fmt.Sprintf("\t\tPublic key: %x\n", nc.Details.PublicKey)
s += fmt.Sprintf("\t\tCurve: %s\n", nc.Details.Curve)
s += "\t}\n" s += "\t}\n"
fp, err := nc.Sha256Sum() fp, err := nc.Sha256Sum()
if err == nil { if err == nil {
@@ -413,6 +816,7 @@ func (nc *NebulaCertificate) getRawDetails() *RawNebulaCertificateDetails {
NotAfter: nc.Details.NotAfter.Unix(), NotAfter: nc.Details.NotAfter.Unix(),
PublicKey: make([]byte, len(nc.Details.PublicKey)), PublicKey: make([]byte, len(nc.Details.PublicKey)),
IsCA: nc.Details.IsCA, IsCA: nc.Details.IsCA,
Curve: nc.Details.Curve,
} }
for _, ipNet := range nc.Details.Ips { for _, ipNet := range nc.Details.Ips {
@@ -461,6 +865,25 @@ func (nc *NebulaCertificate) Sha256Sum() (string, error) {
return hex.EncodeToString(sum[:]), nil return hex.EncodeToString(sum[:]), nil
} }
// NOTE: This uses an internal cache that will not be invalidated automatically
// if you manually change any fields in the NebulaCertificate.
func (nc *NebulaCertificate) sha256SumWithCache(useCache bool) (string, error) {
if !useCache {
return nc.Sha256Sum()
}
if s := nc.sha256sum.Load(); s != nil {
return *s, nil
}
s, err := nc.Sha256Sum()
if err != nil {
return s, err
}
nc.sha256sum.Store(&s)
return s, nil
}
func (nc *NebulaCertificate) MarshalJSON() ([]byte, error) { func (nc *NebulaCertificate) MarshalJSON() ([]byte, error) {
toString := func(ips []*net.IPNet) []string { toString := func(ips []*net.IPNet) []string {
s := []string{} s := []string{}
@@ -482,6 +905,7 @@ func (nc *NebulaCertificate) MarshalJSON() ([]byte, error) {
"publicKey": fmt.Sprintf("%x", nc.Details.PublicKey), "publicKey": fmt.Sprintf("%x", nc.Details.PublicKey),
"isCa": nc.Details.IsCA, "isCa": nc.Details.IsCA,
"issuer": nc.Details.Issuer, "issuer": nc.Details.Issuer,
"curve": nc.Details.Curve.String(),
}, },
"fingerprint": fp, "fingerprint": fp,
"signature": fmt.Sprintf("%x", nc.Signature), "signature": fmt.Sprintf("%x", nc.Signature),


@@ -1,7 +1,7 @@
// Code generated by protoc-gen-go. DO NOT EDIT. // Code generated by protoc-gen-go. DO NOT EDIT.
// versions: // versions:
// protoc-gen-go v1.26.0 // protoc-gen-go v1.30.0
// protoc v3.14.0 // protoc v3.21.5
// source: cert.proto // source: cert.proto
package cert package cert
@@ -20,6 +20,52 @@ const (
_ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20) _ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
) )
type Curve int32
const (
Curve_CURVE25519 Curve = 0
Curve_P256 Curve = 1
)
// Enum value maps for Curve.
var (
Curve_name = map[int32]string{
0: "CURVE25519",
1: "P256",
}
Curve_value = map[string]int32{
"CURVE25519": 0,
"P256": 1,
}
)
func (x Curve) Enum() *Curve {
p := new(Curve)
*p = x
return p
}
func (x Curve) String() string {
return protoimpl.X.EnumStringOf(x.Descriptor(), protoreflect.EnumNumber(x))
}
func (Curve) Descriptor() protoreflect.EnumDescriptor {
return file_cert_proto_enumTypes[0].Descriptor()
}
func (Curve) Type() protoreflect.EnumType {
return &file_cert_proto_enumTypes[0]
}
func (x Curve) Number() protoreflect.EnumNumber {
return protoreflect.EnumNumber(x)
}
// Deprecated: Use Curve.Descriptor instead.
func (Curve) EnumDescriptor() ([]byte, []int) {
return file_cert_proto_rawDescGZIP(), []int{0}
}
type RawNebulaCertificate struct { type RawNebulaCertificate struct {
state protoimpl.MessageState state protoimpl.MessageState
sizeCache protoimpl.SizeCache sizeCache protoimpl.SizeCache
@@ -91,6 +137,7 @@ type RawNebulaCertificateDetails struct {
IsCA bool `protobuf:"varint,8,opt,name=IsCA,proto3" json:"IsCA,omitempty"` IsCA bool `protobuf:"varint,8,opt,name=IsCA,proto3" json:"IsCA,omitempty"`
// sha-256 of the issuer certificate, if this field is blank the cert is self-signed // sha-256 of the issuer certificate, if this field is blank the cert is self-signed
Issuer []byte `protobuf:"bytes,9,opt,name=Issuer,proto3" json:"Issuer,omitempty"` Issuer []byte `protobuf:"bytes,9,opt,name=Issuer,proto3" json:"Issuer,omitempty"`
Curve Curve `protobuf:"varint,100,opt,name=curve,proto3,enum=cert.Curve" json:"curve,omitempty"`
} }
func (x *RawNebulaCertificateDetails) Reset() { func (x *RawNebulaCertificateDetails) Reset() {
@@ -188,6 +235,202 @@ func (x *RawNebulaCertificateDetails) GetIssuer() []byte {
return nil return nil
} }
func (x *RawNebulaCertificateDetails) GetCurve() Curve {
if x != nil {
return x.Curve
}
return Curve_CURVE25519
}
type RawNebulaEncryptedData struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
EncryptionMetadata *RawNebulaEncryptionMetadata `protobuf:"bytes,1,opt,name=EncryptionMetadata,proto3" json:"EncryptionMetadata,omitempty"`
Ciphertext []byte `protobuf:"bytes,2,opt,name=Ciphertext,proto3" json:"Ciphertext,omitempty"`
}
func (x *RawNebulaEncryptedData) Reset() {
*x = RawNebulaEncryptedData{}
if protoimpl.UnsafeEnabled {
mi := &file_cert_proto_msgTypes[2]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *RawNebulaEncryptedData) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*RawNebulaEncryptedData) ProtoMessage() {}
func (x *RawNebulaEncryptedData) ProtoReflect() protoreflect.Message {
mi := &file_cert_proto_msgTypes[2]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use RawNebulaEncryptedData.ProtoReflect.Descriptor instead.
func (*RawNebulaEncryptedData) Descriptor() ([]byte, []int) {
return file_cert_proto_rawDescGZIP(), []int{2}
}
func (x *RawNebulaEncryptedData) GetEncryptionMetadata() *RawNebulaEncryptionMetadata {
if x != nil {
return x.EncryptionMetadata
}
return nil
}
func (x *RawNebulaEncryptedData) GetCiphertext() []byte {
if x != nil {
return x.Ciphertext
}
return nil
}
type RawNebulaEncryptionMetadata struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
EncryptionAlgorithm string `protobuf:"bytes,1,opt,name=EncryptionAlgorithm,proto3" json:"EncryptionAlgorithm,omitempty"`
Argon2Parameters *RawNebulaArgon2Parameters `protobuf:"bytes,2,opt,name=Argon2Parameters,proto3" json:"Argon2Parameters,omitempty"`
}
func (x *RawNebulaEncryptionMetadata) Reset() {
*x = RawNebulaEncryptionMetadata{}
if protoimpl.UnsafeEnabled {
mi := &file_cert_proto_msgTypes[3]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *RawNebulaEncryptionMetadata) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*RawNebulaEncryptionMetadata) ProtoMessage() {}
func (x *RawNebulaEncryptionMetadata) ProtoReflect() protoreflect.Message {
mi := &file_cert_proto_msgTypes[3]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use RawNebulaEncryptionMetadata.ProtoReflect.Descriptor instead.
func (*RawNebulaEncryptionMetadata) Descriptor() ([]byte, []int) {
return file_cert_proto_rawDescGZIP(), []int{3}
}
func (x *RawNebulaEncryptionMetadata) GetEncryptionAlgorithm() string {
if x != nil {
return x.EncryptionAlgorithm
}
return ""
}
func (x *RawNebulaEncryptionMetadata) GetArgon2Parameters() *RawNebulaArgon2Parameters {
if x != nil {
return x.Argon2Parameters
}
return nil
}
type RawNebulaArgon2Parameters struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Version int32 `protobuf:"varint,1,opt,name=version,proto3" json:"version,omitempty"` // rune in Go
Memory uint32 `protobuf:"varint,2,opt,name=memory,proto3" json:"memory,omitempty"`
Parallelism uint32 `protobuf:"varint,4,opt,name=parallelism,proto3" json:"parallelism,omitempty"` // uint8 in Go
Iterations uint32 `protobuf:"varint,3,opt,name=iterations,proto3" json:"iterations,omitempty"`
Salt []byte `protobuf:"bytes,5,opt,name=salt,proto3" json:"salt,omitempty"`
}
func (x *RawNebulaArgon2Parameters) Reset() {
*x = RawNebulaArgon2Parameters{}
if protoimpl.UnsafeEnabled {
mi := &file_cert_proto_msgTypes[4]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *RawNebulaArgon2Parameters) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*RawNebulaArgon2Parameters) ProtoMessage() {}
func (x *RawNebulaArgon2Parameters) ProtoReflect() protoreflect.Message {
mi := &file_cert_proto_msgTypes[4]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use RawNebulaArgon2Parameters.ProtoReflect.Descriptor instead.
func (*RawNebulaArgon2Parameters) Descriptor() ([]byte, []int) {
return file_cert_proto_rawDescGZIP(), []int{4}
}
func (x *RawNebulaArgon2Parameters) GetVersion() int32 {
if x != nil {
return x.Version
}
return 0
}
func (x *RawNebulaArgon2Parameters) GetMemory() uint32 {
if x != nil {
return x.Memory
}
return 0
}
func (x *RawNebulaArgon2Parameters) GetParallelism() uint32 {
if x != nil {
return x.Parallelism
}
return 0
}
func (x *RawNebulaArgon2Parameters) GetIterations() uint32 {
if x != nil {
return x.Iterations
}
return 0
}
func (x *RawNebulaArgon2Parameters) GetSalt() []byte {
if x != nil {
return x.Salt
}
return nil
}
var File_cert_proto protoreflect.FileDescriptor var File_cert_proto protoreflect.FileDescriptor
var file_cert_proto_rawDesc = []byte{ var file_cert_proto_rawDesc = []byte{
@@ -199,7 +442,7 @@ var file_cert_proto_rawDesc = []byte{
0x69, 0x66, 0x69, 0x63, 0x61, 0x74, 0x65, 0x44, 0x65, 0x74, 0x61, 0x69, 0x6c, 0x73, 0x52, 0x07, 0x69, 0x66, 0x69, 0x63, 0x61, 0x74, 0x65, 0x44, 0x65, 0x74, 0x61, 0x69, 0x6c, 0x73, 0x52, 0x07,
0x44, 0x65, 0x74, 0x61, 0x69, 0x6c, 0x73, 0x12, 0x1c, 0x0a, 0x09, 0x53, 0x69, 0x67, 0x6e, 0x61, 0x44, 0x65, 0x74, 0x61, 0x69, 0x6c, 0x73, 0x12, 0x1c, 0x0a, 0x09, 0x53, 0x69, 0x67, 0x6e, 0x61,
0x74, 0x75, 0x72, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x09, 0x53, 0x69, 0x67, 0x6e, 0x74, 0x75, 0x72, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x09, 0x53, 0x69, 0x67, 0x6e,
0x61, 0x74, 0x75, 0x72, 0x65, 0x22, 0xf9, 0x01, 0x0a, 0x1b, 0x52, 0x61, 0x77, 0x4e, 0x65, 0x62, 0x61, 0x74, 0x75, 0x72, 0x65, 0x22, 0x9c, 0x02, 0x0a, 0x1b, 0x52, 0x61, 0x77, 0x4e, 0x65, 0x62,
0x75, 0x6c, 0x61, 0x43, 0x65, 0x72, 0x74, 0x69, 0x66, 0x69, 0x63, 0x61, 0x74, 0x65, 0x44, 0x65, 0x75, 0x6c, 0x61, 0x43, 0x65, 0x72, 0x74, 0x69, 0x66, 0x69, 0x63, 0x61, 0x74, 0x65, 0x44, 0x65,
0x74, 0x61, 0x69, 0x6c, 0x73, 0x12, 0x12, 0x0a, 0x04, 0x4e, 0x61, 0x6d, 0x65, 0x18, 0x01, 0x20, 0x74, 0x61, 0x69, 0x6c, 0x73, 0x12, 0x12, 0x0a, 0x04, 0x4e, 0x61, 0x6d, 0x65, 0x18, 0x01, 0x20,
0x01, 0x28, 0x09, 0x52, 0x04, 0x4e, 0x61, 0x6d, 0x65, 0x12, 0x10, 0x0a, 0x03, 0x49, 0x70, 0x73, 0x01, 0x28, 0x09, 0x52, 0x04, 0x4e, 0x61, 0x6d, 0x65, 0x12, 0x10, 0x0a, 0x03, 0x49, 0x70, 0x73,
@@ -215,9 +458,43 @@ var file_cert_proto_rawDesc = []byte{
0x69, 0x63, 0x4b, 0x65, 0x79, 0x12, 0x12, 0x0a, 0x04, 0x49, 0x73, 0x43, 0x41, 0x18, 0x08, 0x20, 0x69, 0x63, 0x4b, 0x65, 0x79, 0x12, 0x12, 0x0a, 0x04, 0x49, 0x73, 0x43, 0x41, 0x18, 0x08, 0x20,
0x01, 0x28, 0x08, 0x52, 0x04, 0x49, 0x73, 0x43, 0x41, 0x12, 0x16, 0x0a, 0x06, 0x49, 0x73, 0x73, 0x01, 0x28, 0x08, 0x52, 0x04, 0x49, 0x73, 0x43, 0x41, 0x12, 0x16, 0x0a, 0x06, 0x49, 0x73, 0x73,
0x75, 0x65, 0x72, 0x18, 0x09, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x06, 0x49, 0x73, 0x73, 0x75, 0x65, 0x75, 0x65, 0x72, 0x18, 0x09, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x06, 0x49, 0x73, 0x73, 0x75, 0x65,
0x72, 0x42, 0x20, 0x5a, 0x1e, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x72, 0x12, 0x21, 0x0a, 0x05, 0x63, 0x75, 0x72, 0x76, 0x65, 0x18, 0x64, 0x20, 0x01, 0x28, 0x0e,
0x73, 0x6c, 0x61, 0x63, 0x6b, 0x68, 0x71, 0x2f, 0x6e, 0x65, 0x62, 0x75, 0x6c, 0x61, 0x2f, 0x63, 0x32, 0x0b, 0x2e, 0x63, 0x65, 0x72, 0x74, 0x2e, 0x43, 0x75, 0x72, 0x76, 0x65, 0x52, 0x05, 0x63,
0x65, 0x72, 0x74, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33, 0x75, 0x72, 0x76, 0x65, 0x22, 0x8b, 0x01, 0x0a, 0x16, 0x52, 0x61, 0x77, 0x4e, 0x65, 0x62, 0x75,
0x6c, 0x61, 0x45, 0x6e, 0x63, 0x72, 0x79, 0x70, 0x74, 0x65, 0x64, 0x44, 0x61, 0x74, 0x61, 0x12,
0x51, 0x0a, 0x12, 0x45, 0x6e, 0x63, 0x72, 0x79, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x4d, 0x65, 0x74,
0x61, 0x64, 0x61, 0x74, 0x61, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x21, 0x2e, 0x63, 0x65,
0x72, 0x74, 0x2e, 0x52, 0x61, 0x77, 0x4e, 0x65, 0x62, 0x75, 0x6c, 0x61, 0x45, 0x6e, 0x63, 0x72,
0x79, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x4d, 0x65, 0x74, 0x61, 0x64, 0x61, 0x74, 0x61, 0x52, 0x12,
0x45, 0x6e, 0x63, 0x72, 0x79, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x4d, 0x65, 0x74, 0x61, 0x64, 0x61,
0x74, 0x61, 0x12, 0x1e, 0x0a, 0x0a, 0x43, 0x69, 0x70, 0x68, 0x65, 0x72, 0x74, 0x65, 0x78, 0x74,
0x18, 0x02, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x0a, 0x43, 0x69, 0x70, 0x68, 0x65, 0x72, 0x74, 0x65,
0x78, 0x74, 0x22, 0x9c, 0x01, 0x0a, 0x1b, 0x52, 0x61, 0x77, 0x4e, 0x65, 0x62, 0x75, 0x6c, 0x61,
0x45, 0x6e, 0x63, 0x72, 0x79, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x4d, 0x65, 0x74, 0x61, 0x64, 0x61,
0x74, 0x61, 0x12, 0x30, 0x0a, 0x13, 0x45, 0x6e, 0x63, 0x72, 0x79, 0x70, 0x74, 0x69, 0x6f, 0x6e,
0x41, 0x6c, 0x67, 0x6f, 0x72, 0x69, 0x74, 0x68, 0x6d, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52,
0x13, 0x45, 0x6e, 0x63, 0x72, 0x79, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x41, 0x6c, 0x67, 0x6f, 0x72,
0x69, 0x74, 0x68, 0x6d, 0x12, 0x4b, 0x0a, 0x10, 0x41, 0x72, 0x67, 0x6f, 0x6e, 0x32, 0x50, 0x61,
0x72, 0x61, 0x6d, 0x65, 0x74, 0x65, 0x72, 0x73, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1f,
0x2e, 0x63, 0x65, 0x72, 0x74, 0x2e, 0x52, 0x61, 0x77, 0x4e, 0x65, 0x62, 0x75, 0x6c, 0x61, 0x41,
0x72, 0x67, 0x6f, 0x6e, 0x32, 0x50, 0x61, 0x72, 0x61, 0x6d, 0x65, 0x74, 0x65, 0x72, 0x73, 0x52,
0x10, 0x41, 0x72, 0x67, 0x6f, 0x6e, 0x32, 0x50, 0x61, 0x72, 0x61, 0x6d, 0x65, 0x74, 0x65, 0x72,
0x73, 0x22, 0xa3, 0x01, 0x0a, 0x19, 0x52, 0x61, 0x77, 0x4e, 0x65, 0x62, 0x75, 0x6c, 0x61, 0x41,
0x72, 0x67, 0x6f, 0x6e, 0x32, 0x50, 0x61, 0x72, 0x61, 0x6d, 0x65, 0x74, 0x65, 0x72, 0x73, 0x12,
0x18, 0x0a, 0x07, 0x76, 0x65, 0x72, 0x73, 0x69, 0x6f, 0x6e, 0x18, 0x01, 0x20, 0x01, 0x28, 0x05,
0x52, 0x07, 0x76, 0x65, 0x72, 0x73, 0x69, 0x6f, 0x6e, 0x12, 0x16, 0x0a, 0x06, 0x6d, 0x65, 0x6d,
0x6f, 0x72, 0x79, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0d, 0x52, 0x06, 0x6d, 0x65, 0x6d, 0x6f, 0x72,
0x79, 0x12, 0x20, 0x0a, 0x0b, 0x70, 0x61, 0x72, 0x61, 0x6c, 0x6c, 0x65, 0x6c, 0x69, 0x73, 0x6d,
0x18, 0x04, 0x20, 0x01, 0x28, 0x0d, 0x52, 0x0b, 0x70, 0x61, 0x72, 0x61, 0x6c, 0x6c, 0x65, 0x6c,
0x69, 0x73, 0x6d, 0x12, 0x1e, 0x0a, 0x0a, 0x69, 0x74, 0x65, 0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e,
0x73, 0x18, 0x03, 0x20, 0x01, 0x28, 0x0d, 0x52, 0x0a, 0x69, 0x74, 0x65, 0x72, 0x61, 0x74, 0x69,
0x6f, 0x6e, 0x73, 0x12, 0x12, 0x0a, 0x04, 0x73, 0x61, 0x6c, 0x74, 0x18, 0x05, 0x20, 0x01, 0x28,
0x0c, 0x52, 0x04, 0x73, 0x61, 0x6c, 0x74, 0x2a, 0x21, 0x0a, 0x05, 0x43, 0x75, 0x72, 0x76, 0x65,
0x12, 0x0e, 0x0a, 0x0a, 0x43, 0x55, 0x52, 0x56, 0x45, 0x32, 0x35, 0x35, 0x31, 0x39, 0x10, 0x00,
0x12, 0x08, 0x0a, 0x04, 0x50, 0x32, 0x35, 0x36, 0x10, 0x01, 0x42, 0x20, 0x5a, 0x1e, 0x67, 0x69,
0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x73, 0x6c, 0x61, 0x63, 0x6b, 0x68, 0x71,
0x2f, 0x6e, 0x65, 0x62, 0x75, 0x6c, 0x61, 0x2f, 0x63, 0x65, 0x72, 0x74, 0x62, 0x06, 0x70, 0x72,
0x6f, 0x74, 0x6f, 0x33,
} }
var ( var (
@@ -232,18 +509,26 @@ func file_cert_proto_rawDescGZIP() []byte {
return file_cert_proto_rawDescData return file_cert_proto_rawDescData
} }
var file_cert_proto_msgTypes = make([]protoimpl.MessageInfo, 2) var file_cert_proto_enumTypes = make([]protoimpl.EnumInfo, 1)
var file_cert_proto_msgTypes = make([]protoimpl.MessageInfo, 5)
var file_cert_proto_goTypes = []interface{}{ var file_cert_proto_goTypes = []interface{}{
(*RawNebulaCertificate)(nil), // 0: cert.RawNebulaCertificate (Curve)(0), // 0: cert.Curve
(*RawNebulaCertificateDetails)(nil), // 1: cert.RawNebulaCertificateDetails (*RawNebulaCertificate)(nil), // 1: cert.RawNebulaCertificate
(*RawNebulaCertificateDetails)(nil), // 2: cert.RawNebulaCertificateDetails
(*RawNebulaEncryptedData)(nil), // 3: cert.RawNebulaEncryptedData
(*RawNebulaEncryptionMetadata)(nil), // 4: cert.RawNebulaEncryptionMetadata
(*RawNebulaArgon2Parameters)(nil), // 5: cert.RawNebulaArgon2Parameters
} }
var file_cert_proto_depIdxs = []int32{ var file_cert_proto_depIdxs = []int32{
1, // 0: cert.RawNebulaCertificate.Details:type_name -> cert.RawNebulaCertificateDetails 2, // 0: cert.RawNebulaCertificate.Details:type_name -> cert.RawNebulaCertificateDetails
1, // [1:1] is the sub-list for method output_type 0, // 1: cert.RawNebulaCertificateDetails.curve:type_name -> cert.Curve
1, // [1:1] is the sub-list for method input_type 4, // 2: cert.RawNebulaEncryptedData.EncryptionMetadata:type_name -> cert.RawNebulaEncryptionMetadata
1, // [1:1] is the sub-list for extension type_name 5, // 3: cert.RawNebulaEncryptionMetadata.Argon2Parameters:type_name -> cert.RawNebulaArgon2Parameters
1, // [1:1] is the sub-list for extension extendee 4, // [4:4] is the sub-list for method output_type
0, // [0:1] is the sub-list for field type_name 4, // [4:4] is the sub-list for method input_type
4, // [4:4] is the sub-list for extension type_name
4, // [4:4] is the sub-list for extension extendee
0, // [0:4] is the sub-list for field type_name
} }
func init() { file_cert_proto_init() } func init() { file_cert_proto_init() }
@@ -276,19 +561,56 @@ func file_cert_proto_init() {
return nil return nil
} }
} }
file_cert_proto_msgTypes[2].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*RawNebulaEncryptedData); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_cert_proto_msgTypes[3].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*RawNebulaEncryptionMetadata); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_cert_proto_msgTypes[4].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*RawNebulaArgon2Parameters); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
} }
type x struct{} type x struct{}
out := protoimpl.TypeBuilder{ out := protoimpl.TypeBuilder{
File: protoimpl.DescBuilder{ File: protoimpl.DescBuilder{
GoPackagePath: reflect.TypeOf(x{}).PkgPath(), GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: file_cert_proto_rawDesc, RawDescriptor: file_cert_proto_rawDesc,
NumEnums: 0, NumEnums: 1,
NumMessages: 2, NumMessages: 5,
NumExtensions: 0, NumExtensions: 0,
NumServices: 0, NumServices: 0,
}, },
GoTypes: file_cert_proto_goTypes, GoTypes: file_cert_proto_goTypes,
DependencyIndexes: file_cert_proto_depIdxs, DependencyIndexes: file_cert_proto_depIdxs,
EnumInfos: file_cert_proto_enumTypes,
MessageInfos: file_cert_proto_msgTypes, MessageInfos: file_cert_proto_msgTypes,
}.Build() }.Build()
File_cert_proto = out.File File_cert_proto = out.File


@@ -5,6 +5,11 @@ option go_package = "github.com/slackhq/nebula/cert";
//import "google/protobuf/timestamp.proto"; //import "google/protobuf/timestamp.proto";
enum Curve {
CURVE25519 = 0;
P256 = 1;
}
message RawNebulaCertificate { message RawNebulaCertificate {
RawNebulaCertificateDetails Details = 1; RawNebulaCertificateDetails Details = 1;
bytes Signature = 2; bytes Signature = 2;
@@ -26,4 +31,24 @@ message RawNebulaCertificateDetails {
// sha-256 of the issuer certificate, if this field is blank the cert is self-signed // sha-256 of the issuer certificate, if this field is blank the cert is self-signed
bytes Issuer = 9; bytes Issuer = 9;
Curve curve = 100;
}
message RawNebulaEncryptedData {
RawNebulaEncryptionMetadata EncryptionMetadata = 1;
bytes Ciphertext = 2;
}
message RawNebulaEncryptionMetadata {
string EncryptionAlgorithm = 1;
RawNebulaArgon2Parameters Argon2Parameters = 2;
}
message RawNebulaArgon2Parameters {
int32 version = 1; // rune in Go
uint32 memory = 2;
uint32 parallelism = 4; // uint8 in Go
uint32 iterations = 3;
bytes salt = 5;
} }


@@ -1,18 +1,22 @@
package cert package cert
import ( import (
"crypto/ecdh"
"crypto/ecdsa"
"crypto/elliptic"
"crypto/rand" "crypto/rand"
"errors"
"fmt" "fmt"
"io" "io"
"net" "net"
"testing" "testing"
"time" "time"
"github.com/golang/protobuf/proto"
"github.com/slackhq/nebula/test" "github.com/slackhq/nebula/test"
"github.com/stretchr/testify/assert" "github.com/stretchr/testify/assert"
"golang.org/x/crypto/curve25519" "golang.org/x/crypto/curve25519"
"golang.org/x/crypto/ed25519" "golang.org/x/crypto/ed25519"
"google.golang.org/protobuf/proto"
) )
func TestMarshalingNebulaCertificate(t *testing.T) { func TestMarshalingNebulaCertificate(t *testing.T) {
@@ -101,7 +105,49 @@ func TestNebulaCertificate_Sign(t *testing.T) {
pub, priv, err := ed25519.GenerateKey(rand.Reader) pub, priv, err := ed25519.GenerateKey(rand.Reader)
assert.Nil(t, err) assert.Nil(t, err)
assert.False(t, nc.CheckSignature(pub)) assert.False(t, nc.CheckSignature(pub))
assert.Nil(t, nc.Sign(priv)) assert.Nil(t, nc.Sign(Curve_CURVE25519, priv))
assert.True(t, nc.CheckSignature(pub))
_, err = nc.Marshal()
assert.Nil(t, err)
//t.Log("Cert size:", len(b))
}
func TestNebulaCertificate_SignP256(t *testing.T) {
before := time.Now().Add(time.Second * -60).Round(time.Second)
after := time.Now().Add(time.Second * 60).Round(time.Second)
pubKey := []byte("01234567890abcedfghij1234567890ab1234567890abcedfghij1234567890ab")
nc := NebulaCertificate{
Details: NebulaCertificateDetails{
Name: "testing",
Ips: []*net.IPNet{
{IP: net.ParseIP("10.1.1.1"), Mask: net.IPMask(net.ParseIP("255.255.255.0"))},
{IP: net.ParseIP("10.1.1.2"), Mask: net.IPMask(net.ParseIP("255.255.0.0"))},
{IP: net.ParseIP("10.1.1.3"), Mask: net.IPMask(net.ParseIP("255.0.255.0"))},
},
Subnets: []*net.IPNet{
{IP: net.ParseIP("9.1.1.1"), Mask: net.IPMask(net.ParseIP("255.0.255.0"))},
{IP: net.ParseIP("9.1.1.2"), Mask: net.IPMask(net.ParseIP("255.255.255.0"))},
{IP: net.ParseIP("9.1.1.3"), Mask: net.IPMask(net.ParseIP("255.255.0.0"))},
},
Groups: []string{"test-group1", "test-group2", "test-group3"},
NotBefore: before,
NotAfter: after,
PublicKey: pubKey,
IsCA: false,
Curve: Curve_P256,
Issuer: "1234567890abcedfghij1234567890ab",
},
}
priv, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
pub := elliptic.Marshal(elliptic.P256(), priv.PublicKey.X, priv.PublicKey.Y)
rawPriv := priv.D.FillBytes(make([]byte, 32))
assert.Nil(t, err)
assert.False(t, nc.CheckSignature(pub))
assert.Nil(t, nc.Sign(Curve_P256, rawPriv))
assert.True(t, nc.CheckSignature(pub)) assert.True(t, nc.CheckSignature(pub))
_, err = nc.Marshal() _, err = nc.Marshal()
@@ -153,7 +199,7 @@ func TestNebulaCertificate_MarshalJSON(t *testing.T) {
assert.Nil(t, err) assert.Nil(t, err)
assert.Equal( assert.Equal(
t, t,
"{\"details\":{\"groups\":[\"test-group1\",\"test-group2\",\"test-group3\"],\"ips\":[\"10.1.1.1/24\",\"10.1.1.2/16\",\"10.1.1.3/ff00ff00\"],\"isCa\":false,\"issuer\":\"1234567890abcedfghij1234567890ab\",\"name\":\"testing\",\"notAfter\":\"0000-11-30T02:00:00Z\",\"notBefore\":\"0000-11-30T01:00:00Z\",\"publicKey\":\"313233343536373839306162636564666768696a313233343536373839306162\",\"subnets\":[\"9.1.1.1/ff00ff00\",\"9.1.1.2/24\",\"9.1.1.3/16\"]},\"fingerprint\":\"26cb1c30ad7872c804c166b5150fa372f437aa3856b04edb4334b4470ec728e4\",\"signature\":\"313233343536373839306162636564666768696a313233343536373839306162\"}", "{\"details\":{\"curve\":\"CURVE25519\",\"groups\":[\"test-group1\",\"test-group2\",\"test-group3\"],\"ips\":[\"10.1.1.1/24\",\"10.1.1.2/16\",\"10.1.1.3/ff00ff00\"],\"isCa\":false,\"issuer\":\"1234567890abcedfghij1234567890ab\",\"name\":\"testing\",\"notAfter\":\"0000-11-30T02:00:00Z\",\"notBefore\":\"0000-11-30T01:00:00Z\",\"publicKey\":\"313233343536373839306162636564666768696a313233343536373839306162\",\"subnets\":[\"9.1.1.1/ff00ff00\",\"9.1.1.2/24\",\"9.1.1.3/16\"]},\"fingerprint\":\"26cb1c30ad7872c804c166b5150fa372f437aa3856b04edb4334b4470ec728e4\",\"signature\":\"313233343536373839306162636564666768696a313233343536373839306162\"}",
string(b), string(b),
) )
} }
@@ -177,7 +223,7 @@ func TestNebulaCertificate_Verify(t *testing.T) {
v, err := c.Verify(time.Now(), caPool) v, err := c.Verify(time.Now(), caPool)
assert.False(t, v) assert.False(t, v)
assert.EqualError(t, err, "certificate has been blocked") assert.EqualError(t, err, "certificate is in the block list")
caPool.ResetCertBlocklist() caPool.ResetCertBlocklist()
v, err = c.Verify(time.Now(), caPool) v, err = c.Verify(time.Now(), caPool)
@@ -217,6 +263,65 @@ func TestNebulaCertificate_Verify(t *testing.T) {
assert.Nil(t, err) assert.Nil(t, err)
} }
func TestNebulaCertificate_VerifyP256(t *testing.T) {
ca, _, caKey, err := newTestCaCertP256(time.Now(), time.Now().Add(10*time.Minute), []*net.IPNet{}, []*net.IPNet{}, []string{})
assert.Nil(t, err)
c, _, _, err := newTestCert(ca, caKey, time.Now(), time.Now().Add(5*time.Minute), []*net.IPNet{}, []*net.IPNet{}, []string{})
assert.Nil(t, err)
h, err := ca.Sha256Sum()
assert.Nil(t, err)
caPool := NewCAPool()
caPool.CAs[h] = ca
f, err := c.Sha256Sum()
assert.Nil(t, err)
caPool.BlocklistFingerprint(f)
v, err := c.Verify(time.Now(), caPool)
assert.False(t, v)
assert.EqualError(t, err, "certificate is in the block list")
caPool.ResetCertBlocklist()
v, err = c.Verify(time.Now(), caPool)
assert.True(t, v)
assert.Nil(t, err)
v, err = c.Verify(time.Now().Add(time.Hour*1000), caPool)
assert.False(t, v)
assert.EqualError(t, err, "root certificate is expired")
c, _, _, err = newTestCert(ca, caKey, time.Time{}, time.Time{}, []*net.IPNet{}, []*net.IPNet{}, []string{})
assert.Nil(t, err)
v, err = c.Verify(time.Now().Add(time.Minute*6), caPool)
assert.False(t, v)
assert.EqualError(t, err, "certificate is expired")
// Test group assertion
ca, _, caKey, err = newTestCaCertP256(time.Now(), time.Now().Add(10*time.Minute), []*net.IPNet{}, []*net.IPNet{}, []string{"test1", "test2"})
assert.Nil(t, err)
caPem, err := ca.MarshalToPEM()
assert.Nil(t, err)
caPool = NewCAPool()
caPool.AddCACertificate(caPem)
c, _, _, err = newTestCert(ca, caKey, time.Now(), time.Now().Add(5*time.Minute), []*net.IPNet{}, []*net.IPNet{}, []string{"test1", "bad"})
assert.Nil(t, err)
v, err = c.Verify(time.Now(), caPool)
assert.False(t, v)
assert.EqualError(t, err, "certificate contained a group not present on the signing ca: bad")
c, _, _, err = newTestCert(ca, caKey, time.Now(), time.Now().Add(5*time.Minute), []*net.IPNet{}, []*net.IPNet{}, []string{"test1"})
assert.Nil(t, err)
v, err = c.Verify(time.Now(), caPool)
assert.True(t, v)
assert.Nil(t, err)
}
func TestNebulaCertificate_Verify_IPs(t *testing.T) { func TestNebulaCertificate_Verify_IPs(t *testing.T) {
_, caIp1, _ := net.ParseCIDR("10.0.0.0/16") _, caIp1, _ := net.ParseCIDR("10.0.0.0/16")
_, caIp2, _ := net.ParseCIDR("192.168.0.0/24") _, caIp2, _ := net.ParseCIDR("192.168.0.0/24")
@@ -378,20 +483,40 @@ func TestNebulaCertificate_Verify_Subnets(t *testing.T) {
func TestNebulaCertificate_VerifyPrivateKey(t *testing.T) { func TestNebulaCertificate_VerifyPrivateKey(t *testing.T) {
ca, _, caKey, err := newTestCaCert(time.Time{}, time.Time{}, []*net.IPNet{}, []*net.IPNet{}, []string{}) ca, _, caKey, err := newTestCaCert(time.Time{}, time.Time{}, []*net.IPNet{}, []*net.IPNet{}, []string{})
assert.Nil(t, err) assert.Nil(t, err)
err = ca.VerifyPrivateKey(caKey) err = ca.VerifyPrivateKey(Curve_CURVE25519, caKey)
assert.Nil(t, err) assert.Nil(t, err)
_, _, caKey2, err := newTestCaCert(time.Time{}, time.Time{}, []*net.IPNet{}, []*net.IPNet{}, []string{}) _, _, caKey2, err := newTestCaCert(time.Time{}, time.Time{}, []*net.IPNet{}, []*net.IPNet{}, []string{})
assert.Nil(t, err) assert.Nil(t, err)
err = ca.VerifyPrivateKey(caKey2) err = ca.VerifyPrivateKey(Curve_CURVE25519, caKey2)
assert.NotNil(t, err) assert.NotNil(t, err)
c, _, priv, err := newTestCert(ca, caKey, time.Time{}, time.Time{}, []*net.IPNet{}, []*net.IPNet{}, []string{}) c, _, priv, err := newTestCert(ca, caKey, time.Time{}, time.Time{}, []*net.IPNet{}, []*net.IPNet{}, []string{})
err = c.VerifyPrivateKey(priv) err = c.VerifyPrivateKey(Curve_CURVE25519, priv)
assert.Nil(t, err) assert.Nil(t, err)
_, priv2 := x25519Keypair() _, priv2 := x25519Keypair()
err = c.VerifyPrivateKey(priv2) err = c.VerifyPrivateKey(Curve_CURVE25519, priv2)
assert.NotNil(t, err)
}
func TestNebulaCertificate_VerifyPrivateKeyP256(t *testing.T) {
ca, _, caKey, err := newTestCaCertP256(time.Time{}, time.Time{}, []*net.IPNet{}, []*net.IPNet{}, []string{})
assert.Nil(t, err)
err = ca.VerifyPrivateKey(Curve_P256, caKey)
assert.Nil(t, err)
_, _, caKey2, err := newTestCaCertP256(time.Time{}, time.Time{}, []*net.IPNet{}, []*net.IPNet{}, []string{})
assert.Nil(t, err)
err = ca.VerifyPrivateKey(Curve_P256, caKey2)
assert.NotNil(t, err)
c, _, priv, err := newTestCert(ca, caKey, time.Time{}, time.Time{}, []*net.IPNet{}, []*net.IPNet{}, []string{})
err = c.VerifyPrivateKey(Curve_P256, priv)
assert.Nil(t, err)
_, priv2 := p256Keypair()
err = c.VerifyPrivateKey(Curve_P256, priv2)
assert.NotNil(t, err) assert.NotNil(t, err)
} }
@@ -438,6 +563,23 @@ CjkKB2V4cGlyZWQouPmWjQYwufmWjQY6ILCRaoCkJlqHgv5jfDN4lzLHBvDzaQm4
vZxfu144hmgjQAESQG4qlnZi8DncvD/LDZnLgJHOaX1DWCHHEh59epVsC+BNgTie vZxfu144hmgjQAESQG4qlnZi8DncvD/LDZnLgJHOaX1DWCHHEh59epVsC+BNgTie
WH1M9n4O7cFtGlM6sJJOS+rCVVEJ3ABS7+MPdQs= WH1M9n4O7cFtGlM6sJJOS+rCVVEJ3ABS7+MPdQs=
-----END NEBULA CERTIFICATE----- -----END NEBULA CERTIFICATE-----
`
p256 := `
# p256 certificate
-----BEGIN NEBULA CERTIFICATE-----
CmYKEG5lYnVsYSBQMjU2IHRlc3Qo4s+7mgYw4tXrsAc6QQRkaW2jFmllYvN4+/k2
6tctO9sPT3jOx8ES6M1nIqOhpTmZeabF/4rELDqPV4aH5jfJut798DUXql0FlF8H
76gvQAGgBgESRzBFAiEAib0/te6eMiZOKD8gdDeloMTS0wGuX2t0C7TFdUhAQzgC
IBNWYMep3ysx9zCgknfG5dKtwGTaqF++BWKDYdyl34KX
-----END NEBULA CERTIFICATE-----
`
v2 := `
# valid PEM with the V2 header
-----BEGIN NEBULA CERTIFICATE V2-----
CmYKEG5lYnVsYSBQMjU2IHRlc3Qo4s+7mgYw4tXrsAc6QQRkaW2jFmllYvN4+/k2
-----END NEBULA CERTIFICATE V2-----
` `
rootCA := NebulaCertificate{ rootCA := NebulaCertificate{
@@ -452,28 +594,52 @@ WH1M9n4O7cFtGlM6sJJOS+rCVVEJ3ABS7+MPdQs=
}, },
} }
p, err := NewCAPoolFromBytes([]byte(noNewLines)) rootCAP256 := NebulaCertificate{
Details: NebulaCertificateDetails{
Name: "nebula P256 test",
},
}
p, warn, err := NewCAPoolFromBytes([]byte(noNewLines))
assert.Nil(t, err) assert.Nil(t, err)
assert.Nil(t, warn)
assert.Equal(t, p.CAs[string("c9bfaf7ce8e84b2eeda2e27b469f4b9617bde192efd214b68891ecda6ed49522")].Details.Name, rootCA.Details.Name) assert.Equal(t, p.CAs[string("c9bfaf7ce8e84b2eeda2e27b469f4b9617bde192efd214b68891ecda6ed49522")].Details.Name, rootCA.Details.Name)
assert.Equal(t, p.CAs[string("5c9c3f23e7ee7fe97637cbd3a0a5b854154d1d9aaaf7b566a51f4a88f76b64cd")].Details.Name, rootCA01.Details.Name) assert.Equal(t, p.CAs[string("5c9c3f23e7ee7fe97637cbd3a0a5b854154d1d9aaaf7b566a51f4a88f76b64cd")].Details.Name, rootCA01.Details.Name)
pp, err := NewCAPoolFromBytes([]byte(withNewLines)) pp, warn, err := NewCAPoolFromBytes([]byte(withNewLines))
assert.Nil(t, err) assert.Nil(t, err)
assert.Nil(t, warn)
assert.Equal(t, pp.CAs[string("c9bfaf7ce8e84b2eeda2e27b469f4b9617bde192efd214b68891ecda6ed49522")].Details.Name, rootCA.Details.Name) assert.Equal(t, pp.CAs[string("c9bfaf7ce8e84b2eeda2e27b469f4b9617bde192efd214b68891ecda6ed49522")].Details.Name, rootCA.Details.Name)
assert.Equal(t, pp.CAs[string("5c9c3f23e7ee7fe97637cbd3a0a5b854154d1d9aaaf7b566a51f4a88f76b64cd")].Details.Name, rootCA01.Details.Name) assert.Equal(t, pp.CAs[string("5c9c3f23e7ee7fe97637cbd3a0a5b854154d1d9aaaf7b566a51f4a88f76b64cd")].Details.Name, rootCA01.Details.Name)
// expired cert, no valid certs // expired cert, no valid certs
ppp, err := NewCAPoolFromBytes([]byte(expired)) ppp, warn, err := NewCAPoolFromBytes([]byte(expired))
assert.Equal(t, ErrExpired, err) assert.Error(t, err, "no valid CA certificates present")
assert.Equal(t, ppp.CAs[string("152070be6bb19bc9e3bde4c2f0e7d8f4ff5448b4c9856b8eccb314fade0229b0")].Details.Name, "expired") assert.Len(t, warn, 1)
assert.Error(t, warn[0], ErrExpired)
assert.Nil(t, ppp)
// expired cert, with valid certs // expired cert, with valid certs
pppp, err := NewCAPoolFromBytes(append([]byte(expired), noNewLines...)) pppp, warn, err := NewCAPoolFromBytes(append([]byte(expired), noNewLines...))
assert.Equal(t, ErrExpired, err) assert.Len(t, warn, 1)
assert.Nil(t, err)
assert.Error(t, warn[0], ErrExpired)
assert.Equal(t, pppp.CAs[string("c9bfaf7ce8e84b2eeda2e27b469f4b9617bde192efd214b68891ecda6ed49522")].Details.Name, rootCA.Details.Name) assert.Equal(t, pppp.CAs[string("c9bfaf7ce8e84b2eeda2e27b469f4b9617bde192efd214b68891ecda6ed49522")].Details.Name, rootCA.Details.Name)
assert.Equal(t, pppp.CAs[string("5c9c3f23e7ee7fe97637cbd3a0a5b854154d1d9aaaf7b566a51f4a88f76b64cd")].Details.Name, rootCA01.Details.Name) assert.Equal(t, pppp.CAs[string("5c9c3f23e7ee7fe97637cbd3a0a5b854154d1d9aaaf7b566a51f4a88f76b64cd")].Details.Name, rootCA01.Details.Name)
assert.Equal(t, pppp.CAs[string("152070be6bb19bc9e3bde4c2f0e7d8f4ff5448b4c9856b8eccb314fade0229b0")].Details.Name, "expired") assert.Equal(t, pppp.CAs[string("152070be6bb19bc9e3bde4c2f0e7d8f4ff5448b4c9856b8eccb314fade0229b0")].Details.Name, "expired")
assert.Equal(t, len(pppp.CAs), 3) assert.Equal(t, len(pppp.CAs), 3)
ppppp, warn, err := NewCAPoolFromBytes([]byte(p256))
assert.Nil(t, err)
assert.Nil(t, warn)
assert.Equal(t, ppppp.CAs[string("a7938893ec8c4ef769b06d7f425e5e46f7a7f5ffa49c3bcf4a86b608caba9159")].Details.Name, rootCAP256.Details.Name)
assert.Equal(t, len(ppppp.CAs), 1)
pppppp, warn, err := NewCAPoolFromBytes(append([]byte(p256), []byte(v2)...))
assert.Nil(t, err)
assert.True(t, errors.Is(warn[0], ErrInvalidPEMCertificateUnsupported))
assert.Equal(t, pppppp.CAs[string("a7938893ec8c4ef769b06d7f425e5e46f7a7f5ffa49c3bcf4a86b608caba9159")].Details.Name, rootCAP256.Details.Name)
assert.Equal(t, len(pppppp.CAs), 1)
} }
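Outside the test suite, the three-value return means callers can surface expired or unsupported CA blocks without discarding the whole pool. A minimal sketch, assuming pemBytes holds the PEM-encoded CA bundle and log is whatever logger the caller already has (both names are illustrative, not part of this change):
pool, warnings, err := cert.NewCAPoolFromBytes(pemBytes)
if err != nil {
	return err // no usable CA certificates at all
}
for _, w := range warnings {
	// expired certs and unsupported blocks (e.g. the V2 header) arrive here instead of failing the pool
	log.Printf("ca pool warning: %v", w)
}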
func appendByteSlices(b ...[]byte) []byte { func appendByteSlices(b ...[]byte) []byte {
@@ -529,11 +695,16 @@ bzBEr00kERQxxTzTsH8cpYEgRoipvmExvg8WP8NdAJEYJosB
assert.EqualError(t, err, "input did not contain a valid PEM encoded block") assert.EqualError(t, err, "input did not contain a valid PEM encoded block")
} }
func TestUnmarshalEd25519PrivateKey(t *testing.T) { func TestUnmarshalSigningPrivateKey(t *testing.T) {
privKey := []byte(`# A good key privKey := []byte(`# A good key
-----BEGIN NEBULA ED25519 PRIVATE KEY----- -----BEGIN NEBULA ED25519 PRIVATE KEY-----
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==
-----END NEBULA ED25519 PRIVATE KEY----- -----END NEBULA ED25519 PRIVATE KEY-----
`)
privP256Key := []byte(`# A good key
-----BEGIN NEBULA ECDSA P256 PRIVATE KEY-----
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
-----END NEBULA ECDSA P256 PRIVATE KEY-----
`) `)
shortKey := []byte(`# A short key shortKey := []byte(`# A short key
-----BEGIN NEBULA ED25519 PRIVATE KEY----- -----BEGIN NEBULA ED25519 PRIVATE KEY-----
@@ -550,39 +721,139 @@ AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==
-END NEBULA ED25519 PRIVATE KEY-----`) -END NEBULA ED25519 PRIVATE KEY-----`)
keyBundle := appendByteSlices(privKey, shortKey, invalidBanner, invalidPem) keyBundle := appendByteSlices(privKey, privP256Key, shortKey, invalidBanner, invalidPem)
// Success test case // Success test case
k, rest, err := UnmarshalEd25519PrivateKey(keyBundle) k, rest, curve, err := UnmarshalSigningPrivateKey(keyBundle)
assert.Len(t, k, 64) assert.Len(t, k, 64)
assert.Equal(t, rest, appendByteSlices(privP256Key, shortKey, invalidBanner, invalidPem))
assert.Equal(t, Curve_CURVE25519, curve)
assert.Nil(t, err)
// Success test case
k, rest, curve, err = UnmarshalSigningPrivateKey(rest)
assert.Len(t, k, 32)
assert.Equal(t, rest, appendByteSlices(shortKey, invalidBanner, invalidPem)) assert.Equal(t, rest, appendByteSlices(shortKey, invalidBanner, invalidPem))
assert.Equal(t, Curve_P256, curve)
assert.Nil(t, err) assert.Nil(t, err)
// Fail due to short key // Fail due to short key
k, rest, err = UnmarshalEd25519PrivateKey(rest) k, rest, curve, err = UnmarshalSigningPrivateKey(rest)
assert.Nil(t, k) assert.Nil(t, k)
assert.Equal(t, rest, appendByteSlices(invalidBanner, invalidPem)) assert.Equal(t, rest, appendByteSlices(invalidBanner, invalidPem))
assert.EqualError(t, err, "key was not 64 bytes, is invalid ed25519 private key") assert.EqualError(t, err, "key was not 64 bytes, is invalid Ed25519 private key")
// Fail due to invalid banner // Fail due to invalid banner
k, rest, err = UnmarshalEd25519PrivateKey(rest) k, rest, curve, err = UnmarshalSigningPrivateKey(rest)
assert.Nil(t, k) assert.Nil(t, k)
assert.Equal(t, rest, invalidPem) assert.Equal(t, rest, invalidPem)
assert.EqualError(t, err, "bytes did not contain a proper nebula Ed25519 private key banner") assert.EqualError(t, err, "bytes did not contain a proper nebula Ed25519/ECDSA private key banner")
// Fail due to invalid PEM format, because // Fail due to invalid PEM format, because
// it's missing the requisite pre-encapsulation boundary. // it's missing the requisite pre-encapsulation boundary.
k, rest, err = UnmarshalEd25519PrivateKey(rest) k, rest, curve, err = UnmarshalSigningPrivateKey(rest)
assert.Nil(t, k) assert.Nil(t, k)
assert.Equal(t, rest, invalidPem) assert.Equal(t, rest, invalidPem)
assert.EqualError(t, err, "input did not contain a valid PEM encoded block") assert.EqualError(t, err, "input did not contain a valid PEM encoded block")
} }
func TestUnmarshalX25519PrivateKey(t *testing.T) { func TestDecryptAndUnmarshalSigningPrivateKey(t *testing.T) {
passphrase := []byte("DO NOT USE THIS KEY")
privKey := []byte(`# A good key
-----BEGIN NEBULA ED25519 ENCRYPTED PRIVATE KEY-----
CjwKC0FFUy0yNTYtR0NNEi0IExCAgIABGAEgBCognnjujd67Vsv99p22wfAjQaDT
oCMW1mdjkU3gACKNW4MSXOWR9Sts4C81yk1RUku2gvGKs3TB9LYoklLsIizSYOLl
+Vs//O1T0I1Xbml2XBAROsb/VSoDln/6LMqR4B6fn6B3GOsLBBqRI8daDl9lRMPB
qrlJ69wer3ZUHFXA
-----END NEBULA ED25519 ENCRYPTED PRIVATE KEY-----
`)
shortKey := []byte(`# A key which, once decrypted, is too short
-----BEGIN NEBULA ED25519 ENCRYPTED PRIVATE KEY-----
CjwKC0FFUy0yNTYtR0NNEi0IExCAgIABGAEgBCoga5h8owMEBWRSMMJKzuUvWce7
k0qlBkQmCxiuLh80MuASW70YcKt8jeEIS2axo2V6zAKA9TSMcCsJW1kDDXEtL/xe
GLF5T7sDl5COp4LU3pGxpV+KoeQ/S3gQCAAcnaOtnJQX+aSDnbO3jCHyP7U9CHbs
rQr3bdH3Oy/WiYU=
-----END NEBULA ED25519 ENCRYPTED PRIVATE KEY-----
`)
invalidBanner := []byte(`# Invalid banner (not encrypted)
-----BEGIN NEBULA ED25519 PRIVATE KEY-----
bWRp2CTVFhW9HD/qCd28ltDgK3w8VXSeaEYczDWos8sMUBqDb9jP3+NYwcS4lURG
XgLvodMXZJuaFPssp+WwtA==
-----END NEBULA ED25519 PRIVATE KEY-----
`)
invalidPem := []byte(`# Not a valid PEM format
-BEGIN NEBULA ED25519 ENCRYPTED PRIVATE KEY-----
CjwKC0FFUy0yNTYtR0NNEi0IExCAgIABGAEgBCognnjujd67Vsv99p22wfAjQaDT
oCMW1mdjkU3gACKNW4MSXOWR9Sts4C81yk1RUku2gvGKs3TB9LYoklLsIizSYOLl
+Vs//O1T0I1Xbml2XBAROsb/VSoDln/6LMqR4B6fn6B3GOsLBBqRI8daDl9lRMPB
qrlJ69wer3ZUHFXA
-END NEBULA ED25519 ENCRYPTED PRIVATE KEY-----
`)
keyBundle := appendByteSlices(privKey, shortKey, invalidBanner, invalidPem)
// Success test case
curve, k, rest, err := DecryptAndUnmarshalSigningPrivateKey(passphrase, keyBundle)
assert.Nil(t, err)
assert.Equal(t, Curve_CURVE25519, curve)
assert.Len(t, k, 64)
assert.Equal(t, rest, appendByteSlices(shortKey, invalidBanner, invalidPem))
// Fail due to short key
curve, k, rest, err = DecryptAndUnmarshalSigningPrivateKey(passphrase, rest)
assert.EqualError(t, err, "key was not 64 bytes, is invalid ed25519 private key")
assert.Nil(t, k)
assert.Equal(t, rest, appendByteSlices(invalidBanner, invalidPem))
// Fail due to invalid banner
curve, k, rest, err = DecryptAndUnmarshalSigningPrivateKey(passphrase, rest)
assert.EqualError(t, err, "bytes did not contain a proper nebula encrypted Ed25519/ECDSA private key banner")
assert.Nil(t, k)
assert.Equal(t, rest, invalidPem)
// Fail due to invalid PEM format, because
// it's missing the requisite pre-encapsulation boundary.
curve, k, rest, err = DecryptAndUnmarshalSigningPrivateKey(passphrase, rest)
assert.EqualError(t, err, "input did not contain a valid PEM encoded block")
assert.Nil(t, k)
assert.Equal(t, rest, invalidPem)
// Fail due to invalid passphrase
curve, k, rest, err = DecryptAndUnmarshalSigningPrivateKey([]byte("invalid passphrase"), privKey)
assert.EqualError(t, err, "invalid passphrase or corrupt private key")
assert.Nil(t, k)
assert.Equal(t, rest, []byte{})
}
func TestEncryptAndMarshalSigningPrivateKey(t *testing.T) {
// Having proved that decryption works correctly above, we can test that the
// encryption function produces a value which can be decrypted
passphrase := []byte("passphrase")
bytes := []byte("AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA")
kdfParams := NewArgon2Parameters(64*1024, 4, 3)
key, err := EncryptAndMarshalSigningPrivateKey(Curve_CURVE25519, bytes, passphrase, kdfParams)
assert.Nil(t, err)
// Verify the "key" can be decrypted successfully
curve, k, rest, err := DecryptAndUnmarshalSigningPrivateKey(passphrase, key)
assert.Len(t, k, 64)
assert.Equal(t, Curve_CURVE25519, curve)
assert.Equal(t, rest, []byte{})
assert.Nil(t, err)
// EncryptAndMarshalSigningPrivateKey does not create any errors itself
}
func TestUnmarshalPrivateKey(t *testing.T) {
privKey := []byte(`# A good key privKey := []byte(`# A good key
-----BEGIN NEBULA X25519 PRIVATE KEY----- -----BEGIN NEBULA X25519 PRIVATE KEY-----
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA= AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
-----END NEBULA X25519 PRIVATE KEY----- -----END NEBULA X25519 PRIVATE KEY-----
`)
privP256Key := []byte(`# A good key
-----BEGIN NEBULA P256 PRIVATE KEY-----
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
-----END NEBULA P256 PRIVATE KEY-----
`) `)
shortKey := []byte(`# A short key shortKey := []byte(`# A short key
-----BEGIN NEBULA X25519 PRIVATE KEY----- -----BEGIN NEBULA X25519 PRIVATE KEY-----
@@ -599,29 +870,37 @@ AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA= AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
-END NEBULA X25519 PRIVATE KEY-----`) -END NEBULA X25519 PRIVATE KEY-----`)
keyBundle := appendByteSlices(privKey, shortKey, invalidBanner, invalidPem) keyBundle := appendByteSlices(privKey, privP256Key, shortKey, invalidBanner, invalidPem)
// Success test case // Success test case
k, rest, err := UnmarshalX25519PrivateKey(keyBundle) k, rest, curve, err := UnmarshalPrivateKey(keyBundle)
assert.Len(t, k, 32)
assert.Equal(t, rest, appendByteSlices(privP256Key, shortKey, invalidBanner, invalidPem))
assert.Equal(t, Curve_CURVE25519, curve)
assert.Nil(t, err)
// Success test case
k, rest, curve, err = UnmarshalPrivateKey(rest)
assert.Len(t, k, 32) assert.Len(t, k, 32)
assert.Equal(t, rest, appendByteSlices(shortKey, invalidBanner, invalidPem)) assert.Equal(t, rest, appendByteSlices(shortKey, invalidBanner, invalidPem))
assert.Equal(t, Curve_P256, curve)
assert.Nil(t, err) assert.Nil(t, err)
// Fail due to short key // Fail due to short key
k, rest, err = UnmarshalX25519PrivateKey(rest) k, rest, curve, err = UnmarshalPrivateKey(rest)
assert.Nil(t, k) assert.Nil(t, k)
assert.Equal(t, rest, appendByteSlices(invalidBanner, invalidPem)) assert.Equal(t, rest, appendByteSlices(invalidBanner, invalidPem))
assert.EqualError(t, err, "key was not 32 bytes, is invalid X25519 private key") assert.EqualError(t, err, "key was not 32 bytes, is invalid CURVE25519 private key")
// Fail due to invalid banner // Fail due to invalid banner
k, rest, err = UnmarshalX25519PrivateKey(rest) k, rest, curve, err = UnmarshalPrivateKey(rest)
assert.Nil(t, k) assert.Nil(t, k)
assert.Equal(t, rest, invalidPem) assert.Equal(t, rest, invalidPem)
assert.EqualError(t, err, "bytes did not contain a proper nebula X25519 private key banner") assert.EqualError(t, err, "bytes did not contain a proper nebula private key banner")
// Fail due to invalid PEM format, because // Fail due to invalid PEM format, because
// it's missing the requisite pre-encapsulation boundary. // it's missing the requisite pre-encapsulation boundary.
k, rest, err = UnmarshalX25519PrivateKey(rest) k, rest, curve, err = UnmarshalPrivateKey(rest)
assert.Nil(t, k) assert.Nil(t, k)
assert.Equal(t, rest, invalidPem) assert.Equal(t, rest, invalidPem)
assert.EqualError(t, err, "input did not contain a valid PEM encoded block") assert.EqualError(t, err, "input did not contain a valid PEM encoded block")
@@ -681,6 +960,12 @@ func TestUnmarshalX25519PublicKey(t *testing.T) {
-----BEGIN NEBULA X25519 PUBLIC KEY----- -----BEGIN NEBULA X25519 PUBLIC KEY-----
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA= AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
-----END NEBULA X25519 PUBLIC KEY----- -----END NEBULA X25519 PUBLIC KEY-----
`)
pubP256Key := []byte(`# A good key
-----BEGIN NEBULA P256 PUBLIC KEY-----
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAA=
-----END NEBULA P256 PUBLIC KEY-----
`) `)
shortKey := []byte(`# A short key shortKey := []byte(`# A short key
-----BEGIN NEBULA X25519 PUBLIC KEY----- -----BEGIN NEBULA X25519 PUBLIC KEY-----
@@ -697,29 +982,37 @@ AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA= AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
-END NEBULA X25519 PUBLIC KEY-----`) -END NEBULA X25519 PUBLIC KEY-----`)
keyBundle := appendByteSlices(pubKey, shortKey, invalidBanner, invalidPem) keyBundle := appendByteSlices(pubKey, pubP256Key, shortKey, invalidBanner, invalidPem)
// Success test case // Success test case
k, rest, err := UnmarshalX25519PublicKey(keyBundle) k, rest, curve, err := UnmarshalPublicKey(keyBundle)
assert.Equal(t, len(k), 32) assert.Equal(t, len(k), 32)
assert.Nil(t, err) assert.Nil(t, err)
assert.Equal(t, rest, appendByteSlices(pubP256Key, shortKey, invalidBanner, invalidPem))
assert.Equal(t, Curve_CURVE25519, curve)
// Success test case
k, rest, curve, err = UnmarshalPublicKey(rest)
assert.Equal(t, len(k), 65)
assert.Nil(t, err)
assert.Equal(t, rest, appendByteSlices(shortKey, invalidBanner, invalidPem)) assert.Equal(t, rest, appendByteSlices(shortKey, invalidBanner, invalidPem))
assert.Equal(t, Curve_P256, curve)
// Fail due to short key // Fail due to short key
k, rest, err = UnmarshalX25519PublicKey(rest) k, rest, curve, err = UnmarshalPublicKey(rest)
assert.Nil(t, k) assert.Nil(t, k)
assert.Equal(t, rest, appendByteSlices(invalidBanner, invalidPem)) assert.Equal(t, rest, appendByteSlices(invalidBanner, invalidPem))
assert.EqualError(t, err, "key was not 32 bytes, is invalid X25519 public key") assert.EqualError(t, err, "key was not 32 bytes, is invalid CURVE25519 public key")
// Fail due to invalid banner // Fail due to invalid banner
k, rest, err = UnmarshalX25519PublicKey(rest) k, rest, curve, err = UnmarshalPublicKey(rest)
assert.Nil(t, k) assert.Nil(t, k)
assert.EqualError(t, err, "bytes did not contain a proper nebula X25519 public key banner") assert.EqualError(t, err, "bytes did not contain a proper nebula public key banner")
assert.Equal(t, rest, invalidPem) assert.Equal(t, rest, invalidPem)
// Fail due to invalid PEM format, because // Fail due to invalid PEM format, because
// it's missing the requisite pre-encapsulation boundary. // it's missing the requisite pre-encapsulation boundary.
k, rest, err = UnmarshalX25519PublicKey(rest) k, rest, curve, err = UnmarshalPublicKey(rest)
assert.Nil(t, k) assert.Nil(t, k)
assert.Equal(t, rest, invalidPem) assert.Equal(t, rest, invalidPem)
assert.EqualError(t, err, "input did not contain a valid PEM encoded block") assert.EqualError(t, err, "input did not contain a valid PEM encoded block")
@@ -816,13 +1109,56 @@ func newTestCaCert(before, after time.Time, ips, subnets []*net.IPNet, groups []
nc.Details.Groups = groups nc.Details.Groups = groups
} }
err = nc.Sign(priv) err = nc.Sign(Curve_CURVE25519, priv)
if err != nil { if err != nil {
return nil, nil, nil, err return nil, nil, nil, err
} }
return nc, pub, priv, nil return nc, pub, priv, nil
} }
func newTestCaCertP256(before, after time.Time, ips, subnets []*net.IPNet, groups []string) (*NebulaCertificate, []byte, []byte, error) {
priv, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
pub := elliptic.Marshal(elliptic.P256(), priv.PublicKey.X, priv.PublicKey.Y)
rawPriv := priv.D.FillBytes(make([]byte, 32))
if before.IsZero() {
before = time.Now().Add(time.Second * -60).Round(time.Second)
}
if after.IsZero() {
after = time.Now().Add(time.Second * 60).Round(time.Second)
}
nc := &NebulaCertificate{
Details: NebulaCertificateDetails{
Name: "test ca",
NotBefore: time.Unix(before.Unix(), 0),
NotAfter: time.Unix(after.Unix(), 0),
PublicKey: pub,
IsCA: true,
Curve: Curve_P256,
InvertedGroups: make(map[string]struct{}),
},
}
if len(ips) > 0 {
nc.Details.Ips = ips
}
if len(subnets) > 0 {
nc.Details.Subnets = subnets
}
if len(groups) > 0 {
nc.Details.Groups = groups
}
err = nc.Sign(Curve_P256, rawPriv)
if err != nil {
return nil, nil, nil, err
}
return nc, pub, rawPriv, nil
}
func newTestCert(ca *NebulaCertificate, key []byte, before, after time.Time, ips, subnets []*net.IPNet, groups []string) (*NebulaCertificate, []byte, []byte, error) { func newTestCert(ca *NebulaCertificate, key []byte, before, after time.Time, ips, subnets []*net.IPNet, groups []string) (*NebulaCertificate, []byte, []byte, error) {
issuer, err := ca.Sha256Sum() issuer, err := ca.Sha256Sum()
if err != nil { if err != nil {
@@ -856,7 +1192,16 @@ func newTestCert(ca *NebulaCertificate, key []byte, before, after time.Time, ips
} }
} }
pub, rawPriv := x25519Keypair() var pub, rawPriv []byte
switch ca.Details.Curve {
case Curve_CURVE25519:
pub, rawPriv = x25519Keypair()
case Curve_P256:
pub, rawPriv = p256Keypair()
default:
return nil, nil, nil, fmt.Errorf("unknown curve: %v", ca.Details.Curve)
}
nc := &NebulaCertificate{ nc := &NebulaCertificate{
Details: NebulaCertificateDetails{ Details: NebulaCertificateDetails{
@@ -868,12 +1213,13 @@ func newTestCert(ca *NebulaCertificate, key []byte, before, after time.Time, ips
NotAfter: time.Unix(after.Unix(), 0), NotAfter: time.Unix(after.Unix(), 0),
PublicKey: pub, PublicKey: pub,
IsCA: false, IsCA: false,
Curve: ca.Details.Curve,
Issuer: issuer, Issuer: issuer,
InvertedGroups: make(map[string]struct{}), InvertedGroups: make(map[string]struct{}),
}, },
} }
err = nc.Sign(key) err = nc.Sign(ca.Details.Curve, key)
if err != nil { if err != nil {
return nil, nil, nil, err return nil, nil, nil, err
} }
@@ -894,3 +1240,12 @@ func x25519Keypair() ([]byte, []byte) {
return pubkey, privkey return pubkey, privkey
} }
func p256Keypair() ([]byte, []byte) {
privkey, err := ecdh.P256().GenerateKey(rand.Reader)
if err != nil {
panic(err)
}
pubkey := privkey.PublicKey()
return pubkey.Bytes(), privkey.Bytes()
}
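Since certificates now carry their curve, a caller checking that a key matches a loaded certificate passes the certificate's own curve through. A minimal sketch, assuming c is a *cert.NebulaCertificate and rawPriv the candidate private key bytes, both obtained elsewhere:
if err := c.VerifyPrivateKey(c.Details.Curve, rawPriv); err != nil {
	return fmt.Errorf("private key does not match certificate: %w", err)
}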

cert/crypto.go Normal file (+143)

@@ -0,0 +1,143 @@
package cert
import (
"crypto/aes"
"crypto/cipher"
"crypto/rand"
"fmt"
"io"
"golang.org/x/crypto/argon2"
)
// KDF factors
type Argon2Parameters struct {
version rune
Memory uint32 // KiB
Parallelism uint8
Iterations uint32
salt []byte
}
// Returns a new Argon2Parameters object with current version set
func NewArgon2Parameters(memory uint32, parallelism uint8, iterations uint32) *Argon2Parameters {
return &Argon2Parameters{
version: argon2.Version,
Memory: memory, // KiB
Parallelism: parallelism,
Iterations: iterations,
}
}
// Encrypts data using AES-256-GCM and the Argon2id key derivation function
func aes256Encrypt(passphrase []byte, kdfParams *Argon2Parameters, data []byte) ([]byte, error) {
key, err := aes256DeriveKey(passphrase, kdfParams)
if err != nil {
return nil, err
}
// this should never happen, but since this dictates how our calls into the
// aes package behave and could be catastrophic, let's sanity check this
if len(key) != 32 {
return nil, fmt.Errorf("invalid AES-256 key length (%d) - cowardly refusing to encrypt", len(key))
}
block, err := aes.NewCipher(key)
if err != nil {
return nil, err
}
gcm, err := cipher.NewGCM(block)
if err != nil {
return nil, err
}
nonce := make([]byte, gcm.NonceSize())
if _, err := io.ReadFull(rand.Reader, nonce); err != nil {
return nil, err
}
ciphertext := gcm.Seal(nil, nonce, data, nil)
blob := joinNonceCiphertext(nonce, ciphertext)
return blob, nil
}
// Decrypts data using AES-256-GCM and the Argon2id key derivation function
// Expects the data to include an Argon2id parameter string before the encrypted data
func aes256Decrypt(passphrase []byte, kdfParams *Argon2Parameters, data []byte) ([]byte, error) {
key, err := aes256DeriveKey(passphrase, kdfParams)
if err != nil {
return nil, err
}
block, err := aes.NewCipher(key)
if err != nil {
return nil, err
}
gcm, err := cipher.NewGCM(block)
if err != nil {
return nil, err
}
nonce, ciphertext, err := splitNonceCiphertext(data, gcm.NonceSize())
if err != nil {
return nil, err
}
plaintext, err := gcm.Open(nil, nonce, ciphertext, nil)
if err != nil {
return nil, fmt.Errorf("invalid passphrase or corrupt private key")
}
return plaintext, nil
}
func aes256DeriveKey(passphrase []byte, params *Argon2Parameters) ([]byte, error) {
if params.salt == nil {
params.salt = make([]byte, 32)
if _, err := rand.Read(params.salt); err != nil {
return nil, err
}
}
// keySize of 32 bytes will result in AES-256 encryption
key, err := deriveKey(passphrase, 32, params)
if err != nil {
return nil, err
}
return key, nil
}
// Derives a key from a passphrase using Argon2id
func deriveKey(passphrase []byte, keySize uint32, params *Argon2Parameters) ([]byte, error) {
if params.version != argon2.Version {
return nil, fmt.Errorf("incompatible Argon2 version: %d", params.version)
}
if params.salt == nil {
return nil, fmt.Errorf("salt must be set in argon2Parameters")
} else if len(params.salt) < 16 {
return nil, fmt.Errorf("salt must be at least 128 bits")
}
key := argon2.IDKey(passphrase, params.salt, params.Iterations, params.Memory, params.Parallelism, keySize)
return key, nil
}
// Prepends nonce to ciphertext
func joinNonceCiphertext(nonce []byte, ciphertext []byte) []byte {
return append(nonce, ciphertext...)
}
// Splits nonce from ciphertext
func splitNonceCiphertext(blob []byte, nonceSize int) ([]byte, []byte, error) {
if len(blob) <= nonceSize {
return nil, nil, fmt.Errorf("invalid ciphertext blob - blob shorter than nonce length")
}
return blob[:nonceSize], blob[nonceSize:], nil
}
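These helpers back the exported encrypt/decrypt pair exercised in the tests above. A minimal roundtrip sketch, assuming rawPriv is a 64-byte Ed25519 signing key and passphrase was collected from the user; the variable names are illustrative:
kdf := cert.NewArgon2Parameters(64*1024, 4, 3) // 64 MiB, 4 lanes, 3 passes
pemBytes, err := cert.EncryptAndMarshalSigningPrivateKey(cert.Curve_CURVE25519, rawPriv, passphrase, kdf)
if err != nil {
	return err
}
curve, key, _, err := cert.DecryptAndUnmarshalSigningPrivateKey(passphrase, pemBytes)
if err != nil {
	return err // wrong passphrase or corrupt blob
}
// curve == cert.Curve_CURVE25519 and key holds the original 64-byte signing key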

cert/crypto_test.go Normal file (+25)

@@ -0,0 +1,25 @@
package cert
import (
"testing"
"github.com/stretchr/testify/assert"
"golang.org/x/crypto/argon2"
)
func TestNewArgon2Parameters(t *testing.T) {
p := NewArgon2Parameters(64*1024, 4, 3)
assert.EqualValues(t, &Argon2Parameters{
version: argon2.Version,
Memory: 64 * 1024,
Parallelism: 4,
Iterations: 3,
}, p)
p = NewArgon2Parameters(2*1024*1024, 2, 1)
assert.EqualValues(t, &Argon2Parameters{
version: argon2.Version,
Memory: 2 * 1024 * 1024,
Parallelism: 2,
Iterations: 1,
}, p)
}


@@ -1,9 +1,15 @@
package cert package cert
import "errors" import (
"errors"
)
var ( var (
ErrRootExpired = errors.New("root certificate is expired")
ErrExpired = errors.New("certificate is expired") ErrExpired = errors.New("certificate is expired")
ErrNotCA = errors.New("certificate is not a CA") ErrNotCA = errors.New("certificate is not a CA")
ErrNotSelfSigned = errors.New("certificate is not self-signed") ErrNotSelfSigned = errors.New("certificate is not self-signed")
ErrBlockListed = errors.New("certificate is in the block list")
ErrSignatureMismatch = errors.New("certificate signature did not match")
ErrInvalidPEMCertificateUnsupported = errors.New("bytes contain an unsupported certificate format")
) )
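The new sentinel values let callers branch on the verification failure mode with errors.Is. A brief sketch, assuming err came from a certificate verification path elsewhere in the caller's code:
switch {
case errors.Is(err, cert.ErrRootExpired):
	// the signing CA itself has expired
case errors.Is(err, cert.ErrBlockListed):
	// the certificate fingerprint is on the block list
case errors.Is(err, cert.ErrSignatureMismatch):
	// the certificate was not signed by the expected CA
}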


@@ -1,10 +0,0 @@
package cidr
import "net"
// Parse is a convenience function that returns only the IPNet
// This function ignores errors since it is primarily a test helper; the result could be nil
func Parse(s string) *net.IPNet {
_, c, _ := net.ParseCIDR(s)
return c
}


@@ -1,145 +0,0 @@
package cidr
import (
"net"
"github.com/slackhq/nebula/iputil"
)
type Node struct {
left *Node
right *Node
parent *Node
value interface{}
}
type Tree4 struct {
root *Node
}
const (
startbit = iputil.VpnIp(0x80000000)
)
func NewTree4() *Tree4 {
tree := new(Tree4)
tree.root = &Node{}
return tree
}
func (tree *Tree4) AddCIDR(cidr *net.IPNet, val interface{}) {
bit := startbit
node := tree.root
next := tree.root
ip := iputil.Ip2VpnIp(cidr.IP)
mask := iputil.Ip2VpnIp(cidr.Mask)
// Find our last ancestor in the tree
for bit&mask != 0 {
if ip&bit != 0 {
next = node.right
} else {
next = node.left
}
if next == nil {
break
}
bit = bit >> 1
node = next
}
// We already have this range so update the value
if next != nil {
node.value = val
return
}
// Build up the rest of the tree we don't already have
for bit&mask != 0 {
next = &Node{}
next.parent = node
if ip&bit != 0 {
node.right = next
} else {
node.left = next
}
bit >>= 1
node = next
}
// Final node marks our cidr, set the value
node.value = val
}
// Finds the first match, which may be the least specific
func (tree *Tree4) Contains(ip iputil.VpnIp) (value interface{}) {
bit := startbit
node := tree.root
for node != nil {
if node.value != nil {
return node.value
}
if ip&bit != 0 {
node = node.right
} else {
node = node.left
}
bit >>= 1
}
return value
}
// Finds the most specific match
func (tree *Tree4) MostSpecificContains(ip iputil.VpnIp) (value interface{}) {
bit := startbit
node := tree.root
for node != nil {
if node.value != nil {
value = node.value
}
if ip&bit != 0 {
node = node.right
} else {
node = node.left
}
bit >>= 1
}
return value
}
// Finds the most specific match
func (tree *Tree4) Match(ip iputil.VpnIp) (value interface{}) {
bit := startbit
node := tree.root
lastNode := node
for node != nil {
lastNode = node
if ip&bit != 0 {
node = node.right
} else {
node = node.left
}
bit >>= 1
}
if bit == 0 && lastNode != nil {
value = lastNode.value
}
return value
}


@@ -1,153 +0,0 @@
package cidr
import (
"net"
"testing"
"github.com/slackhq/nebula/iputil"
"github.com/stretchr/testify/assert"
)
func TestCIDRTree_Contains(t *testing.T) {
tree := NewTree4()
tree.AddCIDR(Parse("1.0.0.0/8"), "1")
tree.AddCIDR(Parse("2.1.0.0/16"), "2")
tree.AddCIDR(Parse("3.1.1.0/24"), "3")
tree.AddCIDR(Parse("4.1.1.0/24"), "4a")
tree.AddCIDR(Parse("4.1.1.1/32"), "4b")
tree.AddCIDR(Parse("4.1.2.1/32"), "4c")
tree.AddCIDR(Parse("254.0.0.0/4"), "5")
tests := []struct {
Result interface{}
IP string
}{
{"1", "1.0.0.0"},
{"1", "1.255.255.255"},
{"2", "2.1.0.0"},
{"2", "2.1.255.255"},
{"3", "3.1.1.0"},
{"3", "3.1.1.255"},
{"4a", "4.1.1.255"},
{"4a", "4.1.1.1"},
{"5", "240.0.0.0"},
{"5", "255.255.255.255"},
{nil, "239.0.0.0"},
{nil, "4.1.2.2"},
}
for _, tt := range tests {
assert.Equal(t, tt.Result, tree.Contains(iputil.Ip2VpnIp(net.ParseIP(tt.IP))))
}
tree = NewTree4()
tree.AddCIDR(Parse("1.1.1.1/0"), "cool")
assert.Equal(t, "cool", tree.Contains(iputil.Ip2VpnIp(net.ParseIP("0.0.0.0"))))
assert.Equal(t, "cool", tree.Contains(iputil.Ip2VpnIp(net.ParseIP("255.255.255.255"))))
}
func TestCIDRTree_MostSpecificContains(t *testing.T) {
tree := NewTree4()
tree.AddCIDR(Parse("1.0.0.0/8"), "1")
tree.AddCIDR(Parse("2.1.0.0/16"), "2")
tree.AddCIDR(Parse("3.1.1.0/24"), "3")
tree.AddCIDR(Parse("4.1.1.0/24"), "4a")
tree.AddCIDR(Parse("4.1.1.0/30"), "4b")
tree.AddCIDR(Parse("4.1.1.1/32"), "4c")
tree.AddCIDR(Parse("254.0.0.0/4"), "5")
tests := []struct {
Result interface{}
IP string
}{
{"1", "1.0.0.0"},
{"1", "1.255.255.255"},
{"2", "2.1.0.0"},
{"2", "2.1.255.255"},
{"3", "3.1.1.0"},
{"3", "3.1.1.255"},
{"4a", "4.1.1.255"},
{"4b", "4.1.1.2"},
{"4c", "4.1.1.1"},
{"5", "240.0.0.0"},
{"5", "255.255.255.255"},
{nil, "239.0.0.0"},
{nil, "4.1.2.2"},
}
for _, tt := range tests {
assert.Equal(t, tt.Result, tree.MostSpecificContains(iputil.Ip2VpnIp(net.ParseIP(tt.IP))))
}
tree = NewTree4()
tree.AddCIDR(Parse("1.1.1.1/0"), "cool")
assert.Equal(t, "cool", tree.MostSpecificContains(iputil.Ip2VpnIp(net.ParseIP("0.0.0.0"))))
assert.Equal(t, "cool", tree.MostSpecificContains(iputil.Ip2VpnIp(net.ParseIP("255.255.255.255"))))
}
func TestCIDRTree_Match(t *testing.T) {
tree := NewTree4()
tree.AddCIDR(Parse("4.1.1.0/32"), "1a")
tree.AddCIDR(Parse("4.1.1.1/32"), "1b")
tests := []struct {
Result interface{}
IP string
}{
{"1a", "4.1.1.0"},
{"1b", "4.1.1.1"},
}
for _, tt := range tests {
assert.Equal(t, tt.Result, tree.Match(iputil.Ip2VpnIp(net.ParseIP(tt.IP))))
}
tree = NewTree4()
tree.AddCIDR(Parse("1.1.1.1/0"), "cool")
assert.Equal(t, "cool", tree.Contains(iputil.Ip2VpnIp(net.ParseIP("0.0.0.0"))))
assert.Equal(t, "cool", tree.Contains(iputil.Ip2VpnIp(net.ParseIP("255.255.255.255"))))
}
func BenchmarkCIDRTree_Contains(b *testing.B) {
tree := NewTree4()
tree.AddCIDR(Parse("1.1.0.0/16"), "1")
tree.AddCIDR(Parse("1.2.1.1/32"), "1")
tree.AddCIDR(Parse("192.2.1.1/32"), "1")
tree.AddCIDR(Parse("172.2.1.1/32"), "1")
ip := iputil.Ip2VpnIp(net.ParseIP("1.2.1.1"))
b.Run("found", func(b *testing.B) {
for i := 0; i < b.N; i++ {
tree.Contains(ip)
}
})
ip = iputil.Ip2VpnIp(net.ParseIP("1.2.1.255"))
b.Run("not found", func(b *testing.B) {
for i := 0; i < b.N; i++ {
tree.Contains(ip)
}
})
}
func BenchmarkCIDRTree_Match(b *testing.B) {
tree := NewTree4()
tree.AddCIDR(Parse("1.1.0.0/16"), "1")
tree.AddCIDR(Parse("1.2.1.1/32"), "1")
tree.AddCIDR(Parse("192.2.1.1/32"), "1")
tree.AddCIDR(Parse("172.2.1.1/32"), "1")
ip := iputil.Ip2VpnIp(net.ParseIP("1.2.1.1"))
b.Run("found", func(b *testing.B) {
for i := 0; i < b.N; i++ {
tree.Match(ip)
}
})
ip = iputil.Ip2VpnIp(net.ParseIP("1.2.1.255"))
b.Run("not found", func(b *testing.B) {
for i := 0; i < b.N; i++ {
tree.Match(ip)
}
})
}


@@ -1,185 +0,0 @@
package cidr
import (
"net"
"github.com/slackhq/nebula/iputil"
)
const startbit6 = uint64(1 << 63)
type Tree6 struct {
root4 *Node
root6 *Node
}
func NewTree6() *Tree6 {
tree := new(Tree6)
tree.root4 = &Node{}
tree.root6 = &Node{}
return tree
}
func (tree *Tree6) AddCIDR(cidr *net.IPNet, val interface{}) {
var node, next *Node
cidrIP, ipv4 := isIPV4(cidr.IP)
if ipv4 {
node = tree.root4
next = tree.root4
} else {
node = tree.root6
next = tree.root6
}
for i := 0; i < len(cidrIP); i += 4 {
ip := iputil.Ip2VpnIp(cidrIP[i : i+4])
mask := iputil.Ip2VpnIp(cidr.Mask[i : i+4])
bit := startbit
// Find our last ancestor in the tree
for bit&mask != 0 {
if ip&bit != 0 {
next = node.right
} else {
next = node.left
}
if next == nil {
break
}
bit = bit >> 1
node = next
}
// Build up the rest of the tree we don't already have
for bit&mask != 0 {
next = &Node{}
next.parent = node
if ip&bit != 0 {
node.right = next
} else {
node.left = next
}
bit >>= 1
node = next
}
}
// Final node marks our cidr, set the value
node.value = val
}
// Finds the most specific match
func (tree *Tree6) MostSpecificContains(ip net.IP) (value interface{}) {
var node *Node
wholeIP, ipv4 := isIPV4(ip)
if ipv4 {
node = tree.root4
} else {
node = tree.root6
}
for i := 0; i < len(wholeIP); i += 4 {
ip := iputil.Ip2VpnIp(wholeIP[i : i+4])
bit := startbit
for node != nil {
if node.value != nil {
value = node.value
}
if bit == 0 {
break
}
if ip&bit != 0 {
node = node.right
} else {
node = node.left
}
bit >>= 1
}
}
return value
}
func (tree *Tree6) MostSpecificContainsIpV4(ip iputil.VpnIp) (value interface{}) {
bit := startbit
node := tree.root4
for node != nil {
if node.value != nil {
value = node.value
}
if ip&bit != 0 {
node = node.right
} else {
node = node.left
}
bit >>= 1
}
return value
}
func (tree *Tree6) MostSpecificContainsIpV6(hi, lo uint64) (value interface{}) {
ip := hi
node := tree.root6
for i := 0; i < 2; i++ {
bit := startbit6
for node != nil {
if node.value != nil {
value = node.value
}
if bit == 0 {
break
}
if ip&bit != 0 {
node = node.right
} else {
node = node.left
}
bit >>= 1
}
ip = lo
}
return value
}
func isIPV4(ip net.IP) (net.IP, bool) {
if len(ip) == net.IPv4len {
return ip, true
}
if len(ip) == net.IPv6len && isZeros(ip[0:10]) && ip[10] == 0xff && ip[11] == 0xff {
return ip[12:16], true
}
return ip, false
}
func isZeros(p net.IP) bool {
for i := 0; i < len(p); i++ {
if p[i] != 0 {
return false
}
}
return true
}


@@ -1,81 +0,0 @@
package cidr
import (
"encoding/binary"
"net"
"testing"
"github.com/stretchr/testify/assert"
)
func TestCIDR6Tree_MostSpecificContains(t *testing.T) {
tree := NewTree6()
tree.AddCIDR(Parse("1.0.0.0/8"), "1")
tree.AddCIDR(Parse("2.1.0.0/16"), "2")
tree.AddCIDR(Parse("3.1.1.0/24"), "3")
tree.AddCIDR(Parse("4.1.1.1/24"), "4a")
tree.AddCIDR(Parse("4.1.1.1/30"), "4b")
tree.AddCIDR(Parse("4.1.1.1/32"), "4c")
tree.AddCIDR(Parse("254.0.0.0/4"), "5")
tree.AddCIDR(Parse("1:2:0:4:5:0:0:0/64"), "6a")
tree.AddCIDR(Parse("1:2:0:4:5:0:0:0/80"), "6b")
tree.AddCIDR(Parse("1:2:0:4:5:0:0:0/96"), "6c")
tests := []struct {
Result interface{}
IP string
}{
{"1", "1.0.0.0"},
{"1", "1.255.255.255"},
{"2", "2.1.0.0"},
{"2", "2.1.255.255"},
{"3", "3.1.1.0"},
{"3", "3.1.1.255"},
{"4a", "4.1.1.255"},
{"4b", "4.1.1.2"},
{"4c", "4.1.1.1"},
{"5", "240.0.0.0"},
{"5", "255.255.255.255"},
{"6a", "1:2:0:4:1:1:1:1"},
{"6b", "1:2:0:4:5:1:1:1"},
{"6c", "1:2:0:4:5:0:0:0"},
{nil, "239.0.0.0"},
{nil, "4.1.2.2"},
}
for _, tt := range tests {
assert.Equal(t, tt.Result, tree.MostSpecificContains(net.ParseIP(tt.IP)))
}
tree = NewTree6()
tree.AddCIDR(Parse("1.1.1.1/0"), "cool")
tree.AddCIDR(Parse("::/0"), "cool6")
assert.Equal(t, "cool", tree.MostSpecificContains(net.ParseIP("0.0.0.0")))
assert.Equal(t, "cool", tree.MostSpecificContains(net.ParseIP("255.255.255.255")))
assert.Equal(t, "cool6", tree.MostSpecificContains(net.ParseIP("::")))
assert.Equal(t, "cool6", tree.MostSpecificContains(net.ParseIP("1:2:3:4:5:6:7:8")))
}
func TestCIDR6Tree_MostSpecificContainsIpV6(t *testing.T) {
tree := NewTree6()
tree.AddCIDR(Parse("1:2:0:4:5:0:0:0/64"), "6a")
tree.AddCIDR(Parse("1:2:0:4:5:0:0:0/80"), "6b")
tree.AddCIDR(Parse("1:2:0:4:5:0:0:0/96"), "6c")
tests := []struct {
Result interface{}
IP string
}{
{"6a", "1:2:0:4:1:1:1:1"},
{"6b", "1:2:0:4:5:1:1:1"},
{"6c", "1:2:0:4:5:0:0:0"},
}
for _, tt := range tests {
ip := net.ParseIP(tt.IP)
hi := binary.BigEndian.Uint64(ip[:8])
lo := binary.BigEndian.Uint64(ip[8:])
assert.Equal(t, tt.Result, tree.MostSpecificContainsIpV6(hi, lo))
}
}


@@ -1,11 +1,13 @@
package main package main
import ( import (
"crypto/ecdsa"
"crypto/elliptic"
"crypto/rand" "crypto/rand"
"flag" "flag"
"fmt" "fmt"
"io" "io"
"io/ioutil" "math"
"net" "net"
"os" "os"
"strings" "strings"
@@ -26,6 +28,12 @@ type caFlags struct {
groups *string groups *string
ips *string ips *string
subnets *string subnets *string
argonMemory *uint
argonIterations *uint
argonParallelism *uint
encryption *bool
curve *string
} }
func newCaFlags() *caFlags { func newCaFlags() *caFlags {
@@ -39,10 +47,29 @@ func newCaFlags() *caFlags {
cf.groups = cf.set.String("groups", "", "Optional: comma separated list of groups. This will limit which groups subordinate certs can use") cf.groups = cf.set.String("groups", "", "Optional: comma separated list of groups. This will limit which groups subordinate certs can use")
cf.ips = cf.set.String("ips", "", "Optional: comma separated list of ipv4 address and network in CIDR notation. This will limit which ipv4 addresses and networks subordinate certs can use for ip addresses") cf.ips = cf.set.String("ips", "", "Optional: comma separated list of ipv4 address and network in CIDR notation. This will limit which ipv4 addresses and networks subordinate certs can use for ip addresses")
cf.subnets = cf.set.String("subnets", "", "Optional: comma separated list of ipv4 address and network in CIDR notation. This will limit which ipv4 addresses and networks subordinate certs can use in subnets") cf.subnets = cf.set.String("subnets", "", "Optional: comma separated list of ipv4 address and network in CIDR notation. This will limit which ipv4 addresses and networks subordinate certs can use in subnets")
cf.argonMemory = cf.set.Uint("argon-memory", 2*1024*1024, "Optional: Argon2 memory parameter (in KiB) used for encrypted private key passphrase")
cf.argonParallelism = cf.set.Uint("argon-parallelism", 4, "Optional: Argon2 parallelism parameter used for encrypted private key passphrase")
cf.argonIterations = cf.set.Uint("argon-iterations", 1, "Optional: Argon2 iterations parameter used for encrypted private key passphrase")
cf.encryption = cf.set.Bool("encrypt", false, "Optional: prompt for passphrase and write out-key in an encrypted format")
cf.curve = cf.set.String("curve", "25519", "EdDSA/ECDSA Curve (25519, P256)")
return &cf return &cf
} }
func ca(args []string, out io.Writer, errOut io.Writer) error { func parseArgonParameters(memory uint, parallelism uint, iterations uint) (*cert.Argon2Parameters, error) {
if memory <= 0 || memory > math.MaxUint32 {
return nil, newHelpErrorf("-argon-memory must be be greater than 0 and no more than %d KiB", uint32(math.MaxUint32))
}
if parallelism <= 0 || parallelism > math.MaxUint8 {
return nil, newHelpErrorf("-argon-parallelism must be be greater than 0 and no more than %d", math.MaxUint8)
}
if iterations <= 0 || iterations > math.MaxUint32 {
return nil, newHelpErrorf("-argon-iterations must be be greater than 0 and no more than %d", uint32(math.MaxUint32))
}
return cert.NewArgon2Parameters(uint32(memory), uint8(parallelism), uint32(iterations)), nil
}
func ca(args []string, out io.Writer, errOut io.Writer, pr PasswordReader) error {
cf := newCaFlags() cf := newCaFlags()
err := cf.set.Parse(args) err := cf.set.Parse(args)
if err != nil { if err != nil {
@@ -58,6 +85,12 @@ func ca(args []string, out io.Writer, errOut io.Writer) error {
if err := mustFlagString("out-crt", cf.outCertPath); err != nil { if err := mustFlagString("out-crt", cf.outCertPath); err != nil {
return err return err
} }
var kdfParams *cert.Argon2Parameters
if *cf.encryption {
if kdfParams, err = parseArgonParameters(*cf.argonMemory, *cf.argonParallelism, *cf.argonIterations); err != nil {
return err
}
}
if *cf.duration <= 0 { if *cf.duration <= 0 {
return &helpError{"-duration must be greater than 0"} return &helpError{"-duration must be greater than 0"}
@@ -109,10 +142,54 @@ func ca(args []string, out io.Writer, errOut io.Writer) error {
} }
} }
pub, rawPriv, err := ed25519.GenerateKey(rand.Reader) var passphrase []byte
if *cf.encryption {
for i := 0; i < 5; i++ {
out.Write([]byte("Enter passphrase: "))
passphrase, err = pr.ReadPassword()
if err == ErrNoTerminal {
return fmt.Errorf("out-key must be encrypted interactively")
} else if err != nil {
return fmt.Errorf("error reading passphrase: %s", err)
}
if len(passphrase) > 0 {
break
}
}
if len(passphrase) == 0 {
return fmt.Errorf("no passphrase specified, remove -encrypt flag to write out-key in plaintext")
}
}
var curve cert.Curve
var pub, rawPriv []byte
switch *cf.curve {
case "25519", "X25519", "Curve25519", "CURVE25519":
curve = cert.Curve_CURVE25519
pub, rawPriv, err = ed25519.GenerateKey(rand.Reader)
if err != nil { if err != nil {
return fmt.Errorf("error while generating ed25519 keys: %s", err) return fmt.Errorf("error while generating ed25519 keys: %s", err)
} }
case "P256":
var key *ecdsa.PrivateKey
curve = cert.Curve_P256
key, err = ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
if err != nil {
return fmt.Errorf("error while generating ecdsa keys: %s", err)
}
// ecdh.PrivateKey lets us get at the encoded bytes, even though
// we aren't using ECDH here.
eKey, err := key.ECDH()
if err != nil {
return fmt.Errorf("error while converting ecdsa key: %s", err)
}
rawPriv = eKey.Bytes()
pub = eKey.PublicKey().Bytes()
}
nc := cert.NebulaCertificate{ nc := cert.NebulaCertificate{
Details: cert.NebulaCertificateDetails{ Details: cert.NebulaCertificateDetails{
@@ -124,6 +201,7 @@ func ca(args []string, out io.Writer, errOut io.Writer) error {
NotAfter: time.Now().Add(*cf.duration), NotAfter: time.Now().Add(*cf.duration),
PublicKey: pub, PublicKey: pub,
IsCA: true, IsCA: true,
Curve: curve,
}, },
} }
@@ -135,22 +213,32 @@ func ca(args []string, out io.Writer, errOut io.Writer) error {
return fmt.Errorf("refusing to overwrite existing CA cert: %s", *cf.outCertPath) return fmt.Errorf("refusing to overwrite existing CA cert: %s", *cf.outCertPath)
} }
err = nc.Sign(rawPriv) err = nc.Sign(curve, rawPriv)
if err != nil { if err != nil {
return fmt.Errorf("error while signing: %s", err) return fmt.Errorf("error while signing: %s", err)
} }
err = ioutil.WriteFile(*cf.outKeyPath, cert.MarshalEd25519PrivateKey(rawPriv), 0600) var b []byte
if *cf.encryption {
b, err = cert.EncryptAndMarshalSigningPrivateKey(curve, rawPriv, passphrase, kdfParams)
if err != nil {
return fmt.Errorf("error while encrypting out-key: %s", err)
}
} else {
b = cert.MarshalSigningPrivateKey(curve, rawPriv)
}
err = os.WriteFile(*cf.outKeyPath, b, 0600)
if err != nil { if err != nil {
return fmt.Errorf("error while writing out-key: %s", err) return fmt.Errorf("error while writing out-key: %s", err)
} }
b, err := nc.MarshalToPEM() b, err = nc.MarshalToPEM()
if err != nil { if err != nil {
return fmt.Errorf("error while marshalling certificate: %s", err) return fmt.Errorf("error while marshalling certificate: %s", err)
} }
err = ioutil.WriteFile(*cf.outCertPath, b, 0600) err = os.WriteFile(*cf.outCertPath, b, 0600)
if err != nil { if err != nil {
return fmt.Errorf("error while writing out-crt: %s", err) return fmt.Errorf("error while writing out-crt: %s", err)
} }
@@ -161,7 +249,7 @@ func ca(args []string, out io.Writer, errOut io.Writer) error {
return fmt.Errorf("error while generating qr code: %s", err) return fmt.Errorf("error while generating qr code: %s", err)
} }
err = ioutil.WriteFile(*cf.outQRPath, b, 0600) err = os.WriteFile(*cf.outQRPath, b, 0600)
if err != nil { if err != nil {
return fmt.Errorf("error while writing out-qr: %s", err) return fmt.Errorf("error while writing out-qr: %s", err)
} }


@@ -5,8 +5,10 @@ package main
import ( import (
"bytes" "bytes"
"io/ioutil" "encoding/pem"
"errors"
"os" "os"
"strings"
"testing" "testing"
"time" "time"
@@ -26,8 +28,18 @@ func Test_caHelp(t *testing.T) {
assert.Equal( assert.Equal(
t, t,
"Usage of "+os.Args[0]+" ca <flags>: create a self signed certificate authority\n"+ "Usage of "+os.Args[0]+" ca <flags>: create a self signed certificate authority\n"+
" -argon-iterations uint\n"+
" \tOptional: Argon2 iterations parameter used for encrypted private key passphrase (default 1)\n"+
" -argon-memory uint\n"+
" \tOptional: Argon2 memory parameter (in KiB) used for encrypted private key passphrase (default 2097152)\n"+
" -argon-parallelism uint\n"+
" \tOptional: Argon2 parallelism parameter used for encrypted private key passphrase (default 4)\n"+
" -curve string\n"+
" \tEdDSA/ECDSA Curve (25519, P256) (default \"25519\")\n"+
" -duration duration\n"+ " -duration duration\n"+
" \tOptional: amount of time the certificate should be valid for. Valid time units are seconds: \"s\", minutes: \"m\", hours: \"h\" (default 8760h0m0s)\n"+ " \tOptional: amount of time the certificate should be valid for. Valid time units are seconds: \"s\", minutes: \"m\", hours: \"h\" (default 8760h0m0s)\n"+
" -encrypt\n"+
" \tOptional: prompt for passphrase and write out-key in an encrypted format\n"+
" -groups string\n"+ " -groups string\n"+
" \tOptional: comma separated list of groups. This will limit which groups subordinate certs can use\n"+ " \tOptional: comma separated list of groups. This will limit which groups subordinate certs can use\n"+
" -ips string\n"+ " -ips string\n"+
@@ -50,18 +62,38 @@ func Test_ca(t *testing.T) {
ob := &bytes.Buffer{} ob := &bytes.Buffer{}
eb := &bytes.Buffer{} eb := &bytes.Buffer{}
nopw := &StubPasswordReader{
password: []byte(""),
err: nil,
}
errpw := &StubPasswordReader{
password: []byte(""),
err: errors.New("stub error"),
}
passphrase := []byte("DO NOT USE THIS KEY")
testpw := &StubPasswordReader{
password: passphrase,
err: nil,
}
pwPromptOb := "Enter passphrase: "
// required args // required args
assertHelpError(t, ca([]string{"-out-key", "nope", "-out-crt", "nope", "duration", "100m"}, ob, eb), "-name is required") assertHelpError(t, ca(
[]string{"-out-key", "nope", "-out-crt", "nope", "duration", "100m"}, ob, eb, nopw,
), "-name is required")
assert.Equal(t, "", ob.String()) assert.Equal(t, "", ob.String())
assert.Equal(t, "", eb.String()) assert.Equal(t, "", eb.String())
// ipv4 only ips // ipv4 only ips
assertHelpError(t, ca([]string{"-name", "ipv6", "-ips", "100::100/100"}, ob, eb), "invalid ip definition: can only be ipv4, have 100::100/100") assertHelpError(t, ca([]string{"-name", "ipv6", "-ips", "100::100/100"}, ob, eb, nopw), "invalid ip definition: can only be ipv4, have 100::100/100")
assert.Equal(t, "", ob.String()) assert.Equal(t, "", ob.String())
assert.Equal(t, "", eb.String()) assert.Equal(t, "", eb.String())
// ipv4 only subnets // ipv4 only subnets
assertHelpError(t, ca([]string{"-name", "ipv6", "-subnets", "100::100/100"}, ob, eb), "invalid subnet definition: can only be ipv4, have 100::100/100") assertHelpError(t, ca([]string{"-name", "ipv6", "-subnets", "100::100/100"}, ob, eb, nopw), "invalid subnet definition: can only be ipv4, have 100::100/100")
assert.Equal(t, "", ob.String()) assert.Equal(t, "", ob.String())
assert.Equal(t, "", eb.String()) assert.Equal(t, "", eb.String())
@@ -69,12 +101,12 @@ func Test_ca(t *testing.T) {
ob.Reset() ob.Reset()
eb.Reset() eb.Reset()
args := []string{"-name", "test", "-duration", "100m", "-out-crt", "/do/not/write/pleasecrt", "-out-key", "/do/not/write/pleasekey"} args := []string{"-name", "test", "-duration", "100m", "-out-crt", "/do/not/write/pleasecrt", "-out-key", "/do/not/write/pleasekey"}
assert.EqualError(t, ca(args, ob, eb), "error while writing out-key: open /do/not/write/pleasekey: "+NoSuchDirError) assert.EqualError(t, ca(args, ob, eb, nopw), "error while writing out-key: open /do/not/write/pleasekey: "+NoSuchDirError)
assert.Equal(t, "", ob.String()) assert.Equal(t, "", ob.String())
assert.Equal(t, "", eb.String()) assert.Equal(t, "", eb.String())
// create temp key file // create temp key file
keyF, err := ioutil.TempFile("", "test.key") keyF, err := os.CreateTemp("", "test.key")
assert.Nil(t, err) assert.Nil(t, err)
os.Remove(keyF.Name()) os.Remove(keyF.Name())
@@ -82,12 +114,12 @@ func Test_ca(t *testing.T) {
ob.Reset() ob.Reset()
eb.Reset() eb.Reset()
args = []string{"-name", "test", "-duration", "100m", "-out-crt", "/do/not/write/pleasecrt", "-out-key", keyF.Name()} args = []string{"-name", "test", "-duration", "100m", "-out-crt", "/do/not/write/pleasecrt", "-out-key", keyF.Name()}
assert.EqualError(t, ca(args, ob, eb), "error while writing out-crt: open /do/not/write/pleasecrt: "+NoSuchDirError) assert.EqualError(t, ca(args, ob, eb, nopw), "error while writing out-crt: open /do/not/write/pleasecrt: "+NoSuchDirError)
assert.Equal(t, "", ob.String()) assert.Equal(t, "", ob.String())
assert.Equal(t, "", eb.String()) assert.Equal(t, "", eb.String())
// create temp cert file // create temp cert file
crtF, err := ioutil.TempFile("", "test.crt") crtF, err := os.CreateTemp("", "test.crt")
assert.Nil(t, err) assert.Nil(t, err)
os.Remove(crtF.Name()) os.Remove(crtF.Name())
os.Remove(keyF.Name()) os.Remove(keyF.Name())
@@ -96,18 +128,18 @@ func Test_ca(t *testing.T) {
ob.Reset() ob.Reset()
eb.Reset() eb.Reset()
args = []string{"-name", "test", "-duration", "100m", "-groups", "1,, 2 , ,,,3,4,5", "-out-crt", crtF.Name(), "-out-key", keyF.Name()} args = []string{"-name", "test", "-duration", "100m", "-groups", "1,, 2 , ,,,3,4,5", "-out-crt", crtF.Name(), "-out-key", keyF.Name()}
assert.Nil(t, ca(args, ob, eb)) assert.Nil(t, ca(args, ob, eb, nopw))
assert.Equal(t, "", ob.String()) assert.Equal(t, "", ob.String())
assert.Equal(t, "", eb.String()) assert.Equal(t, "", eb.String())
// read cert and key files // read cert and key files
rb, _ := ioutil.ReadFile(keyF.Name()) rb, _ := os.ReadFile(keyF.Name())
lKey, b, err := cert.UnmarshalEd25519PrivateKey(rb) lKey, b, err := cert.UnmarshalEd25519PrivateKey(rb)
assert.Len(t, b, 0) assert.Len(t, b, 0)
assert.Nil(t, err) assert.Nil(t, err)
assert.Len(t, lKey, 64) assert.Len(t, lKey, 64)
rb, _ = ioutil.ReadFile(crtF.Name()) rb, _ = os.ReadFile(crtF.Name())
lCrt, b, err := cert.UnmarshalNebulaCertificateFromPEM(rb) lCrt, b, err := cert.UnmarshalNebulaCertificateFromPEM(rb)
assert.Len(t, b, 0) assert.Len(t, b, 0)
assert.Nil(t, err) assert.Nil(t, err)
@@ -122,19 +154,67 @@ func Test_ca(t *testing.T) {
assert.Equal(t, "", lCrt.Details.Issuer) assert.Equal(t, "", lCrt.Details.Issuer)
assert.True(t, lCrt.CheckSignature(lCrt.Details.PublicKey)) assert.True(t, lCrt.CheckSignature(lCrt.Details.PublicKey))
// test encrypted key
os.Remove(keyF.Name())
os.Remove(crtF.Name())
ob.Reset()
eb.Reset()
args = []string{"-encrypt", "-name", "test", "-duration", "100m", "-groups", "1,2,3,4,5", "-out-crt", crtF.Name(), "-out-key", keyF.Name()}
assert.Nil(t, ca(args, ob, eb, testpw))
assert.Equal(t, pwPromptOb, ob.String())
assert.Equal(t, "", eb.String())
// read encrypted key file and verify default params
rb, _ = os.ReadFile(keyF.Name())
k, _ := pem.Decode(rb)
ned, err := cert.UnmarshalNebulaEncryptedData(k.Bytes)
assert.Nil(t, err)
// we won't know salt in advance, so just check start of string
assert.Equal(t, uint32(2*1024*1024), ned.EncryptionMetadata.Argon2Parameters.Memory)
assert.Equal(t, uint8(4), ned.EncryptionMetadata.Argon2Parameters.Parallelism)
assert.Equal(t, uint32(1), ned.EncryptionMetadata.Argon2Parameters.Iterations)
// verify the key is valid and decrypt-able
var curve cert.Curve
curve, lKey, b, err = cert.DecryptAndUnmarshalSigningPrivateKey(passphrase, rb)
assert.Equal(t, cert.Curve_CURVE25519, curve)
assert.Nil(t, err)
assert.Len(t, b, 0)
assert.Len(t, lKey, 64)
// test when reading password results in an error
os.Remove(keyF.Name())
os.Remove(crtF.Name())
ob.Reset()
eb.Reset()
args = []string{"-encrypt", "-name", "test", "-duration", "100m", "-groups", "1,2,3,4,5", "-out-crt", crtF.Name(), "-out-key", keyF.Name()}
assert.Error(t, ca(args, ob, eb, errpw))
assert.Equal(t, pwPromptOb, ob.String())
assert.Equal(t, "", eb.String())
// test when user fails to enter a password
os.Remove(keyF.Name())
os.Remove(crtF.Name())
ob.Reset()
eb.Reset()
args = []string{"-encrypt", "-name", "test", "-duration", "100m", "-groups", "1,2,3,4,5", "-out-crt", crtF.Name(), "-out-key", keyF.Name()}
assert.EqualError(t, ca(args, ob, eb, nopw), "no passphrase specified, remove -encrypt flag to write out-key in plaintext")
assert.Equal(t, strings.Repeat(pwPromptOb, 5), ob.String()) // prompts 5 times before giving up
assert.Equal(t, "", eb.String())
// create valid cert/key for overwrite tests // create valid cert/key for overwrite tests
os.Remove(keyF.Name()) os.Remove(keyF.Name())
os.Remove(crtF.Name()) os.Remove(crtF.Name())
ob.Reset() ob.Reset()
eb.Reset() eb.Reset()
args = []string{"-name", "test", "-duration", "100m", "-groups", "1,, 2 , ,,,3,4,5", "-out-crt", crtF.Name(), "-out-key", keyF.Name()} args = []string{"-name", "test", "-duration", "100m", "-groups", "1,, 2 , ,,,3,4,5", "-out-crt", crtF.Name(), "-out-key", keyF.Name()}
assert.Nil(t, ca(args, ob, eb)) assert.Nil(t, ca(args, ob, eb, nopw))
// test that we won't overwrite existing certificate file // test that we won't overwrite existing certificate file
ob.Reset() ob.Reset()
eb.Reset() eb.Reset()
args = []string{"-name", "test", "-duration", "100m", "-groups", "1,, 2 , ,,,3,4,5", "-out-crt", crtF.Name(), "-out-key", keyF.Name()} args = []string{"-name", "test", "-duration", "100m", "-groups", "1,, 2 , ,,,3,4,5", "-out-crt", crtF.Name(), "-out-key", keyF.Name()}
assert.EqualError(t, ca(args, ob, eb), "refusing to overwrite existing CA key: "+keyF.Name()) assert.EqualError(t, ca(args, ob, eb, nopw), "refusing to overwrite existing CA key: "+keyF.Name())
assert.Equal(t, "", ob.String()) assert.Equal(t, "", ob.String())
assert.Equal(t, "", eb.String()) assert.Equal(t, "", eb.String())
@@ -143,7 +223,7 @@ func Test_ca(t *testing.T) {
ob.Reset() ob.Reset()
eb.Reset() eb.Reset()
args = []string{"-name", "test", "-duration", "100m", "-groups", "1,, 2 , ,,,3,4,5", "-out-crt", crtF.Name(), "-out-key", keyF.Name()} args = []string{"-name", "test", "-duration", "100m", "-groups", "1,, 2 , ,,,3,4,5", "-out-crt", crtF.Name(), "-out-key", keyF.Name()}
assert.EqualError(t, ca(args, ob, eb), "refusing to overwrite existing CA cert: "+crtF.Name()) assert.EqualError(t, ca(args, ob, eb, nopw), "refusing to overwrite existing CA cert: "+crtF.Name())
assert.Equal(t, "", ob.String()) assert.Equal(t, "", ob.String())
assert.Equal(t, "", eb.String()) assert.Equal(t, "", eb.String())
os.Remove(keyF.Name()) os.Remove(keyF.Name())


@@ -4,7 +4,6 @@ import (
"flag" "flag"
"fmt" "fmt"
"io" "io"
"io/ioutil"
"os" "os"
"github.com/slackhq/nebula/cert" "github.com/slackhq/nebula/cert"
@@ -14,6 +13,8 @@ type keygenFlags struct {
set *flag.FlagSet set *flag.FlagSet
outKeyPath *string outKeyPath *string
outPubPath *string outPubPath *string
curve *string
} }
func newKeygenFlags() *keygenFlags { func newKeygenFlags() *keygenFlags {
@@ -21,6 +22,7 @@ func newKeygenFlags() *keygenFlags {
cf.set.Usage = func() {} cf.set.Usage = func() {}
cf.outPubPath = cf.set.String("out-pub", "", "Required: path to write the public key to") cf.outPubPath = cf.set.String("out-pub", "", "Required: path to write the public key to")
cf.outKeyPath = cf.set.String("out-key", "", "Required: path to write the private key to") cf.outKeyPath = cf.set.String("out-key", "", "Required: path to write the private key to")
cf.curve = cf.set.String("curve", "25519", "ECDH Curve (25519, P256)")
return &cf return &cf
} }
@@ -38,14 +40,25 @@ func keygen(args []string, out io.Writer, errOut io.Writer) error {
return err return err
} }
pub, rawPriv := x25519Keypair() var pub, rawPriv []byte
var curve cert.Curve
switch *cf.curve {
case "25519", "X25519", "Curve25519", "CURVE25519":
pub, rawPriv = x25519Keypair()
curve = cert.Curve_CURVE25519
case "P256":
pub, rawPriv = p256Keypair()
curve = cert.Curve_P256
default:
return fmt.Errorf("invalid curve: %s", *cf.curve)
}
err = ioutil.WriteFile(*cf.outKeyPath, cert.MarshalX25519PrivateKey(rawPriv), 0600) err = os.WriteFile(*cf.outKeyPath, cert.MarshalPrivateKey(curve, rawPriv), 0600)
if err != nil { if err != nil {
return fmt.Errorf("error while writing out-key: %s", err) return fmt.Errorf("error while writing out-key: %s", err)
} }
err = ioutil.WriteFile(*cf.outPubPath, cert.MarshalX25519PublicKey(pub), 0600) err = os.WriteFile(*cf.outPubPath, cert.MarshalPublicKey(curve, pub), 0600)
if err != nil { if err != nil {
return fmt.Errorf("error while writing out-pub: %s", err) return fmt.Errorf("error while writing out-pub: %s", err)
} }
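Keys written by the new keygen path record their curve in the PEM banner, so a reader does not need to be told which -curve flag was used. A small sketch, assuming pemBytes is the contents of the out-key file written above:
key, _, curve, err := cert.UnmarshalPrivateKey(pemBytes)
if err != nil {
	return err
}
// curve reports Curve_CURVE25519 (32-byte key) or Curve_P256; key holds the raw private key bytes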


@@ -2,7 +2,6 @@ package main
import ( import (
"bytes" "bytes"
"io/ioutil"
"os" "os"
"testing" "testing"
@@ -22,6 +21,8 @@ func Test_keygenHelp(t *testing.T) {
assert.Equal( assert.Equal(
t, t,
"Usage of "+os.Args[0]+" keygen <flags>: create a public/private key pair. the public key can be passed to `nebula-cert sign`\n"+ "Usage of "+os.Args[0]+" keygen <flags>: create a public/private key pair. the public key can be passed to `nebula-cert sign`\n"+
" -curve string\n"+
" \tECDH Curve (25519, P256) (default \"25519\")\n"+
" -out-key string\n"+ " -out-key string\n"+
" \tRequired: path to write the private key to\n"+ " \tRequired: path to write the private key to\n"+
" -out-pub string\n"+ " -out-pub string\n"+
@@ -52,7 +53,7 @@ func Test_keygen(t *testing.T) {
assert.Equal(t, "", eb.String()) assert.Equal(t, "", eb.String())
// create temp key file // create temp key file
keyF, err := ioutil.TempFile("", "test.key") keyF, err := os.CreateTemp("", "test.key")
assert.Nil(t, err) assert.Nil(t, err)
defer os.Remove(keyF.Name()) defer os.Remove(keyF.Name())
@@ -65,7 +66,7 @@ func Test_keygen(t *testing.T) {
assert.Equal(t, "", eb.String()) assert.Equal(t, "", eb.String())
// create temp pub file // create temp pub file
pubF, err := ioutil.TempFile("", "test.pub") pubF, err := os.CreateTemp("", "test.pub")
assert.Nil(t, err) assert.Nil(t, err)
defer os.Remove(pubF.Name()) defer os.Remove(pubF.Name())
@@ -78,13 +79,13 @@ func Test_keygen(t *testing.T) {
assert.Equal(t, "", eb.String()) assert.Equal(t, "", eb.String())
// read cert and key files // read cert and key files
rb, _ := ioutil.ReadFile(keyF.Name()) rb, _ := os.ReadFile(keyF.Name())
lKey, b, err := cert.UnmarshalX25519PrivateKey(rb) lKey, b, err := cert.UnmarshalX25519PrivateKey(rb)
assert.Len(t, b, 0) assert.Len(t, b, 0)
assert.Nil(t, err) assert.Nil(t, err)
assert.Len(t, lKey, 32) assert.Len(t, lKey, 32)
rb, _ = ioutil.ReadFile(pubF.Name()) rb, _ = os.ReadFile(pubF.Name())
lPub, b, err := cert.UnmarshalX25519PublicKey(rb) lPub, b, err := cert.UnmarshalX25519PublicKey(rb)
assert.Len(t, b, 0) assert.Len(t, b, 0)
assert.Nil(t, err) assert.Nil(t, err)

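The test changes above (and most of the remaining hunks) are the mechanical io/ioutil migration: the package has been deprecated since Go 1.16 and each helper used here has a drop-in replacement in os (ReadFile, WriteFile, CreateTemp, MkdirTemp). A quick self-contained reference, using a hypothetical temp directory rather than the test's fixtures:

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// ioutil.TempDir -> os.MkdirTemp
	dir, err := os.MkdirTemp("", "ioutil-migration")
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(dir)

	path := dir + "/example.txt"
	// ioutil.WriteFile -> os.WriteFile
	if err := os.WriteFile(path, []byte("hi"), 0600); err != nil {
		panic(err)
	}
	// ioutil.ReadFile -> os.ReadFile
	b, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b))
}
```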

@@ -62,11 +62,11 @@ func main() {
switch args[0] { switch args[0] {
case "ca": case "ca":
err = ca(args[1:], os.Stdout, os.Stderr) err = ca(args[1:], os.Stdout, os.Stderr, StdinPasswordReader{})
case "keygen": case "keygen":
err = keygen(args[1:], os.Stdout, os.Stderr) err = keygen(args[1:], os.Stdout, os.Stderr)
case "sign": case "sign":
err = signCert(args[1:], os.Stdout, os.Stderr) err = signCert(args[1:], os.Stdout, os.Stderr, StdinPasswordReader{})
case "print": case "print":
err = printCert(args[1:], os.Stdout, os.Stderr) err = printCert(args[1:], os.Stdout, os.Stderr)
case "verify": case "verify":
@@ -127,6 +127,8 @@ func help(err string, out io.Writer) {
fmt.Fprintln(out, " "+signSummary()) fmt.Fprintln(out, " "+signSummary())
fmt.Fprintln(out, " "+printSummary()) fmt.Fprintln(out, " "+printSummary())
fmt.Fprintln(out, " "+verifySummary()) fmt.Fprintln(out, " "+verifySummary())
fmt.Fprintln(out, "")
fmt.Fprintf(out, " To see usage for a given mode, use %s <mode> -h\n", os.Args[0])
} }
func mustFlagString(name string, val *string) error { func mustFlagString(name string, val *string) error {


@@ -22,7 +22,9 @@ func Test_help(t *testing.T) {
" " + keygenSummary() + "\n" + " " + keygenSummary() + "\n" +
" " + signSummary() + "\n" + " " + signSummary() + "\n" +
" " + printSummary() + "\n" + " " + printSummary() + "\n" +
" " + verifySummary() + "\n" " " + verifySummary() + "\n" +
"\n" +
" To see usage for a given mode, use " + os.Args[0] + " <mode> -h\n"
ob := &bytes.Buffer{} ob := &bytes.Buffer{}


@@ -0,0 +1,28 @@
package main
import (
"errors"
"fmt"
"os"
"golang.org/x/term"
)
var ErrNoTerminal = errors.New("cannot read password from nonexistent terminal")
type PasswordReader interface {
ReadPassword() ([]byte, error)
}
type StdinPasswordReader struct{}
func (pr StdinPasswordReader) ReadPassword() ([]byte, error) {
if !term.IsTerminal(int(os.Stdin.Fd())) {
return nil, ErrNoTerminal
}
password, err := term.ReadPassword(int(os.Stdin.Fd()))
fmt.Println()
return password, err
}


@@ -0,0 +1,10 @@
package main
type StubPasswordReader struct {
password []byte
err error
}
func (pr *StubPasswordReader) ReadPassword() ([]byte, error) {
return pr.password, pr.err
}

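The two new files above introduce a small PasswordReader interface so the terminal prompt can be stubbed out in tests. A minimal, self-contained sketch of the same pattern (names lower-cased here to make clear it is an illustration, not the exact nebula-cert types):

```go
package main

import (
	"errors"
	"fmt"
	"os"

	"golang.org/x/term"
)

var errNoTerminal = errors.New("cannot read password from nonexistent terminal")

type passwordReader interface {
	ReadPassword() ([]byte, error)
}

// stdinPasswordReader is the production implementation: refuse to prompt when
// stdin is not a terminal (for example, when piped).
type stdinPasswordReader struct{}

func (stdinPasswordReader) ReadPassword() ([]byte, error) {
	fd := int(os.Stdin.Fd())
	if !term.IsTerminal(fd) {
		return nil, errNoTerminal
	}
	return term.ReadPassword(fd)
}

// stubPasswordReader is the test implementation: return a canned value.
type stubPasswordReader struct{ password []byte }

func (s stubPasswordReader) ReadPassword() ([]byte, error) { return s.password, nil }

func promptPassphrase(pr passwordReader) ([]byte, error) {
	fmt.Print("Enter passphrase: ")
	return pr.ReadPassword()
}

func main() {
	// Tests swap in the stub so no real terminal is required.
	p, err := promptPassphrase(stubPasswordReader{password: []byte("example")})
	fmt.Println(len(p), err)
}
```

Production code passes the stdin-backed reader, tests pass the stub, which is exactly how signCert gains its new PasswordReader parameter in the later hunks.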

@@ -5,7 +5,6 @@ import (
"flag" "flag"
"fmt" "fmt"
"io" "io"
"io/ioutil"
"os" "os"
"strings" "strings"
@@ -41,7 +40,7 @@ func printCert(args []string, out io.Writer, errOut io.Writer) error {
return err return err
} }
rawCert, err := ioutil.ReadFile(*pf.path) rawCert, err := os.ReadFile(*pf.path)
if err != nil { if err != nil {
return fmt.Errorf("unable to read cert; %s", err) return fmt.Errorf("unable to read cert; %s", err)
} }
@@ -87,7 +86,7 @@ func printCert(args []string, out io.Writer, errOut io.Writer) error {
return fmt.Errorf("error while generating qr code: %s", err) return fmt.Errorf("error while generating qr code: %s", err)
} }
err = ioutil.WriteFile(*pf.outQRPath, b, 0600) err = os.WriteFile(*pf.outQRPath, b, 0600)
if err != nil { if err != nil {
return fmt.Errorf("error while writing out-qr: %s", err) return fmt.Errorf("error while writing out-qr: %s", err)
} }


@@ -2,7 +2,6 @@ package main
import ( import (
"bytes" "bytes"
"io/ioutil"
"os" "os"
"testing" "testing"
"time" "time"
@@ -54,7 +53,7 @@ func Test_printCert(t *testing.T) {
// invalid cert at path // invalid cert at path
ob.Reset() ob.Reset()
eb.Reset() eb.Reset()
tf, err := ioutil.TempFile("", "print-cert") tf, err := os.CreateTemp("", "print-cert")
assert.Nil(t, err) assert.Nil(t, err)
defer os.Remove(tf.Name()) defer os.Remove(tf.Name())
@@ -87,7 +86,7 @@ func Test_printCert(t *testing.T) {
assert.Nil(t, err) assert.Nil(t, err)
assert.Equal( assert.Equal(
t, t,
"NebulaCertificate {\n\tDetails {\n\t\tName: test\n\t\tIps: []\n\t\tSubnets: []\n\t\tGroups: [\n\t\t\t\"hi\"\n\t\t]\n\t\tNot before: 0001-01-01 00:00:00 +0000 UTC\n\t\tNot After: 0001-01-01 00:00:00 +0000 UTC\n\t\tIs CA: false\n\t\tIssuer: \n\t\tPublic key: 0102030405060708090001020304050607080900010203040506070809000102\n\t}\n\tFingerprint: cc3492c0e9c48f17547f5987ea807462ebb3451e622590a10bb3763c344c82bd\n\tSignature: 0102030405060708090001020304050607080900010203040506070809000102\n}\nNebulaCertificate {\n\tDetails {\n\t\tName: test\n\t\tIps: []\n\t\tSubnets: []\n\t\tGroups: [\n\t\t\t\"hi\"\n\t\t]\n\t\tNot before: 0001-01-01 00:00:00 +0000 UTC\n\t\tNot After: 0001-01-01 00:00:00 +0000 UTC\n\t\tIs CA: false\n\t\tIssuer: \n\t\tPublic key: 0102030405060708090001020304050607080900010203040506070809000102\n\t}\n\tFingerprint: cc3492c0e9c48f17547f5987ea807462ebb3451e622590a10bb3763c344c82bd\n\tSignature: 0102030405060708090001020304050607080900010203040506070809000102\n}\nNebulaCertificate {\n\tDetails {\n\t\tName: test\n\t\tIps: []\n\t\tSubnets: []\n\t\tGroups: [\n\t\t\t\"hi\"\n\t\t]\n\t\tNot before: 0001-01-01 00:00:00 +0000 UTC\n\t\tNot After: 0001-01-01 00:00:00 +0000 UTC\n\t\tIs CA: false\n\t\tIssuer: \n\t\tPublic key: 0102030405060708090001020304050607080900010203040506070809000102\n\t}\n\tFingerprint: cc3492c0e9c48f17547f5987ea807462ebb3451e622590a10bb3763c344c82bd\n\tSignature: 0102030405060708090001020304050607080900010203040506070809000102\n}\n", "NebulaCertificate {\n\tDetails {\n\t\tName: test\n\t\tIps: []\n\t\tSubnets: []\n\t\tGroups: [\n\t\t\t\"hi\"\n\t\t]\n\t\tNot before: 0001-01-01 00:00:00 +0000 UTC\n\t\tNot After: 0001-01-01 00:00:00 +0000 UTC\n\t\tIs CA: false\n\t\tIssuer: \n\t\tPublic key: 0102030405060708090001020304050607080900010203040506070809000102\n\t\tCurve: CURVE25519\n\t}\n\tFingerprint: cc3492c0e9c48f17547f5987ea807462ebb3451e622590a10bb3763c344c82bd\n\tSignature: 0102030405060708090001020304050607080900010203040506070809000102\n}\nNebulaCertificate {\n\tDetails {\n\t\tName: test\n\t\tIps: []\n\t\tSubnets: []\n\t\tGroups: [\n\t\t\t\"hi\"\n\t\t]\n\t\tNot before: 0001-01-01 00:00:00 +0000 UTC\n\t\tNot After: 0001-01-01 00:00:00 +0000 UTC\n\t\tIs CA: false\n\t\tIssuer: \n\t\tPublic key: 0102030405060708090001020304050607080900010203040506070809000102\n\t\tCurve: CURVE25519\n\t}\n\tFingerprint: cc3492c0e9c48f17547f5987ea807462ebb3451e622590a10bb3763c344c82bd\n\tSignature: 0102030405060708090001020304050607080900010203040506070809000102\n}\nNebulaCertificate {\n\tDetails {\n\t\tName: test\n\t\tIps: []\n\t\tSubnets: []\n\t\tGroups: [\n\t\t\t\"hi\"\n\t\t]\n\t\tNot before: 0001-01-01 00:00:00 +0000 UTC\n\t\tNot After: 0001-01-01 00:00:00 +0000 UTC\n\t\tIs CA: false\n\t\tIssuer: \n\t\tPublic key: 0102030405060708090001020304050607080900010203040506070809000102\n\t\tCurve: CURVE25519\n\t}\n\tFingerprint: cc3492c0e9c48f17547f5987ea807462ebb3451e622590a10bb3763c344c82bd\n\tSignature: 0102030405060708090001020304050607080900010203040506070809000102\n}\n",
ob.String(), ob.String(),
) )
assert.Equal(t, "", eb.String()) assert.Equal(t, "", eb.String())
@@ -115,7 +114,7 @@ func Test_printCert(t *testing.T) {
assert.Nil(t, err) assert.Nil(t, err)
assert.Equal( assert.Equal(
t, t,
"{\"details\":{\"groups\":[\"hi\"],\"ips\":[],\"isCa\":false,\"issuer\":\"\",\"name\":\"test\",\"notAfter\":\"0001-01-01T00:00:00Z\",\"notBefore\":\"0001-01-01T00:00:00Z\",\"publicKey\":\"0102030405060708090001020304050607080900010203040506070809000102\",\"subnets\":[]},\"fingerprint\":\"cc3492c0e9c48f17547f5987ea807462ebb3451e622590a10bb3763c344c82bd\",\"signature\":\"0102030405060708090001020304050607080900010203040506070809000102\"}\n{\"details\":{\"groups\":[\"hi\"],\"ips\":[],\"isCa\":false,\"issuer\":\"\",\"name\":\"test\",\"notAfter\":\"0001-01-01T00:00:00Z\",\"notBefore\":\"0001-01-01T00:00:00Z\",\"publicKey\":\"0102030405060708090001020304050607080900010203040506070809000102\",\"subnets\":[]},\"fingerprint\":\"cc3492c0e9c48f17547f5987ea807462ebb3451e622590a10bb3763c344c82bd\",\"signature\":\"0102030405060708090001020304050607080900010203040506070809000102\"}\n{\"details\":{\"groups\":[\"hi\"],\"ips\":[],\"isCa\":false,\"issuer\":\"\",\"name\":\"test\",\"notAfter\":\"0001-01-01T00:00:00Z\",\"notBefore\":\"0001-01-01T00:00:00Z\",\"publicKey\":\"0102030405060708090001020304050607080900010203040506070809000102\",\"subnets\":[]},\"fingerprint\":\"cc3492c0e9c48f17547f5987ea807462ebb3451e622590a10bb3763c344c82bd\",\"signature\":\"0102030405060708090001020304050607080900010203040506070809000102\"}\n", "{\"details\":{\"curve\":\"CURVE25519\",\"groups\":[\"hi\"],\"ips\":[],\"isCa\":false,\"issuer\":\"\",\"name\":\"test\",\"notAfter\":\"0001-01-01T00:00:00Z\",\"notBefore\":\"0001-01-01T00:00:00Z\",\"publicKey\":\"0102030405060708090001020304050607080900010203040506070809000102\",\"subnets\":[]},\"fingerprint\":\"cc3492c0e9c48f17547f5987ea807462ebb3451e622590a10bb3763c344c82bd\",\"signature\":\"0102030405060708090001020304050607080900010203040506070809000102\"}\n{\"details\":{\"curve\":\"CURVE25519\",\"groups\":[\"hi\"],\"ips\":[],\"isCa\":false,\"issuer\":\"\",\"name\":\"test\",\"notAfter\":\"0001-01-01T00:00:00Z\",\"notBefore\":\"0001-01-01T00:00:00Z\",\"publicKey\":\"0102030405060708090001020304050607080900010203040506070809000102\",\"subnets\":[]},\"fingerprint\":\"cc3492c0e9c48f17547f5987ea807462ebb3451e622590a10bb3763c344c82bd\",\"signature\":\"0102030405060708090001020304050607080900010203040506070809000102\"}\n{\"details\":{\"curve\":\"CURVE25519\",\"groups\":[\"hi\"],\"ips\":[],\"isCa\":false,\"issuer\":\"\",\"name\":\"test\",\"notAfter\":\"0001-01-01T00:00:00Z\",\"notBefore\":\"0001-01-01T00:00:00Z\",\"publicKey\":\"0102030405060708090001020304050607080900010203040506070809000102\",\"subnets\":[]},\"fingerprint\":\"cc3492c0e9c48f17547f5987ea807462ebb3451e622590a10bb3763c344c82bd\",\"signature\":\"0102030405060708090001020304050607080900010203040506070809000102\"}\n",
ob.String(), ob.String(),
) )
assert.Equal(t, "", eb.String()) assert.Equal(t, "", eb.String())


@@ -1,11 +1,11 @@
package main package main
import ( import (
"crypto/ecdh"
"crypto/rand" "crypto/rand"
"flag" "flag"
"fmt" "fmt"
"io" "io"
"io/ioutil"
"net" "net"
"os" "os"
"strings" "strings"
@@ -49,7 +49,7 @@ func newSignFlags() *signFlags {
} }
func signCert(args []string, out io.Writer, errOut io.Writer) error { func signCert(args []string, out io.Writer, errOut io.Writer, pr PasswordReader) error {
sf := newSignFlags() sf := newSignFlags()
err := sf.set.Parse(args) err := sf.set.Parse(args)
if err != nil { if err != nil {
@@ -72,17 +72,46 @@ func signCert(args []string, out io.Writer, errOut io.Writer) error {
return newHelpErrorf("cannot set both -in-pub and -out-key") return newHelpErrorf("cannot set both -in-pub and -out-key")
} }
rawCAKey, err := ioutil.ReadFile(*sf.caKeyPath) rawCAKey, err := os.ReadFile(*sf.caKeyPath)
if err != nil { if err != nil {
return fmt.Errorf("error while reading ca-key: %s", err) return fmt.Errorf("error while reading ca-key: %s", err)
} }
caKey, _, err := cert.UnmarshalEd25519PrivateKey(rawCAKey) var curve cert.Curve
var caKey []byte
// naively attempt to decode the private key as though it is not encrypted
caKey, _, curve, err = cert.UnmarshalSigningPrivateKey(rawCAKey)
if err == cert.ErrPrivateKeyEncrypted {
// ask for a passphrase until we get one
var passphrase []byte
for i := 0; i < 5; i++ {
out.Write([]byte("Enter passphrase: "))
passphrase, err = pr.ReadPassword()
if err == ErrNoTerminal {
return fmt.Errorf("ca-key is encrypted and must be decrypted interactively")
} else if err != nil {
return fmt.Errorf("error reading password: %s", err)
}
if len(passphrase) > 0 {
break
}
}
if len(passphrase) == 0 {
return fmt.Errorf("cannot open encrypted ca-key without passphrase")
}
curve, caKey, _, err = cert.DecryptAndUnmarshalSigningPrivateKey(passphrase, rawCAKey)
if err != nil { if err != nil {
return fmt.Errorf("error while parsing encrypted ca-key: %s", err)
}
} else if err != nil {
return fmt.Errorf("error while parsing ca-key: %s", err) return fmt.Errorf("error while parsing ca-key: %s", err)
} }
rawCACert, err := ioutil.ReadFile(*sf.caCertPath) rawCACert, err := os.ReadFile(*sf.caCertPath)
if err != nil { if err != nil {
return fmt.Errorf("error while reading ca-crt: %s", err) return fmt.Errorf("error while reading ca-crt: %s", err)
} }
@@ -92,7 +121,7 @@ func signCert(args []string, out io.Writer, errOut io.Writer) error {
return fmt.Errorf("error while parsing ca-crt: %s", err) return fmt.Errorf("error while parsing ca-crt: %s", err)
} }
if err := caCert.VerifyPrivateKey(caKey); err != nil { if err := caCert.VerifyPrivateKey(curve, caKey); err != nil {
return fmt.Errorf("refusing to sign, root certificate does not match private key") return fmt.Errorf("refusing to sign, root certificate does not match private key")
} }
@@ -148,16 +177,20 @@ func signCert(args []string, out io.Writer, errOut io.Writer) error {
var pub, rawPriv []byte var pub, rawPriv []byte
if *sf.inPubPath != "" { if *sf.inPubPath != "" {
rawPub, err := ioutil.ReadFile(*sf.inPubPath) rawPub, err := os.ReadFile(*sf.inPubPath)
if err != nil { if err != nil {
return fmt.Errorf("error while reading in-pub: %s", err) return fmt.Errorf("error while reading in-pub: %s", err)
} }
pub, _, err = cert.UnmarshalX25519PublicKey(rawPub) var pubCurve cert.Curve
pub, _, pubCurve, err = cert.UnmarshalPublicKey(rawPub)
if err != nil { if err != nil {
return fmt.Errorf("error while parsing in-pub: %s", err) return fmt.Errorf("error while parsing in-pub: %s", err)
} }
if pubCurve != curve {
return fmt.Errorf("curve of in-pub does not match ca")
}
} else { } else {
pub, rawPriv = x25519Keypair() pub, rawPriv = newKeypair(curve)
} }
nc := cert.NebulaCertificate{ nc := cert.NebulaCertificate{
@@ -171,6 +204,7 @@ func signCert(args []string, out io.Writer, errOut io.Writer) error {
PublicKey: pub, PublicKey: pub,
IsCA: false, IsCA: false,
Issuer: issuer, Issuer: issuer,
Curve: curve,
}, },
} }
@@ -190,7 +224,7 @@ func signCert(args []string, out io.Writer, errOut io.Writer) error {
return fmt.Errorf("refusing to overwrite existing cert: %s", *sf.outCertPath) return fmt.Errorf("refusing to overwrite existing cert: %s", *sf.outCertPath)
} }
err = nc.Sign(caKey) err = nc.Sign(curve, caKey)
if err != nil { if err != nil {
return fmt.Errorf("error while signing: %s", err) return fmt.Errorf("error while signing: %s", err)
} }
@@ -200,7 +234,7 @@ func signCert(args []string, out io.Writer, errOut io.Writer) error {
return fmt.Errorf("refusing to overwrite existing key: %s", *sf.outKeyPath) return fmt.Errorf("refusing to overwrite existing key: %s", *sf.outKeyPath)
} }
err = ioutil.WriteFile(*sf.outKeyPath, cert.MarshalX25519PrivateKey(rawPriv), 0600) err = os.WriteFile(*sf.outKeyPath, cert.MarshalPrivateKey(curve, rawPriv), 0600)
if err != nil { if err != nil {
return fmt.Errorf("error while writing out-key: %s", err) return fmt.Errorf("error while writing out-key: %s", err)
} }
@@ -211,7 +245,7 @@ func signCert(args []string, out io.Writer, errOut io.Writer) error {
return fmt.Errorf("error while marshalling certificate: %s", err) return fmt.Errorf("error while marshalling certificate: %s", err)
} }
err = ioutil.WriteFile(*sf.outCertPath, b, 0600) err = os.WriteFile(*sf.outCertPath, b, 0600)
if err != nil { if err != nil {
return fmt.Errorf("error while writing out-crt: %s", err) return fmt.Errorf("error while writing out-crt: %s", err)
} }
@@ -222,7 +256,7 @@ func signCert(args []string, out io.Writer, errOut io.Writer) error {
return fmt.Errorf("error while generating qr code: %s", err) return fmt.Errorf("error while generating qr code: %s", err)
} }
err = ioutil.WriteFile(*sf.outQRPath, b, 0600) err = os.WriteFile(*sf.outQRPath, b, 0600)
if err != nil { if err != nil {
return fmt.Errorf("error while writing out-qr: %s", err) return fmt.Errorf("error while writing out-qr: %s", err)
} }
@@ -231,6 +265,17 @@ func signCert(args []string, out io.Writer, errOut io.Writer) error {
return nil return nil
} }
func newKeypair(curve cert.Curve) ([]byte, []byte) {
switch curve {
case cert.Curve_CURVE25519:
return x25519Keypair()
case cert.Curve_P256:
return p256Keypair()
default:
return nil, nil
}
}
func x25519Keypair() ([]byte, []byte) { func x25519Keypair() ([]byte, []byte) {
privkey := make([]byte, 32) privkey := make([]byte, 32)
if _, err := io.ReadFull(rand.Reader, privkey); err != nil { if _, err := io.ReadFull(rand.Reader, privkey); err != nil {
@@ -245,6 +290,15 @@ func x25519Keypair() ([]byte, []byte) {
return pubkey, privkey return pubkey, privkey
} }
func p256Keypair() ([]byte, []byte) {
privkey, err := ecdh.P256().GenerateKey(rand.Reader)
if err != nil {
panic(err)
}
pubkey := privkey.PublicKey()
return pubkey.Bytes(), privkey.Bytes()
}
func signSummary() string { func signSummary() string {
return "sign <flags>: create and sign a certificate" return "sign <flags>: create and sign a certificate"
} }

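The sign hunk above first tries to parse the CA key as plaintext and only prompts (up to five times) when cert.UnmarshalSigningPrivateKey returns the ErrPrivateKeyEncrypted sentinel. A generic, self-contained sketch of that error-sentinel-plus-retry shape, with hypothetical parse/decrypt helpers standing in for the cert package:

```go
package main

import (
	"errors"
	"fmt"
)

var errKeyEncrypted = errors.New("private key is encrypted")

// parseKey pretends a leading 0x01 byte marks an encrypted key; the real code
// detects this from the PEM block type.
func parseKey(raw []byte) ([]byte, error) {
	if len(raw) > 0 && raw[0] == 0x01 {
		return nil, errKeyEncrypted
	}
	return raw, nil
}

func decryptKey(raw, passphrase []byte) ([]byte, error) {
	if string(passphrase) != "secret" {
		return nil, errors.New("invalid passphrase")
	}
	return raw[1:], nil
}

// loadKey optimistically parses, and only falls back to an interactive prompt
// (at most five attempts, rejecting empty input) for encrypted keys.
func loadKey(raw []byte, prompt func() ([]byte, error)) ([]byte, error) {
	key, err := parseKey(raw)
	if errors.Is(err, errKeyEncrypted) {
		for i := 0; i < 5; i++ {
			pass, err := prompt()
			if err != nil {
				return nil, err
			}
			if len(pass) > 0 {
				return decryptKey(raw, pass)
			}
		}
		return nil, errors.New("cannot open encrypted key without passphrase")
	}
	return key, err
}

func main() {
	raw := append([]byte{0x01}, []byte("key-bytes")...)
	key, err := loadKey(raw, func() ([]byte, error) { return []byte("secret"), nil })
	fmt.Println(string(key), err)
}
```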

@@ -6,7 +6,7 @@ package main
import ( import (
"bytes" "bytes"
"crypto/rand" "crypto/rand"
"io/ioutil" "errors"
"os" "os"
"testing" "testing"
"time" "time"
@@ -58,17 +58,39 @@ func Test_signCert(t *testing.T) {
ob := &bytes.Buffer{} ob := &bytes.Buffer{}
eb := &bytes.Buffer{} eb := &bytes.Buffer{}
nopw := &StubPasswordReader{
password: []byte(""),
err: nil,
}
errpw := &StubPasswordReader{
password: []byte(""),
err: errors.New("stub error"),
}
passphrase := []byte("DO NOT USE THIS KEY")
testpw := &StubPasswordReader{
password: passphrase,
err: nil,
}
// required args // required args
assertHelpError(t, signCert([]string{"-ca-crt", "./nope", "-ca-key", "./nope", "-ip", "1.1.1.1/24", "-out-key", "nope", "-out-crt", "nope"}, ob, eb), "-name is required") assertHelpError(t, signCert(
[]string{"-ca-crt", "./nope", "-ca-key", "./nope", "-ip", "1.1.1.1/24", "-out-key", "nope", "-out-crt", "nope"}, ob, eb, nopw,
), "-name is required")
assert.Empty(t, ob.String()) assert.Empty(t, ob.String())
assert.Empty(t, eb.String()) assert.Empty(t, eb.String())
assertHelpError(t, signCert([]string{"-ca-crt", "./nope", "-ca-key", "./nope", "-name", "test", "-out-key", "nope", "-out-crt", "nope"}, ob, eb), "-ip is required") assertHelpError(t, signCert(
[]string{"-ca-crt", "./nope", "-ca-key", "./nope", "-name", "test", "-out-key", "nope", "-out-crt", "nope"}, ob, eb, nopw,
), "-ip is required")
assert.Empty(t, ob.String()) assert.Empty(t, ob.String())
assert.Empty(t, eb.String()) assert.Empty(t, eb.String())
// cannot set -in-pub and -out-key // cannot set -in-pub and -out-key
assertHelpError(t, signCert([]string{"-ca-crt", "./nope", "-ca-key", "./nope", "-name", "test", "-in-pub", "nope", "-ip", "1.1.1.1/24", "-out-crt", "nope", "-out-key", "nope"}, ob, eb), "cannot set both -in-pub and -out-key") assertHelpError(t, signCert(
[]string{"-ca-crt", "./nope", "-ca-key", "./nope", "-name", "test", "-in-pub", "nope", "-ip", "1.1.1.1/24", "-out-crt", "nope", "-out-key", "nope"}, ob, eb, nopw,
), "cannot set both -in-pub and -out-key")
assert.Empty(t, ob.String()) assert.Empty(t, ob.String())
assert.Empty(t, eb.String()) assert.Empty(t, eb.String())
@@ -76,17 +98,17 @@ func Test_signCert(t *testing.T) {
ob.Reset() ob.Reset()
eb.Reset() eb.Reset()
args := []string{"-ca-crt", "./nope", "-ca-key", "./nope", "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", "nope", "-out-key", "nope", "-duration", "100m"} args := []string{"-ca-crt", "./nope", "-ca-key", "./nope", "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", "nope", "-out-key", "nope", "-duration", "100m"}
assert.EqualError(t, signCert(args, ob, eb), "error while reading ca-key: open ./nope: "+NoSuchFileError) assert.EqualError(t, signCert(args, ob, eb, nopw), "error while reading ca-key: open ./nope: "+NoSuchFileError)
// failed to unmarshal key // failed to unmarshal key
ob.Reset() ob.Reset()
eb.Reset() eb.Reset()
caKeyF, err := ioutil.TempFile("", "sign-cert.key") caKeyF, err := os.CreateTemp("", "sign-cert.key")
assert.Nil(t, err) assert.Nil(t, err)
defer os.Remove(caKeyF.Name()) defer os.Remove(caKeyF.Name())
args = []string{"-ca-crt", "./nope", "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", "nope", "-out-key", "nope", "-duration", "100m"} args = []string{"-ca-crt", "./nope", "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", "nope", "-out-key", "nope", "-duration", "100m"}
assert.EqualError(t, signCert(args, ob, eb), "error while parsing ca-key: input did not contain a valid PEM encoded block") assert.EqualError(t, signCert(args, ob, eb, nopw), "error while parsing ca-key: input did not contain a valid PEM encoded block")
assert.Empty(t, ob.String()) assert.Empty(t, ob.String())
assert.Empty(t, eb.String()) assert.Empty(t, eb.String())
@@ -98,19 +120,19 @@ func Test_signCert(t *testing.T) {
// failed to read cert // failed to read cert
args = []string{"-ca-crt", "./nope", "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", "nope", "-out-key", "nope", "-duration", "100m"} args = []string{"-ca-crt", "./nope", "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", "nope", "-out-key", "nope", "-duration", "100m"}
assert.EqualError(t, signCert(args, ob, eb), "error while reading ca-crt: open ./nope: "+NoSuchFileError) assert.EqualError(t, signCert(args, ob, eb, nopw), "error while reading ca-crt: open ./nope: "+NoSuchFileError)
assert.Empty(t, ob.String()) assert.Empty(t, ob.String())
assert.Empty(t, eb.String()) assert.Empty(t, eb.String())
// failed to unmarshal cert // failed to unmarshal cert
ob.Reset() ob.Reset()
eb.Reset() eb.Reset()
caCrtF, err := ioutil.TempFile("", "sign-cert.crt") caCrtF, err := os.CreateTemp("", "sign-cert.crt")
assert.Nil(t, err) assert.Nil(t, err)
defer os.Remove(caCrtF.Name()) defer os.Remove(caCrtF.Name())
args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", "nope", "-out-key", "nope", "-duration", "100m"} args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", "nope", "-out-key", "nope", "-duration", "100m"}
assert.EqualError(t, signCert(args, ob, eb), "error while parsing ca-crt: input did not contain a valid PEM encoded block") assert.EqualError(t, signCert(args, ob, eb, nopw), "error while parsing ca-crt: input did not contain a valid PEM encoded block")
assert.Empty(t, ob.String()) assert.Empty(t, ob.String())
assert.Empty(t, eb.String()) assert.Empty(t, eb.String())
@@ -129,19 +151,19 @@ func Test_signCert(t *testing.T) {
// failed to read pub // failed to read pub
args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", "nope", "-in-pub", "./nope", "-duration", "100m"} args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", "nope", "-in-pub", "./nope", "-duration", "100m"}
assert.EqualError(t, signCert(args, ob, eb), "error while reading in-pub: open ./nope: "+NoSuchFileError) assert.EqualError(t, signCert(args, ob, eb, nopw), "error while reading in-pub: open ./nope: "+NoSuchFileError)
assert.Empty(t, ob.String()) assert.Empty(t, ob.String())
assert.Empty(t, eb.String()) assert.Empty(t, eb.String())
// failed to unmarshal pub // failed to unmarshal pub
ob.Reset() ob.Reset()
eb.Reset() eb.Reset()
inPubF, err := ioutil.TempFile("", "in.pub") inPubF, err := os.CreateTemp("", "in.pub")
assert.Nil(t, err) assert.Nil(t, err)
defer os.Remove(inPubF.Name()) defer os.Remove(inPubF.Name())
args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", "nope", "-in-pub", inPubF.Name(), "-duration", "100m"} args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", "nope", "-in-pub", inPubF.Name(), "-duration", "100m"}
assert.EqualError(t, signCert(args, ob, eb), "error while parsing in-pub: input did not contain a valid PEM encoded block") assert.EqualError(t, signCert(args, ob, eb, nopw), "error while parsing in-pub: input did not contain a valid PEM encoded block")
assert.Empty(t, ob.String()) assert.Empty(t, ob.String())
assert.Empty(t, eb.String()) assert.Empty(t, eb.String())
@@ -155,14 +177,14 @@ func Test_signCert(t *testing.T) {
ob.Reset() ob.Reset()
eb.Reset() eb.Reset()
args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "a1.1.1.1/24", "-out-crt", "nope", "-out-key", "nope", "-duration", "100m"} args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "a1.1.1.1/24", "-out-crt", "nope", "-out-key", "nope", "-duration", "100m"}
assertHelpError(t, signCert(args, ob, eb), "invalid ip definition: invalid CIDR address: a1.1.1.1/24") assertHelpError(t, signCert(args, ob, eb, nopw), "invalid ip definition: invalid CIDR address: a1.1.1.1/24")
assert.Empty(t, ob.String()) assert.Empty(t, ob.String())
assert.Empty(t, eb.String()) assert.Empty(t, eb.String())
ob.Reset() ob.Reset()
eb.Reset() eb.Reset()
args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "100::100/100", "-out-crt", "nope", "-out-key", "nope", "-duration", "100m"} args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "100::100/100", "-out-crt", "nope", "-out-key", "nope", "-duration", "100m"}
assertHelpError(t, signCert(args, ob, eb), "invalid ip definition: can only be ipv4, have 100::100/100") assertHelpError(t, signCert(args, ob, eb, nopw), "invalid ip definition: can only be ipv4, have 100::100/100")
assert.Empty(t, ob.String()) assert.Empty(t, ob.String())
assert.Empty(t, eb.String()) assert.Empty(t, eb.String())
@@ -170,20 +192,20 @@ func Test_signCert(t *testing.T) {
ob.Reset() ob.Reset()
eb.Reset() eb.Reset()
args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", "nope", "-out-key", "nope", "-duration", "100m", "-subnets", "a"} args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", "nope", "-out-key", "nope", "-duration", "100m", "-subnets", "a"}
assertHelpError(t, signCert(args, ob, eb), "invalid subnet definition: invalid CIDR address: a") assertHelpError(t, signCert(args, ob, eb, nopw), "invalid subnet definition: invalid CIDR address: a")
assert.Empty(t, ob.String()) assert.Empty(t, ob.String())
assert.Empty(t, eb.String()) assert.Empty(t, eb.String())
ob.Reset() ob.Reset()
eb.Reset() eb.Reset()
args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", "nope", "-out-key", "nope", "-duration", "100m", "-subnets", "100::100/100"} args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", "nope", "-out-key", "nope", "-duration", "100m", "-subnets", "100::100/100"}
assertHelpError(t, signCert(args, ob, eb), "invalid subnet definition: can only be ipv4, have 100::100/100") assertHelpError(t, signCert(args, ob, eb, nopw), "invalid subnet definition: can only be ipv4, have 100::100/100")
assert.Empty(t, ob.String()) assert.Empty(t, ob.String())
assert.Empty(t, eb.String()) assert.Empty(t, eb.String())
// mismatched ca key // mismatched ca key
_, caPriv2, _ := ed25519.GenerateKey(rand.Reader) _, caPriv2, _ := ed25519.GenerateKey(rand.Reader)
caKeyF2, err := ioutil.TempFile("", "sign-cert-2.key") caKeyF2, err := os.CreateTemp("", "sign-cert-2.key")
assert.Nil(t, err) assert.Nil(t, err)
defer os.Remove(caKeyF2.Name()) defer os.Remove(caKeyF2.Name())
caKeyF2.Write(cert.MarshalEd25519PrivateKey(caPriv2)) caKeyF2.Write(cert.MarshalEd25519PrivateKey(caPriv2))
@@ -191,7 +213,7 @@ func Test_signCert(t *testing.T) {
ob.Reset() ob.Reset()
eb.Reset() eb.Reset()
args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF2.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", "nope", "-out-key", "nope", "-duration", "100m", "-subnets", "a"} args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF2.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", "nope", "-out-key", "nope", "-duration", "100m", "-subnets", "a"}
assert.EqualError(t, signCert(args, ob, eb), "refusing to sign, root certificate does not match private key") assert.EqualError(t, signCert(args, ob, eb, nopw), "refusing to sign, root certificate does not match private key")
assert.Empty(t, ob.String()) assert.Empty(t, ob.String())
assert.Empty(t, eb.String()) assert.Empty(t, eb.String())
@@ -199,12 +221,12 @@ func Test_signCert(t *testing.T) {
ob.Reset() ob.Reset()
eb.Reset() eb.Reset()
args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", "/do/not/write/pleasecrt", "-out-key", "/do/not/write/pleasekey", "-duration", "100m", "-subnets", "10.1.1.1/32"} args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", "/do/not/write/pleasecrt", "-out-key", "/do/not/write/pleasekey", "-duration", "100m", "-subnets", "10.1.1.1/32"}
assert.EqualError(t, signCert(args, ob, eb), "error while writing out-key: open /do/not/write/pleasekey: "+NoSuchDirError) assert.EqualError(t, signCert(args, ob, eb, nopw), "error while writing out-key: open /do/not/write/pleasekey: "+NoSuchDirError)
assert.Empty(t, ob.String()) assert.Empty(t, ob.String())
assert.Empty(t, eb.String()) assert.Empty(t, eb.String())
// create temp key file // create temp key file
keyF, err := ioutil.TempFile("", "test.key") keyF, err := os.CreateTemp("", "test.key")
assert.Nil(t, err) assert.Nil(t, err)
os.Remove(keyF.Name()) os.Remove(keyF.Name())
@@ -212,13 +234,13 @@ func Test_signCert(t *testing.T) {
ob.Reset() ob.Reset()
eb.Reset() eb.Reset()
args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", "/do/not/write/pleasecrt", "-out-key", keyF.Name(), "-duration", "100m", "-subnets", "10.1.1.1/32"} args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", "/do/not/write/pleasecrt", "-out-key", keyF.Name(), "-duration", "100m", "-subnets", "10.1.1.1/32"}
assert.EqualError(t, signCert(args, ob, eb), "error while writing out-crt: open /do/not/write/pleasecrt: "+NoSuchDirError) assert.EqualError(t, signCert(args, ob, eb, nopw), "error while writing out-crt: open /do/not/write/pleasecrt: "+NoSuchDirError)
assert.Empty(t, ob.String()) assert.Empty(t, ob.String())
assert.Empty(t, eb.String()) assert.Empty(t, eb.String())
os.Remove(keyF.Name()) os.Remove(keyF.Name())
// create temp cert file // create temp cert file
crtF, err := ioutil.TempFile("", "test.crt") crtF, err := os.CreateTemp("", "test.crt")
assert.Nil(t, err) assert.Nil(t, err)
os.Remove(crtF.Name()) os.Remove(crtF.Name())
@@ -226,18 +248,18 @@ func Test_signCert(t *testing.T) {
ob.Reset() ob.Reset()
eb.Reset() eb.Reset()
args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", crtF.Name(), "-out-key", keyF.Name(), "-duration", "100m", "-subnets", "10.1.1.1/32, , 10.2.2.2/32 , , ,, 10.5.5.5/32", "-groups", "1,, 2 , ,,,3,4,5"} args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", crtF.Name(), "-out-key", keyF.Name(), "-duration", "100m", "-subnets", "10.1.1.1/32, , 10.2.2.2/32 , , ,, 10.5.5.5/32", "-groups", "1,, 2 , ,,,3,4,5"}
assert.Nil(t, signCert(args, ob, eb)) assert.Nil(t, signCert(args, ob, eb, nopw))
assert.Empty(t, ob.String()) assert.Empty(t, ob.String())
assert.Empty(t, eb.String()) assert.Empty(t, eb.String())
// read cert and key files // read cert and key files
rb, _ := ioutil.ReadFile(keyF.Name()) rb, _ := os.ReadFile(keyF.Name())
lKey, b, err := cert.UnmarshalX25519PrivateKey(rb) lKey, b, err := cert.UnmarshalX25519PrivateKey(rb)
assert.Len(t, b, 0) assert.Len(t, b, 0)
assert.Nil(t, err) assert.Nil(t, err)
assert.Len(t, lKey, 32) assert.Len(t, lKey, 32)
rb, _ = ioutil.ReadFile(crtF.Name()) rb, _ = os.ReadFile(crtF.Name())
lCrt, b, err := cert.UnmarshalNebulaCertificateFromPEM(rb) lCrt, b, err := cert.UnmarshalNebulaCertificateFromPEM(rb)
assert.Len(t, b, 0) assert.Len(t, b, 0)
assert.Nil(t, err) assert.Nil(t, err)
@@ -268,12 +290,12 @@ func Test_signCert(t *testing.T) {
ob.Reset() ob.Reset()
eb.Reset() eb.Reset()
args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", crtF.Name(), "-in-pub", inPubF.Name(), "-duration", "100m", "-groups", "1"} args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", crtF.Name(), "-in-pub", inPubF.Name(), "-duration", "100m", "-groups", "1"}
assert.Nil(t, signCert(args, ob, eb)) assert.Nil(t, signCert(args, ob, eb, nopw))
assert.Empty(t, ob.String()) assert.Empty(t, ob.String())
assert.Empty(t, eb.String()) assert.Empty(t, eb.String())
// read cert file and check pub key matches in-pub // read cert file and check pub key matches in-pub
rb, _ = ioutil.ReadFile(crtF.Name()) rb, _ = os.ReadFile(crtF.Name())
lCrt, b, err = cert.UnmarshalNebulaCertificateFromPEM(rb) lCrt, b, err = cert.UnmarshalNebulaCertificateFromPEM(rb)
assert.Len(t, b, 0) assert.Len(t, b, 0)
assert.Nil(t, err) assert.Nil(t, err)
@@ -283,7 +305,7 @@ func Test_signCert(t *testing.T) {
ob.Reset() ob.Reset()
eb.Reset() eb.Reset()
args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", crtF.Name(), "-out-key", keyF.Name(), "-duration", "1000m", "-subnets", "10.1.1.1/32, , 10.2.2.2/32 , , ,, 10.5.5.5/32", "-groups", "1,, 2 , ,,,3,4,5"} args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", crtF.Name(), "-out-key", keyF.Name(), "-duration", "1000m", "-subnets", "10.1.1.1/32, , 10.2.2.2/32 , , ,, 10.5.5.5/32", "-groups", "1,, 2 , ,,,3,4,5"}
assert.EqualError(t, signCert(args, ob, eb), "refusing to sign, root certificate constraints violated: certificate expires after signing certificate") assert.EqualError(t, signCert(args, ob, eb, nopw), "refusing to sign, root certificate constraints violated: certificate expires after signing certificate")
assert.Empty(t, ob.String()) assert.Empty(t, ob.String())
assert.Empty(t, eb.String()) assert.Empty(t, eb.String())
@@ -291,14 +313,14 @@ func Test_signCert(t *testing.T) {
os.Remove(keyF.Name()) os.Remove(keyF.Name())
os.Remove(crtF.Name()) os.Remove(crtF.Name())
args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", crtF.Name(), "-out-key", keyF.Name(), "-duration", "100m", "-subnets", "10.1.1.1/32, , 10.2.2.2/32 , , ,, 10.5.5.5/32", "-groups", "1,, 2 , ,,,3,4,5"} args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", crtF.Name(), "-out-key", keyF.Name(), "-duration", "100m", "-subnets", "10.1.1.1/32, , 10.2.2.2/32 , , ,, 10.5.5.5/32", "-groups", "1,, 2 , ,,,3,4,5"}
assert.Nil(t, signCert(args, ob, eb)) assert.Nil(t, signCert(args, ob, eb, nopw))
// test that we won't overwrite existing key file // test that we won't overwrite existing key file
os.Remove(crtF.Name()) os.Remove(crtF.Name())
ob.Reset() ob.Reset()
eb.Reset() eb.Reset()
args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", crtF.Name(), "-out-key", keyF.Name(), "-duration", "100m", "-subnets", "10.1.1.1/32, , 10.2.2.2/32 , , ,, 10.5.5.5/32", "-groups", "1,, 2 , ,,,3,4,5"} args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", crtF.Name(), "-out-key", keyF.Name(), "-duration", "100m", "-subnets", "10.1.1.1/32, , 10.2.2.2/32 , , ,, 10.5.5.5/32", "-groups", "1,, 2 , ,,,3,4,5"}
assert.EqualError(t, signCert(args, ob, eb), "refusing to overwrite existing key: "+keyF.Name()) assert.EqualError(t, signCert(args, ob, eb, nopw), "refusing to overwrite existing key: "+keyF.Name())
assert.Empty(t, ob.String()) assert.Empty(t, ob.String())
assert.Empty(t, eb.String()) assert.Empty(t, eb.String())
@@ -306,14 +328,83 @@ func Test_signCert(t *testing.T) {
os.Remove(keyF.Name()) os.Remove(keyF.Name())
os.Remove(crtF.Name()) os.Remove(crtF.Name())
args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", crtF.Name(), "-out-key", keyF.Name(), "-duration", "100m", "-subnets", "10.1.1.1/32, , 10.2.2.2/32 , , ,, 10.5.5.5/32", "-groups", "1,, 2 , ,,,3,4,5"} args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", crtF.Name(), "-out-key", keyF.Name(), "-duration", "100m", "-subnets", "10.1.1.1/32, , 10.2.2.2/32 , , ,, 10.5.5.5/32", "-groups", "1,, 2 , ,,,3,4,5"}
assert.Nil(t, signCert(args, ob, eb)) assert.Nil(t, signCert(args, ob, eb, nopw))
// test that we won't overwrite existing certificate file // test that we won't overwrite existing certificate file
os.Remove(keyF.Name()) os.Remove(keyF.Name())
ob.Reset() ob.Reset()
eb.Reset() eb.Reset()
args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", crtF.Name(), "-out-key", keyF.Name(), "-duration", "100m", "-subnets", "10.1.1.1/32, , 10.2.2.2/32 , , ,, 10.5.5.5/32", "-groups", "1,, 2 , ,,,3,4,5"} args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", crtF.Name(), "-out-key", keyF.Name(), "-duration", "100m", "-subnets", "10.1.1.1/32, , 10.2.2.2/32 , , ,, 10.5.5.5/32", "-groups", "1,, 2 , ,,,3,4,5"}
assert.EqualError(t, signCert(args, ob, eb), "refusing to overwrite existing cert: "+crtF.Name()) assert.EqualError(t, signCert(args, ob, eb, nopw), "refusing to overwrite existing cert: "+crtF.Name())
assert.Empty(t, ob.String()) assert.Empty(t, ob.String())
assert.Empty(t, eb.String()) assert.Empty(t, eb.String())
// create valid cert/key using encrypted CA key
os.Remove(caKeyF.Name())
os.Remove(caCrtF.Name())
os.Remove(keyF.Name())
os.Remove(crtF.Name())
ob.Reset()
eb.Reset()
caKeyF, err = os.CreateTemp("", "sign-cert.key")
assert.Nil(t, err)
defer os.Remove(caKeyF.Name())
caCrtF, err = os.CreateTemp("", "sign-cert.crt")
assert.Nil(t, err)
defer os.Remove(caCrtF.Name())
// generate the encrypted key
caPub, caPriv, _ = ed25519.GenerateKey(rand.Reader)
kdfParams := cert.NewArgon2Parameters(64*1024, 4, 3)
b, _ = cert.EncryptAndMarshalSigningPrivateKey(cert.Curve_CURVE25519, caPriv, passphrase, kdfParams)
caKeyF.Write(b)
ca = cert.NebulaCertificate{
Details: cert.NebulaCertificateDetails{
Name: "ca",
NotBefore: time.Now(),
NotAfter: time.Now().Add(time.Minute * 200),
PublicKey: caPub,
IsCA: true,
},
}
b, _ = ca.MarshalToPEM()
caCrtF.Write(b)
// test with the proper password
args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", crtF.Name(), "-out-key", keyF.Name(), "-duration", "100m", "-subnets", "10.1.1.1/32, , 10.2.2.2/32 , , ,, 10.5.5.5/32", "-groups", "1,, 2 , ,,,3,4,5"}
assert.Nil(t, signCert(args, ob, eb, testpw))
assert.Equal(t, "Enter passphrase: ", ob.String())
assert.Empty(t, eb.String())
// test with the wrong password
ob.Reset()
eb.Reset()
testpw.password = []byte("invalid password")
args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", crtF.Name(), "-out-key", keyF.Name(), "-duration", "100m", "-subnets", "10.1.1.1/32, , 10.2.2.2/32 , , ,, 10.5.5.5/32", "-groups", "1,, 2 , ,,,3,4,5"}
assert.Error(t, signCert(args, ob, eb, testpw))
assert.Equal(t, "Enter passphrase: ", ob.String())
assert.Empty(t, eb.String())
// test with the user not entering a password
ob.Reset()
eb.Reset()
args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", crtF.Name(), "-out-key", keyF.Name(), "-duration", "100m", "-subnets", "10.1.1.1/32, , 10.2.2.2/32 , , ,, 10.5.5.5/32", "-groups", "1,, 2 , ,,,3,4,5"}
assert.Error(t, signCert(args, ob, eb, nopw))
// normally the user hitting enter on the prompt would add newlines between these
assert.Equal(t, "Enter passphrase: Enter passphrase: Enter passphrase: Enter passphrase: Enter passphrase: ", ob.String())
assert.Empty(t, eb.String())
// test an error condition
ob.Reset()
eb.Reset()
args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", crtF.Name(), "-out-key", keyF.Name(), "-duration", "100m", "-subnets", "10.1.1.1/32, , 10.2.2.2/32 , , ,, 10.5.5.5/32", "-groups", "1,, 2 , ,,,3,4,5"}
assert.Error(t, signCert(args, ob, eb, errpw))
assert.Equal(t, "Enter passphrase: ", ob.String())
assert.Empty(t, eb.String())
} }

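The encrypted-CA portion of the test above builds its key with cert.NewArgon2Parameters(64*1024, 4, 3) before calling EncryptAndMarshalSigningPrivateKey. Assuming those arguments are memory (KiB), parallelism, and iterations, the same settings map onto golang.org/x/crypto/argon2 as follows when deriving a key-encryption key; this illustrates the KDF parameters only, not the cert package's actual encryption code:

```go
package main

import (
	"crypto/rand"
	"fmt"

	"golang.org/x/crypto/argon2"
)

func main() {
	passphrase := []byte("DO NOT USE THIS KEY") // same placeholder as the test
	salt := make([]byte, 32)
	if _, err := rand.Read(salt); err != nil {
		panic(err)
	}
	// Argon2id with iterations=3, memory=64 MiB (64*1024 KiB), parallelism=4,
	// producing a 32-byte key-encryption key.
	kek := argon2.IDKey(passphrase, salt, 3, 64*1024, 4, 32)
	fmt.Printf("derived %d-byte key-encryption key\n", len(kek))
}
```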

@@ -4,7 +4,6 @@ import (
"flag" "flag"
"fmt" "fmt"
"io" "io"
"io/ioutil"
"os" "os"
"strings" "strings"
"time" "time"
@@ -40,7 +39,7 @@ func verify(args []string, out io.Writer, errOut io.Writer) error {
return err return err
} }
rawCACert, err := ioutil.ReadFile(*vf.caPath) rawCACert, err := os.ReadFile(*vf.caPath)
if err != nil { if err != nil {
return fmt.Errorf("error while reading ca: %s", err) return fmt.Errorf("error while reading ca: %s", err)
} }
@@ -57,7 +56,7 @@ func verify(args []string, out io.Writer, errOut io.Writer) error {
} }
} }
rawCert, err := ioutil.ReadFile(*vf.certPath) rawCert, err := os.ReadFile(*vf.certPath)
if err != nil { if err != nil {
return fmt.Errorf("unable to read crt; %s", err) return fmt.Errorf("unable to read crt; %s", err)
} }


@@ -3,7 +3,6 @@ package main
import ( import (
"bytes" "bytes"
"crypto/rand" "crypto/rand"
"io/ioutil"
"os" "os"
"testing" "testing"
"time" "time"
@@ -56,7 +55,7 @@ func Test_verify(t *testing.T) {
// invalid ca at path // invalid ca at path
ob.Reset() ob.Reset()
eb.Reset() eb.Reset()
caFile, err := ioutil.TempFile("", "verify-ca") caFile, err := os.CreateTemp("", "verify-ca")
assert.Nil(t, err) assert.Nil(t, err)
defer os.Remove(caFile.Name()) defer os.Remove(caFile.Name())
@@ -77,7 +76,7 @@ func Test_verify(t *testing.T) {
IsCA: true, IsCA: true,
}, },
} }
ca.Sign(caPriv) ca.Sign(cert.Curve_CURVE25519, caPriv)
b, _ := ca.MarshalToPEM() b, _ := ca.MarshalToPEM()
caFile.Truncate(0) caFile.Truncate(0)
caFile.Seek(0, 0) caFile.Seek(0, 0)
@@ -92,7 +91,7 @@ func Test_verify(t *testing.T) {
// invalid crt at path // invalid crt at path
ob.Reset() ob.Reset()
eb.Reset() eb.Reset()
certFile, err := ioutil.TempFile("", "verify-cert") certFile, err := os.CreateTemp("", "verify-cert")
assert.Nil(t, err) assert.Nil(t, err)
defer os.Remove(certFile.Name()) defer os.Remove(certFile.Name())
@@ -117,7 +116,7 @@ func Test_verify(t *testing.T) {
}, },
} }
crt.Sign(badPriv) crt.Sign(cert.Curve_CURVE25519, badPriv)
b, _ = crt.MarshalToPEM() b, _ = crt.MarshalToPEM()
certFile.Truncate(0) certFile.Truncate(0)
certFile.Seek(0, 0) certFile.Seek(0, 0)
@@ -129,7 +128,7 @@ func Test_verify(t *testing.T) {
assert.EqualError(t, err, "certificate signature did not match") assert.EqualError(t, err, "certificate signature did not match")
// verified cert at path // verified cert at path
crt.Sign(caPriv) crt.Sign(cert.Curve_CURVE25519, caPriv)
b, _ = crt.MarshalToPEM() b, _ = crt.MarshalToPEM()
certFile.Truncate(0) certFile.Truncate(0)
certFile.Seek(0, 0) certFile.Seek(0, 0)


@@ -59,13 +59,8 @@ func main() {
} }
ctrl, err := nebula.Main(c, *configTest, Build, l, nil) ctrl, err := nebula.Main(c, *configTest, Build, l, nil)
if err != nil {
switch v := err.(type) { util.LogWithContextIfNeeded("Failed to start", err, l)
case util.ContextualError:
v.Log(l)
os.Exit(1)
case error:
l.WithError(err).Error("Failed to start")
os.Exit(1) os.Exit(1)
} }


@@ -49,6 +49,14 @@ func (p *program) Stop(s service.Service) error {
return nil return nil
} }
func fileExists(filename string) bool {
_, err := os.Stat(filename)
if os.IsNotExist(err) {
return false
}
return true
}
func doService(configPath *string, configTest *bool, build string, serviceFlag *string) { func doService(configPath *string, configTest *bool, build string, serviceFlag *string) {
if *configPath == "" { if *configPath == "" {
ex, err := os.Executable() ex, err := os.Executable()
@@ -56,6 +64,9 @@ func doService(configPath *string, configTest *bool, build string, serviceFlag *
panic(err) panic(err)
} }
*configPath = filepath.Dir(ex) + "/config.yaml" *configPath = filepath.Dir(ex) + "/config.yaml"
if !fileExists(*configPath) {
*configPath = filepath.Dir(ex) + "/config.yml"
}
} }
svcConfig := &service.Config{ svcConfig := &service.Config{

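One note on the fileExists helper added above: os.IsNotExist(err) is only true for the "does not exist" case, so any other Stat failure (a permission error, for instance) makes the helper report that the file exists, which is harmless for a config-path fallback but worth knowing. A stricter variant that surfaces such errors, shown only for comparison:

```go
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

// fileExists distinguishes "missing" from other Stat errors instead of
// folding them into "exists".
func fileExists(filename string) (bool, error) {
	_, err := os.Stat(filename)
	if err == nil {
		return true, nil
	}
	if errors.Is(err, fs.ErrNotExist) {
		return false, nil
	}
	return false, err // e.g. permission errors are surfaced, not treated as "exists"
}

func main() {
	ok, err := fileExists("/etc/hosts")
	fmt.Println(ok, err)
}
```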

@@ -53,18 +53,14 @@ func main() {
} }
ctrl, err := nebula.Main(c, *configTest, Build, l, nil) ctrl, err := nebula.Main(c, *configTest, Build, l, nil)
if err != nil {
switch v := err.(type) { util.LogWithContextIfNeeded("Failed to start", err, l)
case util.ContextualError:
v.Log(l)
os.Exit(1)
case error:
l.WithError(err).Error("Failed to start")
os.Exit(1) os.Exit(1)
} }
if !*configTest { if !*configTest {
ctrl.Start() ctrl.Start()
notifyReady(l)
ctrl.ShutdownBlock() ctrl.ShutdownBlock()
} }

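Both main() hunks above collapse the error type switch into a single util.LogWithContextIfNeeded call. The sketch below shows the general shape such a helper tends to have (log a contextual error with its own context, otherwise fall back to the supplied message); it is an assumption for illustration, not the actual slackhq/nebula/util implementation:

```go
package main

import (
	"errors"

	"github.com/sirupsen/logrus"
)

// contextualError is an illustrative stand-in for util.ContextualError.
type contextualError struct {
	Context string
	Err     error
}

func (c contextualError) Error() string { return c.Context + ": " + c.Err.Error() }

// logWithContextIfNeeded prefers the error's own context when it carries one.
func logWithContextIfNeeded(msg string, err error, l *logrus.Logger) {
	var ce contextualError
	if errors.As(err, &ce) {
		l.WithError(ce.Err).Error(ce.Context)
		return
	}
	l.WithError(err).Error(msg)
}

func main() {
	l := logrus.New()
	logWithContextIfNeeded("Failed to start", errors.New("boom"), l)
	logWithContextIfNeeded("Failed to start", contextualError{"failed to load config", errors.New("no such file")}, l)
}
```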

@@ -0,0 +1,42 @@
package main
import (
"net"
"os"
"time"
"github.com/sirupsen/logrus"
)
// SdNotifyReady tells systemd the service is ready and dependent services can now be started
// https://www.freedesktop.org/software/systemd/man/sd_notify.html
// https://www.freedesktop.org/software/systemd/man/systemd.service.html
const SdNotifyReady = "READY=1"
func notifyReady(l *logrus.Logger) {
sockName := os.Getenv("NOTIFY_SOCKET")
if sockName == "" {
l.Debugln("NOTIFY_SOCKET systemd env var not set, not sending ready signal")
return
}
conn, err := net.DialTimeout("unixgram", sockName, time.Second)
if err != nil {
l.WithError(err).Error("failed to connect to systemd notification socket")
return
}
defer conn.Close()
err = conn.SetWriteDeadline(time.Now().Add(time.Second))
if err != nil {
l.WithError(err).Error("failed to set the write deadline for the systemd notification socket")
return
}
if _, err = conn.Write([]byte(SdNotifyReady)); err != nil {
l.WithError(err).Error("failed to signal the systemd notification socket")
return
}
l.Debugln("notified systemd the service is ready")
}

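notifyReady above speaks systemd's sd_notify protocol: a single READY=1 datagram sent to the unixgram socket named by NOTIFY_SOCKET (the unit must be Type=notify for systemd to wait on it). The snippet below, with an illustrative socket path, stands up a local unixgram listener so the same datagram path can be exercised without systemd:

```go
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	path := "/tmp/notify-sock-example"
	os.Remove(path)
	addr := &net.UnixAddr{Name: path, Net: "unixgram"}
	l, err := net.ListenUnixgram("unixgram", addr)
	if err != nil {
		panic(err)
	}
	defer l.Close()
	defer os.Remove(path)

	// The daemon side: effectively what notifyReady does with NOTIFY_SOCKET.
	go func() {
		conn, err := net.DialTimeout("unixgram", path, time.Second)
		if err != nil {
			return
		}
		defer conn.Close()
		conn.Write([]byte("READY=1"))
	}()

	buf := make([]byte, 64)
	l.SetReadDeadline(time.Now().Add(time.Second))
	n, _, err := l.ReadFromUnix(buf)
	if err != nil {
		panic(err)
	}
	fmt.Printf("received: %q\n", buf[:n])
}
```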

@@ -0,0 +1,10 @@
//go:build !linux
// +build !linux
package main
import "github.com/sirupsen/logrus"
func notifyReady(_ *logrus.Logger) {
// No init service to notify
}


@@ -4,17 +4,18 @@ import (
"context" "context"
"errors" "errors"
"fmt" "fmt"
"io/ioutil" "math"
"os" "os"
"os/signal" "os/signal"
"path/filepath" "path/filepath"
"sort" "sort"
"strconv" "strconv"
"strings" "strings"
"sync"
"syscall" "syscall"
"time" "time"
"github.com/imdario/mergo" "dario.cat/mergo"
"github.com/sirupsen/logrus" "github.com/sirupsen/logrus"
"gopkg.in/yaml.v2" "gopkg.in/yaml.v2"
) )
@@ -26,6 +27,7 @@ type C struct {
oldSettings map[interface{}]interface{} oldSettings map[interface{}]interface{}
callbacks []func(*C) callbacks []func(*C)
l *logrus.Logger l *logrus.Logger
reloadLock sync.Mutex
} }
func NewC(l *logrus.Logger) *C { func NewC(l *logrus.Logger) *C {
@@ -74,6 +76,11 @@ func (c *C) RegisterReloadCallback(f func(*C)) {
c.callbacks = append(c.callbacks, f) c.callbacks = append(c.callbacks, f)
} }
// InitialLoad returns true if this is the first load of the config, and ReloadConfig has not been called yet.
func (c *C) InitialLoad() bool {
return c.oldSettings == nil
}
// HasChanged checks if the underlying structure of the provided key has changed after a config reload. The value of // HasChanged checks if the underlying structure of the provided key has changed after a config reload. The value of
// k in both the old and new settings will be serialized, the result of the string comparison is returned. // k in both the old and new settings will be serialized, the result of the string comparison is returned.
// If k is an empty string the entire config is tested. // If k is an empty string the entire config is tested.
@@ -114,6 +121,10 @@ func (c *C) HasChanged(k string) bool {
// CatchHUP will listen for the HUP signal in a go routine and reload all configs found in the // CatchHUP will listen for the HUP signal in a go routine and reload all configs found in the
// original path provided to Load. The old settings are shallow copied for change detection after the reload. // original path provided to Load. The old settings are shallow copied for change detection after the reload.
func (c *C) CatchHUP(ctx context.Context) { func (c *C) CatchHUP(ctx context.Context) {
if c.path == "" {
return
}
ch := make(chan os.Signal, 1) ch := make(chan os.Signal, 1)
signal.Notify(ch, syscall.SIGHUP) signal.Notify(ch, syscall.SIGHUP)
@@ -133,6 +144,9 @@ func (c *C) CatchHUP(ctx context.Context) {
} }
func (c *C) ReloadConfig() { func (c *C) ReloadConfig() {
c.reloadLock.Lock()
defer c.reloadLock.Unlock()
c.oldSettings = make(map[interface{}]interface{}) c.oldSettings = make(map[interface{}]interface{})
for k, v := range c.Settings { for k, v := range c.Settings {
c.oldSettings[k] = v c.oldSettings[k] = v
@@ -149,6 +163,27 @@ func (c *C) ReloadConfig() {
} }
} }
func (c *C) ReloadConfigString(raw string) error {
c.reloadLock.Lock()
defer c.reloadLock.Unlock()
c.oldSettings = make(map[interface{}]interface{})
for k, v := range c.Settings {
c.oldSettings[k] = v
}
err := c.LoadString(raw)
if err != nil {
return err
}
for _, v := range c.callbacks {
v(c)
}
return nil
}
// GetString will get the string for k or return the default d if not found or invalid // GetString will get the string for k or return the default d if not found or invalid
func (c *C) GetString(k, d string) string { func (c *C) GetString(k, d string) string {
r := c.Get(k) r := c.Get(k)
@@ -205,6 +240,15 @@ func (c *C) GetInt(k string, d int) int {
return v return v
} }
// GetUint32 will get the uint32 for k or return the default d if not found or invalid
func (c *C) GetUint32(k string, d uint32) uint32 {
r := c.GetInt(k, int(d))
if uint64(r) > uint64(math.MaxUint32) {
return d
}
return uint32(r)
}
// GetBool will get the bool for k or return the default d if not found or invalid // GetBool will get the bool for k or return the default d if not found or invalid
func (c *C) GetBool(k string, d bool) bool { func (c *C) GetBool(k string, d bool) bool {
r := strings.ToLower(c.GetString(k, fmt.Sprintf("%v", d))) r := strings.ToLower(c.GetString(k, fmt.Sprintf("%v", d)))
@@ -317,7 +361,7 @@ func (c *C) parse() error {
var m map[interface{}]interface{} var m map[interface{}]interface{}
for _, path := range c.files { for _, path := range c.files {
b, err := ioutil.ReadFile(path) b, err := os.ReadFile(path)
if err != nil { if err != nil {
return err return err
} }

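Of the config additions above, GetUint32 is worth a second look: it funnels through GetInt, so any value outside the uint32 range, including negatives (which wrap past MaxUint32 after the uint64 conversion), falls back to the default. A tiny standalone demonstration of that edge behavior:

```go
package main

import (
	"fmt"
	"math"
)

// getUint32 mirrors the helper above: out-of-range ints fall back to the default.
func getUint32(r int, d uint32) uint32 {
	if uint64(r) > uint64(math.MaxUint32) {
		return d
	}
	return uint32(r)
}

func main() {
	fmt.Println(getUint32(4242, 10)) // 4242
	fmt.Println(getUint32(-1, 10))   // 10: -1 converts to a huge uint64, so the default wins
	// Values above math.MaxUint32 also return the default for the same reason.
}
```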

@@ -1,22 +1,24 @@
package config package config
import ( import (
"io/ioutil"
"os" "os"
"path/filepath" "path/filepath"
"testing" "testing"
"time" "time"
"dario.cat/mergo"
"github.com/slackhq/nebula/test" "github.com/slackhq/nebula/test"
"github.com/stretchr/testify/assert" "github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"gopkg.in/yaml.v2"
) )
func TestConfig_Load(t *testing.T) { func TestConfig_Load(t *testing.T) {
l := test.NewLogger() l := test.NewLogger()
dir, err := ioutil.TempDir("", "config-test") dir, err := os.MkdirTemp("", "config-test")
// invalid yaml // invalid yaml
c := NewC(l) c := NewC(l)
ioutil.WriteFile(filepath.Join(dir, "01.yaml"), []byte(" invalid yaml"), 0644) os.WriteFile(filepath.Join(dir, "01.yaml"), []byte(" invalid yaml"), 0644)
assert.EqualError(t, c.Load(dir), "yaml: unmarshal errors:\n line 1: cannot unmarshal !!str `invalid...` into map[interface {}]interface {}") assert.EqualError(t, c.Load(dir), "yaml: unmarshal errors:\n line 1: cannot unmarshal !!str `invalid...` into map[interface {}]interface {}")
// simple multi config merge // simple multi config merge
@@ -26,8 +28,8 @@ func TestConfig_Load(t *testing.T) {
assert.Nil(t, err) assert.Nil(t, err)
ioutil.WriteFile(filepath.Join(dir, "01.yaml"), []byte("outer:\n inner: hi"), 0644) os.WriteFile(filepath.Join(dir, "01.yaml"), []byte("outer:\n inner: hi"), 0644)
ioutil.WriteFile(filepath.Join(dir, "02.yml"), []byte("outer:\n inner: override\nnew: hi"), 0644) os.WriteFile(filepath.Join(dir, "02.yml"), []byte("outer:\n inner: override\nnew: hi"), 0644)
assert.Nil(t, c.Load(dir)) assert.Nil(t, c.Load(dir))
expected := map[interface{}]interface{}{ expected := map[interface{}]interface{}{
"outer": map[interface{}]interface{}{ "outer": map[interface{}]interface{}{
@@ -117,9 +119,9 @@ func TestConfig_HasChanged(t *testing.T) {
func TestConfig_ReloadConfig(t *testing.T) { func TestConfig_ReloadConfig(t *testing.T) {
l := test.NewLogger() l := test.NewLogger()
done := make(chan bool, 1) done := make(chan bool, 1)
dir, err := ioutil.TempDir("", "config-test") dir, err := os.MkdirTemp("", "config-test")
assert.Nil(t, err) assert.Nil(t, err)
ioutil.WriteFile(filepath.Join(dir, "01.yaml"), []byte("outer:\n inner: hi"), 0644) os.WriteFile(filepath.Join(dir, "01.yaml"), []byte("outer:\n inner: hi"), 0644)
c := NewC(l) c := NewC(l)
assert.Nil(t, c.Load(dir)) assert.Nil(t, c.Load(dir))
@@ -128,7 +130,7 @@ func TestConfig_ReloadConfig(t *testing.T) {
assert.False(t, c.HasChanged("outer")) assert.False(t, c.HasChanged("outer"))
assert.False(t, c.HasChanged("")) assert.False(t, c.HasChanged(""))
ioutil.WriteFile(filepath.Join(dir, "01.yaml"), []byte("outer:\n inner: ho"), 0644) os.WriteFile(filepath.Join(dir, "01.yaml"), []byte("outer:\n inner: ho"), 0644)
c.RegisterReloadCallback(func(c *C) { c.RegisterReloadCallback(func(c *C) {
done <- true done <- true
@@ -147,3 +149,77 @@ func TestConfig_ReloadConfig(t *testing.T) {
} }
} }
// Ensure mergo merges are done the way we expect.
// This is needed to test for potential regressions, like:
// - https://github.com/imdario/mergo/issues/187
func TestConfig_MergoMerge(t *testing.T) {
configs := [][]byte{
[]byte(`
listen:
port: 1234
`),
[]byte(`
firewall:
inbound:
- port: 443
proto: tcp
groups:
- server
- port: 443
proto: tcp
groups:
- webapp
`),
[]byte(`
listen:
host: 0.0.0.0
port: 4242
firewall:
outbound:
- port: any
proto: any
host: any
inbound:
- port: any
proto: icmp
host: any
`),
}
var m map[any]any
// merge the same way config.parse() merges
for _, b := range configs {
var nm map[any]any
err := yaml.Unmarshal(b, &nm)
require.NoError(t, err)
// We need to use WithAppendSlice so that firewall rules in separate
// files are appended together
err = mergo.Merge(&nm, m, mergo.WithAppendSlice)
m = nm
require.NoError(t, err)
}
t.Logf("Merged Config: %#v", m)
mYaml, err := yaml.Marshal(m)
require.NoError(t, err)
t.Logf("Merged Config as YAML:\n%s", mYaml)
// If a bug is present, some items might be replaced instead of merged like we expect
expected := map[any]any{
"firewall": map[any]any{
"inbound": []any{
map[any]any{"host": "any", "port": "any", "proto": "icmp"},
map[any]any{"groups": []any{"server"}, "port": 443, "proto": "tcp"},
map[any]any{"groups": []any{"webapp"}, "port": 443, "proto": "tcp"}},
"outbound": []any{
map[any]any{"host": "any", "port": "any", "proto": "any"}}},
"listen": map[any]any{
"host": "0.0.0.0",
"port": 4242,
},
}
assert.Equal(t, expected, m)
}

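The regression test above exists because firewall rules split across config files must be appended rather than replaced, which is what mergo's WithAppendSlice option provides (note the module path also moved from github.com/imdario/mergo to dario.cat/mergo). A minimal demonstration of the option on its own:

```go
package main

import (
	"fmt"

	"dario.cat/mergo"
)

func main() {
	dst := map[string]any{"inbound": []any{"icmp"}}
	src := map[string]any{"inbound": []any{"tcp/443"}}

	// WithAppendSlice appends src's slice onto dst's instead of keeping only one.
	if err := mergo.Merge(&dst, src, mergo.WithAppendSlice); err != nil {
		panic(err)
	}
	fmt.Println(dst["inbound"]) // both rules are present after the merge
}
```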

@@ -1,151 +1,153 @@
package nebula package nebula
import ( import (
"bytes"
"context" "context"
"encoding/binary"
"fmt"
"net/netip"
"sync" "sync"
"sync/atomic"
"time" "time"
"github.com/rcrowley/go-metrics"
"github.com/sirupsen/logrus" "github.com/sirupsen/logrus"
"github.com/slackhq/nebula/cert"
"github.com/slackhq/nebula/config"
"github.com/slackhq/nebula/header" "github.com/slackhq/nebula/header"
"github.com/slackhq/nebula/iputil"
) )
// TODO: incount and outcount are intended as a shortcut to locking the mutexes for every single packet type trafficDecision int
// and something like every 10 packets we could lock, send 10, then unlock for a moment
const (
doNothing trafficDecision = 0
deleteTunnel trafficDecision = 1 // delete the hostinfo on our side, do not notify the remote
closeTunnel trafficDecision = 2 // delete the hostinfo and notify the remote
swapPrimary trafficDecision = 3
migrateRelays trafficDecision = 4
tryRehandshake trafficDecision = 5
sendTestPacket trafficDecision = 6
)
type connectionManager struct { type connectionManager struct {
// relayUsed holds which relay localIndexs are in use
relayUsed map[uint32]struct{}
relayUsedLock *sync.RWMutex
hostMap *HostMap hostMap *HostMap
in map[iputil.VpnIp]struct{} trafficTimer *LockingTimerWheel[uint32]
inLock *sync.RWMutex
inCount int
out map[iputil.VpnIp]struct{}
outLock *sync.RWMutex
outCount int
TrafficTimer *SystemTimerWheel
intf *Interface intf *Interface
punchy *Punchy
pendingDeletion map[iputil.VpnIp]int // Configuration settings
pendingDeletionLock *sync.RWMutex checkInterval time.Duration
pendingDeletionTimer *SystemTimerWheel pendingDeletionInterval time.Duration
inactivityTimeout atomic.Int64
dropInactive atomic.Bool
checkInterval int metricsTxPunchy metrics.Counter
pendingDeletionInterval int
l *logrus.Logger l *logrus.Logger
// I wanted to call one matLock
} }
func newConnectionManager(ctx context.Context, l *logrus.Logger, intf *Interface, checkInterval, pendingDeletionInterval int) *connectionManager { func newConnectionManagerFromConfig(l *logrus.Logger, c *config.C, hm *HostMap, p *Punchy) *connectionManager {
nc := &connectionManager{ cm := &connectionManager{
hostMap: intf.hostMap, hostMap: hm,
in: make(map[iputil.VpnIp]struct{}),
inLock: &sync.RWMutex{},
inCount: 0,
out: make(map[iputil.VpnIp]struct{}),
outLock: &sync.RWMutex{},
outCount: 0,
TrafficTimer: NewSystemTimerWheel(time.Millisecond*500, time.Second*60),
intf: intf,
pendingDeletion: make(map[iputil.VpnIp]int),
pendingDeletionLock: &sync.RWMutex{},
pendingDeletionTimer: NewSystemTimerWheel(time.Millisecond*500, time.Second*60),
checkInterval: checkInterval,
pendingDeletionInterval: pendingDeletionInterval,
l: l, l: l,
punchy: p,
relayUsed: make(map[uint32]struct{}),
relayUsedLock: &sync.RWMutex{},
metricsTxPunchy: metrics.GetOrRegisterCounter("messages.tx.punchy", nil),
} }
nc.Start(ctx)
return nc cm.reload(c, true)
c.RegisterReloadCallback(func(c *config.C) {
cm.reload(c, false)
})
return cm
} }
func (n *connectionManager) In(ip iputil.VpnIp) { func (cm *connectionManager) reload(c *config.C, initial bool) {
n.inLock.RLock() if initial {
cm.checkInterval = time.Duration(c.GetInt("timers.connection_alive_interval", 5)) * time.Second
cm.pendingDeletionInterval = time.Duration(c.GetInt("timers.pending_deletion_interval", 10)) * time.Second
// We want at least a minimum resolution of 500ms per tick so that we can hit these intervals
// pretty close to their configured duration.
// The inactivity duration is checked each time a hostinfo ticks through so we don't need the wheel to contain it.
minDuration := min(time.Millisecond*500, cm.checkInterval, cm.pendingDeletionInterval)
maxDuration := max(cm.checkInterval, cm.pendingDeletionInterval)
cm.trafficTimer = NewLockingTimerWheel[uint32](minDuration, maxDuration)
}
if initial || c.HasChanged("tunnels.inactivity_timeout") {
old := cm.getInactivityTimeout()
cm.inactivityTimeout.Store((int64)(c.GetDuration("tunnels.inactivity_timeout", 10*time.Minute)))
if !initial {
cm.l.WithField("oldDuration", old).
WithField("newDuration", cm.getInactivityTimeout()).
Info("Inactivity timeout has changed")
}
}
if initial || c.HasChanged("tunnels.drop_inactive") {
old := cm.dropInactive.Load()
cm.dropInactive.Store(c.GetBool("tunnels.drop_inactive", false))
if !initial {
cm.l.WithField("oldBool", old).
WithField("newBool", cm.dropInactive.Load()).
Info("Drop inactive setting has changed")
}
}
}
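A rough sketch of how these knobs are fed in, mirroring the test setup later in this diff and written against this package's unexported API; the key names are the ones reload() reads above, the concrete values are only illustrative:

// Sketch only: building a connection manager with the reload-tunable settings populated.
func sketchConnectionManagerFromSettings(hm *HostMap) *connectionManager {
	l := test.NewLogger()
	conf := config.NewC(l)
	conf.Settings["timers"] = map[interface{}]interface{}{
		"connection_alive_interval": 5,  // seconds between traffic checks (default 5)
		"pending_deletion_interval": 10, // seconds to wait after a test packet (default 10)
	}
	conf.Settings["tunnels"] = map[interface{}]interface{}{
		"drop_inactive":      true,  // default false
		"inactivity_timeout": "10m", // default 10m; only consulted when drop_inactive is set
	}
	punchy := NewPunchyFromConfig(l, conf)
	return newConnectionManagerFromConfig(l, conf, hm, punchy)
}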
func (cm *connectionManager) getInactivityTimeout() time.Duration {
return (time.Duration)(cm.inactivityTimeout.Load())
}
func (cm *connectionManager) In(h *HostInfo) {
h.in.Store(true)
}
func (cm *connectionManager) Out(h *HostInfo) {
h.out.Store(true)
}
func (cm *connectionManager) RelayUsed(localIndex uint32) {
cm.relayUsedLock.RLock()
// If this already exists, return // If this already exists, return
if _, ok := n.in[ip]; ok { if _, ok := cm.relayUsed[localIndex]; ok {
n.inLock.RUnlock() cm.relayUsedLock.RUnlock()
return return
} }
n.inLock.RUnlock() cm.relayUsedLock.RUnlock()
n.inLock.Lock() cm.relayUsedLock.Lock()
n.in[ip] = struct{}{} cm.relayUsed[localIndex] = struct{}{}
n.inLock.Unlock() cm.relayUsedLock.Unlock()
} }
func (n *connectionManager) Out(ip iputil.VpnIp) { // getAndResetTrafficCheck returns if there was any inbound or outbound traffic within the last tick and
n.outLock.RLock() // resets the state for this local index
// If this already exists, return func (cm *connectionManager) getAndResetTrafficCheck(h *HostInfo, now time.Time) (bool, bool) {
if _, ok := n.out[ip]; ok { in := h.in.Swap(false)
n.outLock.RUnlock() out := h.out.Swap(false)
return if in || out {
h.lastUsed = now
} }
n.outLock.RUnlock() return in, out
n.outLock.Lock() }
// double check since we dropped the lock temporarily
if _, ok := n.out[ip]; ok { // AddTrafficWatch must be called for every new HostInfo.
n.outLock.Unlock() // We will continue to monitor the HostInfo until the tunnel is dropped.
return func (cm *connectionManager) AddTrafficWatch(h *HostInfo) {
if h.out.Swap(true) == false {
cm.trafficTimer.Add(h.localIndexId, cm.checkInterval)
} }
n.out[ip] = struct{}{}
n.AddTrafficWatch(ip, n.checkInterval)
n.outLock.Unlock()
} }
func (n *connectionManager) CheckIn(vpnIp iputil.VpnIp) bool { func (cm *connectionManager) Start(ctx context.Context) {
n.inLock.RLock() clockSource := time.NewTicker(cm.trafficTimer.t.tickDuration)
if _, ok := n.in[vpnIp]; ok {
n.inLock.RUnlock()
return true
}
n.inLock.RUnlock()
return false
}
func (n *connectionManager) ClearIP(ip iputil.VpnIp) {
n.inLock.Lock()
n.outLock.Lock()
delete(n.in, ip)
delete(n.out, ip)
n.inLock.Unlock()
n.outLock.Unlock()
}
func (n *connectionManager) ClearPendingDeletion(ip iputil.VpnIp) {
n.pendingDeletionLock.Lock()
delete(n.pendingDeletion, ip)
n.pendingDeletionLock.Unlock()
}
func (n *connectionManager) AddPendingDeletion(ip iputil.VpnIp) {
n.pendingDeletionLock.Lock()
if _, ok := n.pendingDeletion[ip]; ok {
n.pendingDeletion[ip] += 1
} else {
n.pendingDeletion[ip] = 0
}
n.pendingDeletionTimer.Add(ip, time.Second*time.Duration(n.pendingDeletionInterval))
n.pendingDeletionLock.Unlock()
}
func (n *connectionManager) checkPendingDeletion(ip iputil.VpnIp) bool {
n.pendingDeletionLock.RLock()
if _, ok := n.pendingDeletion[ip]; ok {
n.pendingDeletionLock.RUnlock()
return true
}
n.pendingDeletionLock.RUnlock()
return false
}
func (n *connectionManager) AddTrafficWatch(vpnIp iputil.VpnIp, seconds int) {
n.TrafficTimer.Add(vpnIp, time.Second*time.Duration(seconds))
}
func (n *connectionManager) Start(ctx context.Context) {
go n.Run(ctx)
}
func (n *connectionManager) Run(ctx context.Context) {
clockSource := time.NewTicker(500 * time.Millisecond)
defer clockSource.Stop() defer clockSource.Stop()
p := []byte("") p := []byte("")
@@ -156,160 +158,362 @@ func (n *connectionManager) Run(ctx context.Context) {
select { select {
case <-ctx.Done(): case <-ctx.Done():
return return
case now := <-clockSource.C: case now := <-clockSource.C:
n.HandleMonitorTick(now, p, nb, out) cm.trafficTimer.Advance(now)
n.HandleDeletionTick(now) for {
localIndex, has := cm.trafficTimer.Purge()
if !has {
break
}
cm.doTrafficCheck(localIndex, p, nb, out, now)
}
} }
} }
} }
func (n *connectionManager) HandleMonitorTick(now time.Time, p, nb, out []byte) { func (cm *connectionManager) doTrafficCheck(localIndex uint32, p, nb, out []byte, now time.Time) {
n.TrafficTimer.advance(now) decision, hostinfo, primary := cm.makeTrafficDecision(localIndex, now)
for {
ep := n.TrafficTimer.Purge() switch decision {
if ep == nil { case deleteTunnel:
break if cm.hostMap.DeleteHostInfo(hostinfo) {
// Only clearing the lighthouse cache if this is the last hostinfo for this vpn ip in the hostmap
cm.intf.lightHouse.DeleteVpnIp(hostinfo.vpnIp)
} }
vpnIp := ep.(iputil.VpnIp) case closeTunnel:
cm.intf.sendCloseTunnel(hostinfo)
cm.intf.closeTunnel(hostinfo)
// Check for traffic coming back in from this host. case swapPrimary:
traf := n.CheckIn(vpnIp) cm.swapPrimary(hostinfo, primary)
hostinfo, err := n.hostMap.QueryVpnIp(vpnIp) case migrateRelays:
cm.migrateRelayUsed(hostinfo, primary)
case tryRehandshake:
cm.tryRehandshake(hostinfo)
case sendTestPacket:
cm.intf.SendMessageToHostInfo(header.Test, header.TestRequest, hostinfo, p, nb, out)
}
cm.resetRelayTrafficCheck(hostinfo)
}
func (cm *connectionManager) resetRelayTrafficCheck(hostinfo *HostInfo) {
if hostinfo != nil {
cm.relayUsedLock.Lock()
defer cm.relayUsedLock.Unlock()
// No need to migrate any relays, delete usage info now.
for _, idx := range hostinfo.relayState.CopyRelayForIdxs() {
delete(cm.relayUsed, idx)
}
}
}
func (cm *connectionManager) migrateRelayUsed(oldhostinfo, newhostinfo *HostInfo) {
relayFor := oldhostinfo.relayState.CopyAllRelayFor()
for _, r := range relayFor {
existing, ok := newhostinfo.relayState.QueryRelayForByIp(r.PeerIp)
var index uint32
var relayFrom netip.Addr
var relayTo netip.Addr
switch {
case ok:
switch existing.State {
case Established, PeerRequested, Disestablished:
// This relay already exists in newhostinfo, so do nothing.
continue
case Requested:
// The relayed connection exists in a Requested state; re-send the request
index = existing.LocalIndex
switch r.Type {
case TerminalType:
relayFrom = cm.intf.myVpnNet.Addr()
relayTo = existing.PeerIp
case ForwardingType:
relayFrom = existing.PeerIp
relayTo = newhostinfo.vpnIp
default:
// should never happen
panic(fmt.Sprintf("Migrating unknown relay type: %v", r.Type))
}
}
case !ok:
cm.relayUsedLock.RLock()
if _, relayUsed := cm.relayUsed[r.LocalIndex]; !relayUsed {
// The relay hasn't been used; don't migrate it.
cm.relayUsedLock.RUnlock()
continue
}
cm.relayUsedLock.RUnlock()
// The relay doesn't exist at all; create some relay state and send the request.
var err error
index, err = AddRelay(cm.l, newhostinfo, cm.hostMap, r.PeerIp, nil, r.Type, Requested)
if err != nil { if err != nil {
n.l.Debugf("Not found in hostmap: %s", vpnIp) cm.l.WithError(err).Error("failed to migrate relay to new hostinfo")
if !n.intf.disconnectInvalid {
n.ClearIP(vpnIp)
n.ClearPendingDeletion(vpnIp)
continue continue
} }
switch r.Type {
case TerminalType:
relayFrom = cm.intf.myVpnNet.Addr()
relayTo = r.PeerIp
case ForwardingType:
relayFrom = r.PeerIp
relayTo = newhostinfo.vpnIp
default:
// should never happen
panic(fmt.Sprintf("Migrating unknown relay type: %v", r.Type))
}
} }
if n.handleInvalidCertificate(now, vpnIp, hostinfo) { //TODO: IPV6-WORK
continue relayFromB := relayFrom.As4()
relayToB := relayTo.As4()
// Send a CreateRelayRequest to the peer.
req := NebulaControl{
Type: NebulaControl_CreateRelayRequest,
InitiatorRelayIndex: index,
RelayFromIp: binary.BigEndian.Uint32(relayFromB[:]),
RelayToIp: binary.BigEndian.Uint32(relayToB[:]),
}
msg, err := req.Marshal()
if err != nil {
cm.l.WithError(err).Error("failed to marshal Control message to migrate relay")
} else {
cm.intf.SendMessageToHostInfo(header.Control, 0, newhostinfo, msg, make([]byte, 12), make([]byte, mtu))
cm.l.WithFields(logrus.Fields{
"relayFrom": req.RelayFromIp,
"relayTo": req.RelayToIp,
"initiatorRelayIndex": req.InitiatorRelayIndex,
"responderRelayIndex": req.ResponderRelayIndex,
"vpnIp": newhostinfo.vpnIp}).
Info("send CreateRelayRequest")
}
}
}
func (cm *connectionManager) makeTrafficDecision(localIndex uint32, now time.Time) (trafficDecision, *HostInfo, *HostInfo) {
// Read lock the main hostmap to order decisions based on tunnels being the primary tunnel
cm.hostMap.RLock()
defer cm.hostMap.RUnlock()
hostinfo := cm.hostMap.Indexes[localIndex]
if hostinfo == nil {
cm.l.WithField("localIndex", localIndex).Debugln("Not found in hostmap")
return doNothing, nil, nil
} }
// If we saw an incoming packets from this ip and peer's certificate is not if cm.isInvalidCertificate(now, hostinfo) {
// expired, just ignore. return closeTunnel, hostinfo, nil
if traf { }
if n.l.Level >= logrus.DebugLevel {
n.l.WithField("vpnIp", vpnIp). primary := cm.hostMap.Hosts[hostinfo.vpnIp]
mainHostInfo := true
if primary != nil && primary != hostinfo {
mainHostInfo = false
}
// Check for traffic on this hostinfo
inTraffic, outTraffic := cm.getAndResetTrafficCheck(hostinfo, now)
// A hostinfo is determined alive if there is incoming traffic
if inTraffic {
decision := doNothing
if cm.l.Level >= logrus.DebugLevel {
hostinfo.logger(cm.l).
WithField("tunnelCheck", m{"state": "alive", "method": "passive"}). WithField("tunnelCheck", m{"state": "alive", "method": "passive"}).
Debug("Tunnel status") Debug("Tunnel status")
} }
n.ClearIP(vpnIp) hostinfo.pendingDeletion.Store(false)
n.ClearPendingDeletion(vpnIp)
continue
}
hostinfo.logger(n.l). if mainHostInfo {
WithField("tunnelCheck", m{"state": "testing", "method": "active"}). decision = tryRehandshake
Debug("Tunnel status")
if hostinfo != nil && hostinfo.ConnectionState != nil {
// Send a test packet to trigger an authenticated tunnel test, this should suss out any lingering tunnel issues
n.intf.SendMessageToVpnIp(header.Test, header.TestRequest, vpnIp, p, nb, out)
} else { } else {
hostinfo.logger(n.l).Debugf("Hostinfo sadness: %s", vpnIp) if cm.shouldSwapPrimary(hostinfo, primary) {
} decision = swapPrimary
n.AddPendingDeletion(vpnIp) } else {
} // migrate the relays to the primary, if in use.
decision = migrateRelays
}
func (n *connectionManager) HandleDeletionTick(now time.Time) {
n.pendingDeletionTimer.advance(now)
for {
ep := n.pendingDeletionTimer.Purge()
if ep == nil {
break
}
vpnIp := ep.(iputil.VpnIp)
hostinfo, err := n.hostMap.QueryVpnIp(vpnIp)
if err != nil {
n.l.Debugf("Not found in hostmap: %s", vpnIp)
if !n.intf.disconnectInvalid {
n.ClearIP(vpnIp)
n.ClearPendingDeletion(vpnIp)
continue
} }
} }
if n.handleInvalidCertificate(now, vpnIp, hostinfo) { cm.trafficTimer.Add(hostinfo.localIndexId, cm.checkInterval)
continue
if !outTraffic {
// Send a punch packet to keep the NAT state alive
cm.sendPunch(hostinfo)
} }
// If we saw an incoming packets from this ip and peer's certificate is not return decision, hostinfo, primary
// expired, just ignore.
traf := n.CheckIn(vpnIp)
if traf {
n.l.WithField("vpnIp", vpnIp).
WithField("tunnelCheck", m{"state": "alive", "method": "active"}).
Debug("Tunnel status")
n.ClearIP(vpnIp)
n.ClearPendingDeletion(vpnIp)
continue
} }
// If it comes around on deletion wheel and hasn't resolved itself, delete if hostinfo.pendingDeletion.Load() {
if n.checkPendingDeletion(vpnIp) { // We have already sent a test packet and nothing was returned, this hostinfo is dead
cn := "" hostinfo.logger(cm.l).
if hostinfo.ConnectionState != nil && hostinfo.ConnectionState.peerCert != nil {
cn = hostinfo.ConnectionState.peerCert.Details.Name
}
hostinfo.logger(n.l).
WithField("tunnelCheck", m{"state": "dead", "method": "active"}). WithField("tunnelCheck", m{"state": "dead", "method": "active"}).
WithField("certName", cn).
Info("Tunnel status") Info("Tunnel status")
n.ClearIP(vpnIp) return deleteTunnel, hostinfo, nil
n.ClearPendingDeletion(vpnIp)
// TODO: This is only here to let tests work. Should do proper mocking
if n.intf.lightHouse != nil {
n.intf.lightHouse.DeleteVpnIp(vpnIp)
} }
n.hostMap.DeleteHostInfo(hostinfo)
decision := doNothing
if hostinfo != nil && hostinfo.ConnectionState != nil && mainHostInfo {
if !outTraffic {
inactiveFor, isInactive := cm.isInactive(hostinfo, now)
if isInactive {
// Tunnel is inactive, tear it down
hostinfo.logger(cm.l).
WithField("inactiveDuration", inactiveFor).
WithField("primary", mainHostInfo).
Info("Dropping tunnel due to inactivity")
return closeTunnel, hostinfo, primary
}
// If we aren't sending or receiving traffic then it's an unused tunnel and we don't need to test the tunnel.
// Just maintain NAT state if configured to do so.
cm.sendPunch(hostinfo)
cm.trafficTimer.Add(hostinfo.localIndexId, cm.checkInterval)
return doNothing, nil, nil
}
if cm.punchy.GetTargetEverything() {
// This is similar to the old punchy behavior with a slight optimization.
// We aren't receiving traffic but we are sending it, punch on all known
// ips in case we need to re-prime NAT state
cm.sendPunch(hostinfo)
}
if cm.l.Level >= logrus.DebugLevel {
hostinfo.logger(cm.l).
WithField("tunnelCheck", m{"state": "testing", "method": "active"}).
Debug("Tunnel status")
}
// Send a test packet to trigger an authenticated tunnel test, this should suss out any lingering tunnel issues
decision = sendTestPacket
} else { } else {
n.ClearIP(vpnIp) if cm.l.Level >= logrus.DebugLevel {
n.ClearPendingDeletion(vpnIp) hostinfo.logger(cm.l).Debugf("Hostinfo sadness")
} }
} }
hostinfo.pendingDeletion.Store(true)
cm.trafficTimer.Add(hostinfo.localIndexId, cm.pendingDeletionInterval)
return decision, hostinfo, nil
} }
// handleInvalidCertificates will destroy a tunnel if pki.disconnect_invalid is true and the certificate is no longer valid func (cm *connectionManager) isInactive(hostinfo *HostInfo, now time.Time) (time.Duration, bool) {
func (n *connectionManager) handleInvalidCertificate(now time.Time, vpnIp iputil.VpnIp, hostinfo *HostInfo) bool { if cm.dropInactive.Load() == false {
if !n.intf.disconnectInvalid { // We aren't configured to drop inactive tunnels
return 0, false
}
inactiveDuration := now.Sub(hostinfo.lastUsed)
if inactiveDuration < cm.getInactivityTimeout() {
// It's not considered inactive
return inactiveDuration, false
}
// The tunnel is inactive
return inactiveDuration, true
}
func (cm *connectionManager) shouldSwapPrimary(current, primary *HostInfo) bool {
// The primary tunnel is the most recent handshake to complete locally and should work entirely fine.
// If we are here then we have multiple tunnels for a host pair and neither side believes the same tunnel is primary.
// Let's sort this out.
if current.vpnIp.Compare(cm.intf.myVpnNet.Addr()) < 0 {
// Only one side should flip primary because if both flip then we may never resolve to a single tunnel.
// vpn ip is static across all tunnels for this host pair so let's use that to determine who is flipping.
// The remotes vpn ip is lower than mine. I will not flip.
return false return false
} }
certState := cm.intf.pki.GetCertState()
return bytes.Equal(current.ConnectionState.myCert.Signature, certState.Certificate.Signature)
}
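The tie-break only works because netip.Addr.Compare gives a total order, so for any distinct host pair exactly one side sees the remote vpn ip as lower and refuses to flip; a tiny illustration (addresses are arbitrary):

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	mine := netip.MustParseAddr("172.16.0.2")
	theirs := netip.MustParseAddr("172.16.0.9")
	// Compare returns -1, 0, or 1. Exactly one of the two peers observes a negative
	// result for the other's address, so only one side ever swaps its primary tunnel.
	fmt.Println(theirs.Compare(mine), mine.Compare(theirs)) // 1 -1
}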
func (cm *connectionManager) swapPrimary(current, primary *HostInfo) {
cm.hostMap.Lock()
// Make sure the primary is still the same after the write lock. This avoids a race with a rehandshake.
if cm.hostMap.Hosts[current.vpnIp] == primary {
cm.hostMap.unlockedMakePrimary(current)
}
cm.hostMap.Unlock()
}
// isInvalidCertificate will check if we should destroy a tunnel if pki.disconnect_invalid is true and
// the certificate is no longer valid. Block listed certificates will skip the pki.disconnect_invalid
// check and return true.
func (cm *connectionManager) isInvalidCertificate(now time.Time, hostinfo *HostInfo) bool {
remoteCert := hostinfo.GetCert() remoteCert := hostinfo.GetCert()
if remoteCert == nil { if remoteCert == nil {
return false return false
} }
valid, err := remoteCert.Verify(now, n.intf.caPool) valid, err := remoteCert.VerifyWithCache(now, cm.intf.pki.GetCAPool())
if valid { if valid {
return false return false
} }
if !cm.intf.disconnectInvalid.Load() && err != cert.ErrBlockListed {
// Block listed certificates should always be disconnected
return false
}
fingerprint, _ := remoteCert.Sha256Sum() fingerprint, _ := remoteCert.Sha256Sum()
n.l.WithField("vpnIp", vpnIp).WithError(err). hostinfo.logger(cm.l).WithError(err).
WithField("certName", remoteCert.Details.Name).
WithField("fingerprint", fingerprint). WithField("fingerprint", fingerprint).
Info("Remote certificate is no longer valid, tearing down the tunnel") Info("Remote certificate is no longer valid, tearing down the tunnel")
// Inform the remote and close the tunnel locally
n.intf.sendCloseTunnel(hostinfo)
n.intf.closeTunnel(hostinfo, false)
n.ClearIP(vpnIp)
n.ClearPendingDeletion(vpnIp)
return true return true
} }
func (cm *connectionManager) sendPunch(hostinfo *HostInfo) {
if !cm.punchy.GetPunch() {
// Punching is disabled
return
}
if cm.intf.lightHouse.IsLighthouseIP(hostinfo.vpnIp) {
// Do not punch to lighthouses, we assume our lighthouse update interval is good enough.
// In the event the update interval is not sufficient to maintain NAT state then a publicly available lighthouse
// would lose the ability to notify us and punchy.respond would become unreliable.
return
}
if cm.punchy.GetTargetEverything() {
hostinfo.remotes.ForEach(cm.hostMap.GetPreferredRanges(), func(addr netip.AddrPort, preferred bool) {
cm.metricsTxPunchy.Inc(1)
cm.intf.outside.WriteTo([]byte{1}, addr)
})
} else if hostinfo.remote.IsValid() {
cm.metricsTxPunchy.Inc(1)
cm.intf.outside.WriteTo([]byte{1}, hostinfo.remote)
}
}
func (cm *connectionManager) tryRehandshake(hostinfo *HostInfo) {
certState := cm.intf.pki.GetCertState()
if bytes.Equal(hostinfo.ConnectionState.myCert.Signature, certState.Certificate.Signature) {
return
}
cm.l.WithField("vpnIp", hostinfo.vpnIp).
WithField("reason", "local certificate is not current").
Info("Re-handshaking with remote")
cm.intf.handshakeManager.StartHandshake(hostinfo.vpnIp, nil)
}
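Pulling the pieces of this file together, the intended call order is roughly: build the manager from config, register every completed tunnel, then let Start drive the timer wheel. A hedged sketch of that wiring (the helper name and parameters are illustrative; the methods are the ones defined above):

// Sketch only: the lifecycle implied by this file, not a drop-in replacement for main.go.
func sketchConnectionManagerLifecycle(ctx context.Context, l *logrus.Logger, c *config.C,
	hm *HostMap, p *Punchy, intf *Interface, h *HostInfo) {

	cm := newConnectionManagerFromConfig(l, c, hm, p) // reads timers.* / tunnels.* and registers a reload callback
	cm.intf = intf                                    // the tests in this diff wire the interface the same way

	cm.AddTrafficWatch(h) // once per new HostInfo; schedules the first check after checkInterval

	go cm.Start(ctx) // advances the LockingTimerWheel and funnels expired indexes into doTrafficCheck
}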


@@ -1,162 +1,300 @@
package nebula package nebula
import ( import (
"context"
"crypto/ed25519" "crypto/ed25519"
"crypto/rand" "crypto/rand"
"net" "net"
"net/netip"
"testing" "testing"
"time" "time"
"github.com/flynn/noise" "github.com/flynn/noise"
"github.com/slackhq/nebula/cert" "github.com/slackhq/nebula/cert"
"github.com/slackhq/nebula/iputil" "github.com/slackhq/nebula/config"
"github.com/slackhq/nebula/test" "github.com/slackhq/nebula/test"
"github.com/slackhq/nebula/udp" "github.com/slackhq/nebula/udp"
"github.com/stretchr/testify/assert" "github.com/stretchr/testify/assert"
) )
var vpnIp iputil.VpnIp func newTestLighthouse() *LightHouse {
lh := &LightHouse{
l: test.NewLogger(),
addrMap: map[netip.Addr]*RemoteList{},
queryChan: make(chan netip.Addr, 10),
}
lighthouses := map[netip.Addr]struct{}{}
staticList := map[netip.Addr]struct{}{}
lh.lighthouses.Store(&lighthouses)
lh.staticList.Store(&staticList)
return lh
}
func Test_NewConnectionManagerTest(t *testing.T) { func Test_NewConnectionManagerTest(t *testing.T) {
l := test.NewLogger() l := test.NewLogger()
//_, tuncidr, _ := net.ParseCIDR("1.1.1.1/24") //_, tuncidr, _ := net.ParseCIDR("1.1.1.1/24")
_, vpncidr, _ := net.ParseCIDR("172.1.1.1/24") vpncidr := netip.MustParsePrefix("172.1.1.1/24")
_, localrange, _ := net.ParseCIDR("10.1.1.1/24") localrange := netip.MustParsePrefix("10.1.1.1/24")
vpnIp = iputil.Ip2VpnIp(net.ParseIP("172.1.1.2")) vpnIp := netip.MustParseAddr("172.1.1.2")
preferredRanges := []*net.IPNet{localrange} preferredRanges := []netip.Prefix{localrange}
// Very incomplete mock objects // Very incomplete mock objects
hostMap := NewHostMap(l, "test", vpncidr, preferredRanges) hostMap := newHostMap(l, vpncidr)
hostMap.preferredRanges.Store(&preferredRanges)
cs := &CertState{ cs := &CertState{
rawCertificate: []byte{}, RawCertificate: []byte{},
privateKey: []byte{}, PrivateKey: []byte{},
certificate: &cert.NebulaCertificate{}, Certificate: &cert.NebulaCertificate{},
rawCertificateNoKey: []byte{}, RawCertificateNoKey: []byte{},
} }
lh := NewLightHouse(l, false, &net.IPNet{IP: net.IP{0, 0, 0, 0}, Mask: net.IPMask{0, 0, 0, 0}}, []iputil.VpnIp{}, 1000, 0, &udp.Conn{}, false, 1, false) lh := newTestLighthouse()
ifce := &Interface{ ifce := &Interface{
hostMap: hostMap, hostMap: hostMap,
inside: &test.NoopTun{}, inside: &test.NoopTun{},
outside: &udp.Conn{}, outside: &udp.NoopConn{},
certState: cs,
firewall: &Firewall{}, firewall: &Firewall{},
lightHouse: lh, lightHouse: lh,
handshakeManager: NewHandshakeManager(l, vpncidr, preferredRanges, hostMap, lh, &udp.Conn{}, defaultHandshakeConfig), pki: &PKI{},
handshakeManager: NewHandshakeManager(l, hostMap, lh, &udp.NoopConn{}, defaultHandshakeConfig),
l: l, l: l,
} }
now := time.Now() ifce.pki.cs.Store(cs)
// Create manager // Create manager
ctx, cancel := context.WithCancel(context.Background()) conf := config.NewC(l)
defer cancel() punchy := NewPunchyFromConfig(l, conf)
nc := newConnectionManager(ctx, l, ifce, 5, 10) nc := newConnectionManagerFromConfig(l, conf, hostMap, punchy)
nc.intf = ifce
p := []byte("") p := []byte("")
nb := make([]byte, 12, 12) nb := make([]byte, 12, 12)
out := make([]byte, mtu) out := make([]byte, mtu)
nc.HandleMonitorTick(now, p, nb, out)
// Add an ip we have established a connection w/ to hostmap // Add an ip we have established a connection w/ to hostmap
hostinfo, _ := nc.hostMap.AddVpnIp(vpnIp, nil) hostinfo := &HostInfo{
vpnIp: vpnIp,
localIndexId: 1099,
remoteIndexId: 9901,
}
hostinfo.ConnectionState = &ConnectionState{ hostinfo.ConnectionState = &ConnectionState{
certState: cs, myCert: &cert.NebulaCertificate{},
H: &noise.HandshakeState{}, H: &noise.HandshakeState{},
} }
nc.hostMap.unlockedAddHostInfo(hostinfo, ifce)
// We saw traffic out to vpnIp // We saw traffic out to vpnIp
nc.Out(vpnIp) nc.Out(hostinfo)
assert.NotContains(t, nc.pendingDeletion, vpnIp) nc.In(hostinfo)
assert.Contains(t, nc.hostMap.Hosts, vpnIp) assert.False(t, hostinfo.pendingDeletion.Load())
// Move ahead 5s. Nothing should happen assert.Contains(t, nc.hostMap.Hosts, hostinfo.vpnIp)
next_tick := now.Add(5 * time.Second) assert.Contains(t, nc.hostMap.Indexes, hostinfo.localIndexId)
nc.HandleMonitorTick(next_tick, p, nb, out) assert.True(t, hostinfo.out.Load())
nc.HandleDeletionTick(next_tick) assert.True(t, hostinfo.in.Load())
// Move ahead 6s. We haven't heard back
next_tick = now.Add(6 * time.Second)
nc.HandleMonitorTick(next_tick, p, nb, out)
nc.HandleDeletionTick(next_tick)
// This host should now be up for deletion
assert.Contains(t, nc.pendingDeletion, vpnIp)
assert.Contains(t, nc.hostMap.Hosts, vpnIp)
// Move ahead some more
next_tick = now.Add(45 * time.Second)
nc.HandleMonitorTick(next_tick, p, nb, out)
nc.HandleDeletionTick(next_tick)
// The host should be evicted
assert.NotContains(t, nc.pendingDeletion, vpnIp)
assert.NotContains(t, nc.hostMap.Hosts, vpnIp)
// Do a traffic check tick, should not be pending deletion but should not have any in/out packets recorded
nc.doTrafficCheck(hostinfo.localIndexId, p, nb, out, time.Now())
assert.False(t, hostinfo.pendingDeletion.Load())
assert.False(t, hostinfo.out.Load())
assert.False(t, hostinfo.in.Load())
// Do another traffic check tick, this host should be pending deletion now
nc.Out(hostinfo)
assert.True(t, hostinfo.out.Load())
nc.doTrafficCheck(hostinfo.localIndexId, p, nb, out, time.Now())
assert.True(t, hostinfo.pendingDeletion.Load())
assert.False(t, hostinfo.out.Load())
assert.False(t, hostinfo.in.Load())
assert.Contains(t, nc.hostMap.Indexes, hostinfo.localIndexId)
assert.Contains(t, nc.hostMap.Hosts, hostinfo.vpnIp)
// Do a final traffic check tick, the host should now be removed
nc.doTrafficCheck(hostinfo.localIndexId, p, nb, out, time.Now())
assert.NotContains(t, nc.hostMap.Hosts, hostinfo.vpnIp)
assert.NotContains(t, nc.hostMap.Indexes, hostinfo.localIndexId)
} }
func Test_NewConnectionManagerTest2(t *testing.T) { func Test_NewConnectionManagerTest2(t *testing.T) {
l := test.NewLogger() l := test.NewLogger()
//_, tuncidr, _ := net.ParseCIDR("1.1.1.1/24") //_, tuncidr, _ := net.ParseCIDR("1.1.1.1/24")
_, vpncidr, _ := net.ParseCIDR("172.1.1.1/24") vpncidr := netip.MustParsePrefix("172.1.1.1/24")
_, localrange, _ := net.ParseCIDR("10.1.1.1/24") localrange := netip.MustParsePrefix("10.1.1.1/24")
preferredRanges := []*net.IPNet{localrange} vpnIp := netip.MustParseAddr("172.1.1.2")
preferredRanges := []netip.Prefix{localrange}
// Very incomplete mock objects // Very incomplete mock objects
hostMap := NewHostMap(l, "test", vpncidr, preferredRanges) hostMap := newHostMap(l, vpncidr)
hostMap.preferredRanges.Store(&preferredRanges)
cs := &CertState{ cs := &CertState{
rawCertificate: []byte{}, RawCertificate: []byte{},
privateKey: []byte{}, PrivateKey: []byte{},
certificate: &cert.NebulaCertificate{}, Certificate: &cert.NebulaCertificate{},
rawCertificateNoKey: []byte{}, RawCertificateNoKey: []byte{},
} }
lh := NewLightHouse(l, false, &net.IPNet{IP: net.IP{0, 0, 0, 0}, Mask: net.IPMask{0, 0, 0, 0}}, []iputil.VpnIp{}, 1000, 0, &udp.Conn{}, false, 1, false) lh := newTestLighthouse()
ifce := &Interface{ ifce := &Interface{
hostMap: hostMap, hostMap: hostMap,
inside: &test.NoopTun{}, inside: &test.NoopTun{},
outside: &udp.Conn{}, outside: &udp.NoopConn{},
certState: cs,
firewall: &Firewall{}, firewall: &Firewall{},
lightHouse: lh, lightHouse: lh,
handshakeManager: NewHandshakeManager(l, vpncidr, preferredRanges, hostMap, lh, &udp.Conn{}, defaultHandshakeConfig), pki: &PKI{},
handshakeManager: NewHandshakeManager(l, hostMap, lh, &udp.NoopConn{}, defaultHandshakeConfig),
l: l, l: l,
} }
now := time.Now() ifce.pki.cs.Store(cs)
// Create manager // Create manager
ctx, cancel := context.WithCancel(context.Background()) conf := config.NewC(l)
defer cancel() punchy := NewPunchyFromConfig(l, conf)
nc := newConnectionManager(ctx, l, ifce, 5, 10) nc := newConnectionManagerFromConfig(l, conf, hostMap, punchy)
nc.intf = ifce
p := []byte("") p := []byte("")
nb := make([]byte, 12, 12) nb := make([]byte, 12, 12)
out := make([]byte, mtu) out := make([]byte, mtu)
nc.HandleMonitorTick(now, p, nb, out)
// Add an ip we have established a connection w/ to hostmap // Add an ip we have established a connection w/ to hostmap
hostinfo, _ := nc.hostMap.AddVpnIp(vpnIp, nil) hostinfo := &HostInfo{
vpnIp: vpnIp,
localIndexId: 1099,
remoteIndexId: 9901,
}
hostinfo.ConnectionState = &ConnectionState{ hostinfo.ConnectionState = &ConnectionState{
certState: cs, myCert: &cert.NebulaCertificate{},
H: &noise.HandshakeState{}, H: &noise.HandshakeState{},
} }
nc.hostMap.unlockedAddHostInfo(hostinfo, ifce)
// We saw traffic out to vpnIp // We saw traffic out to vpnIp
nc.Out(vpnIp) nc.Out(hostinfo)
assert.NotContains(t, nc.pendingDeletion, vpnIp) nc.In(hostinfo)
assert.Contains(t, nc.hostMap.Hosts, vpnIp) assert.True(t, hostinfo.in.Load())
// Move ahead 5s. Nothing should happen assert.True(t, hostinfo.out.Load())
next_tick := now.Add(5 * time.Second) assert.False(t, hostinfo.pendingDeletion.Load())
nc.HandleMonitorTick(next_tick, p, nb, out) assert.Contains(t, nc.hostMap.Hosts, hostinfo.vpnIp)
nc.HandleDeletionTick(next_tick) assert.Contains(t, nc.hostMap.Indexes, hostinfo.localIndexId)
// Move ahead 6s. We haven't heard back
next_tick = now.Add(6 * time.Second)
nc.HandleMonitorTick(next_tick, p, nb, out)
nc.HandleDeletionTick(next_tick)
// This host should now be up for deletion
assert.Contains(t, nc.pendingDeletion, vpnIp)
assert.Contains(t, nc.hostMap.Hosts, vpnIp)
// We heard back this time
nc.In(vpnIp)
// Move ahead some more
next_tick = now.Add(45 * time.Second)
nc.HandleMonitorTick(next_tick, p, nb, out)
nc.HandleDeletionTick(next_tick)
// The host should be evicted
assert.NotContains(t, nc.pendingDeletion, vpnIp)
assert.Contains(t, nc.hostMap.Hosts, vpnIp)
// Do a traffic check tick, should not be pending deletion but should not have any in/out packets recorded
nc.doTrafficCheck(hostinfo.localIndexId, p, nb, out, time.Now())
assert.False(t, hostinfo.pendingDeletion.Load())
assert.False(t, hostinfo.out.Load())
assert.False(t, hostinfo.in.Load())
// Do another traffic check tick, this host should be pending deletion now
nc.Out(hostinfo)
nc.doTrafficCheck(hostinfo.localIndexId, p, nb, out, time.Now())
assert.True(t, hostinfo.pendingDeletion.Load())
assert.False(t, hostinfo.out.Load())
assert.False(t, hostinfo.in.Load())
assert.Contains(t, nc.hostMap.Indexes, hostinfo.localIndexId)
assert.Contains(t, nc.hostMap.Hosts, hostinfo.vpnIp)
// We saw traffic, should no longer be pending deletion
nc.In(hostinfo)
nc.doTrafficCheck(hostinfo.localIndexId, p, nb, out, time.Now())
assert.False(t, hostinfo.pendingDeletion.Load())
assert.False(t, hostinfo.out.Load())
assert.False(t, hostinfo.in.Load())
assert.Contains(t, nc.hostMap.Indexes, hostinfo.localIndexId)
assert.Contains(t, nc.hostMap.Hosts, hostinfo.vpnIp)
}
func Test_NewConnectionManager_DisconnectInactive(t *testing.T) {
l := test.NewLogger()
vpncidr := netip.MustParsePrefix("172.1.1.1/24")
localrange := netip.MustParsePrefix("10.1.1.1/24")
vpnIp := netip.MustParseAddr("172.1.1.2")
preferredRanges := []netip.Prefix{localrange}
// Very incomplete mock objects
hostMap := newHostMap(l, vpncidr)
hostMap.preferredRanges.Store(&preferredRanges)
cs := &CertState{
RawCertificate: []byte{},
PrivateKey: []byte{},
Certificate: &cert.NebulaCertificate{},
RawCertificateNoKey: []byte{},
}
lh := newTestLighthouse()
ifce := &Interface{
hostMap: hostMap,
inside: &test.NoopTun{},
outside: &udp.NoopConn{},
firewall: &Firewall{},
lightHouse: lh,
pki: &PKI{},
handshakeManager: NewHandshakeManager(l, hostMap, lh, &udp.NoopConn{}, defaultHandshakeConfig),
l: l,
}
ifce.pki.cs.Store(cs)
// Create manager
conf := config.NewC(l)
conf.Settings["tunnels"] = map[interface{}]interface{}{
"drop_inactive": true,
}
punchy := NewPunchyFromConfig(l, conf)
nc := newConnectionManagerFromConfig(l, conf, hostMap, punchy)
assert.True(t, nc.dropInactive.Load())
nc.intf = ifce
// Add an ip we have established a connection w/ to hostmap
hostinfo := &HostInfo{
vpnIp: vpnIp,
localIndexId: 1099,
remoteIndexId: 9901,
}
hostinfo.ConnectionState = &ConnectionState{
myCert: &cert.NebulaCertificate{},
H: &noise.HandshakeState{},
}
nc.hostMap.unlockedAddHostInfo(hostinfo, ifce)
// Do a traffic check tick, in and out should be cleared but should not be pending deletion
nc.Out(hostinfo)
nc.In(hostinfo)
assert.True(t, hostinfo.out.Load())
assert.True(t, hostinfo.in.Load())
now := time.Now()
decision, _, _ := nc.makeTrafficDecision(hostinfo.localIndexId, now)
assert.Equal(t, tryRehandshake, decision)
assert.Equal(t, now, hostinfo.lastUsed)
assert.False(t, hostinfo.pendingDeletion.Load())
assert.False(t, hostinfo.out.Load())
assert.False(t, hostinfo.in.Load())
decision, _, _ = nc.makeTrafficDecision(hostinfo.localIndexId, now.Add(time.Second*5))
assert.Equal(t, doNothing, decision)
assert.Equal(t, now, hostinfo.lastUsed)
assert.False(t, hostinfo.pendingDeletion.Load())
assert.False(t, hostinfo.out.Load())
assert.False(t, hostinfo.in.Load())
// Do another traffic check tick, should still not be pending deletion
decision, _, _ = nc.makeTrafficDecision(hostinfo.localIndexId, now.Add(time.Second*10))
assert.Equal(t, doNothing, decision)
assert.Equal(t, now, hostinfo.lastUsed)
assert.False(t, hostinfo.pendingDeletion.Load())
assert.False(t, hostinfo.out.Load())
assert.False(t, hostinfo.in.Load())
assert.Contains(t, nc.hostMap.Indexes, hostinfo.localIndexId)
assert.Contains(t, nc.hostMap.Hosts, hostinfo.vpnIp)
// Finally advance beyond the inactivity timeout
decision, _, _ = nc.makeTrafficDecision(hostinfo.localIndexId, now.Add(time.Minute*10))
assert.Equal(t, closeTunnel, decision)
assert.Equal(t, now, hostinfo.lastUsed)
assert.False(t, hostinfo.pendingDeletion.Load())
assert.False(t, hostinfo.out.Load())
assert.False(t, hostinfo.in.Load())
assert.Contains(t, nc.hostMap.Indexes, hostinfo.localIndexId)
assert.Contains(t, nc.hostMap.Hosts, hostinfo.vpnIp)
} }
// Check if we can disconnect the peer. // Check if we can disconnect the peer.
@@ -169,10 +307,12 @@ func Test_NewConnectionManagerTest_DisconnectInvalid(t *testing.T) {
IP: net.IPv4(172, 1, 1, 2), IP: net.IPv4(172, 1, 1, 2),
Mask: net.IPMask{255, 255, 255, 0}, Mask: net.IPMask{255, 255, 255, 0},
} }
_, vpncidr, _ := net.ParseCIDR("172.1.1.1/24") vpncidr := netip.MustParsePrefix("172.1.1.1/24")
_, localrange, _ := net.ParseCIDR("10.1.1.1/24") localrange := netip.MustParsePrefix("10.1.1.1/24")
preferredRanges := []*net.IPNet{localrange} vpnIp := netip.MustParseAddr("172.1.1.2")
hostMap := NewHostMap(l, "test", vpncidr, preferredRanges) preferredRanges := []netip.Prefix{localrange}
hostMap := newHostMap(l, vpncidr)
hostMap.preferredRanges.Store(&preferredRanges)
// Generate keys for CA and peer's cert. // Generate keys for CA and peer's cert.
pubCA, privCA, _ := ed25519.GenerateKey(rand.Reader) pubCA, privCA, _ := ed25519.GenerateKey(rand.Reader)
@@ -185,7 +325,8 @@ func Test_NewConnectionManagerTest_DisconnectInvalid(t *testing.T) {
PublicKey: pubCA, PublicKey: pubCA,
}, },
} }
caCert.Sign(privCA)
assert.NoError(t, caCert.Sign(cert.Curve_CURVE25519, privCA))
ncp := &cert.NebulaCAPool{ ncp := &cert.NebulaCAPool{
CAs: cert.NewCAPool().CAs, CAs: cert.NewCAPool().CAs,
} }
@@ -204,52 +345,58 @@ func Test_NewConnectionManagerTest_DisconnectInvalid(t *testing.T) {
Issuer: "ca", Issuer: "ca",
}, },
} }
peerCert.Sign(privCA) assert.NoError(t, peerCert.Sign(cert.Curve_CURVE25519, privCA))
cs := &CertState{ cs := &CertState{
rawCertificate: []byte{}, RawCertificate: []byte{},
privateKey: []byte{}, PrivateKey: []byte{},
certificate: &cert.NebulaCertificate{}, Certificate: &cert.NebulaCertificate{},
rawCertificateNoKey: []byte{}, RawCertificateNoKey: []byte{},
} }
lh := NewLightHouse(l, false, &net.IPNet{IP: net.IP{0, 0, 0, 0}, Mask: net.IPMask{0, 0, 0, 0}}, []iputil.VpnIp{}, 1000, 0, &udp.Conn{}, false, 1, false) lh := newTestLighthouse()
ifce := &Interface{ ifce := &Interface{
hostMap: hostMap, hostMap: hostMap,
inside: &test.NoopTun{}, inside: &test.NoopTun{},
outside: &udp.Conn{}, outside: &udp.NoopConn{},
certState: cs,
firewall: &Firewall{}, firewall: &Firewall{},
lightHouse: lh, lightHouse: lh,
handshakeManager: NewHandshakeManager(l, vpncidr, preferredRanges, hostMap, lh, &udp.Conn{}, defaultHandshakeConfig), handshakeManager: NewHandshakeManager(l, hostMap, lh, &udp.NoopConn{}, defaultHandshakeConfig),
l: l, l: l,
disconnectInvalid: true, pki: &PKI{},
caPool: ncp,
} }
ifce.pki.cs.Store(cs)
ifce.pki.caPool.Store(ncp)
ifce.disconnectInvalid.Store(true)
// Create manager // Create manager
ctx, cancel := context.WithCancel(context.Background()) conf := config.NewC(l)
defer cancel() punchy := NewPunchyFromConfig(l, conf)
nc := newConnectionManager(ctx, l, ifce, 5, 10) nc := newConnectionManagerFromConfig(l, conf, hostMap, punchy)
nc.intf = ifce
ifce.connectionManager = nc ifce.connectionManager = nc
hostinfo, _ := nc.hostMap.AddVpnIp(vpnIp, nil)
hostinfo.ConnectionState = &ConnectionState{ hostinfo := &HostInfo{
certState: cs, vpnIp: vpnIp,
ConnectionState: &ConnectionState{
myCert: &cert.NebulaCertificate{},
peerCert: &peerCert, peerCert: &peerCert,
H: &noise.HandshakeState{}, H: &noise.HandshakeState{},
},
} }
nc.hostMap.unlockedAddHostInfo(hostinfo, ifce)
// Move ahead 45s. // Move ahead 45s.
// Check if to disconnect with invalid certificate. // Check if to disconnect with invalid certificate.
// Should be alive. // Should be alive.
nextTick := now.Add(45 * time.Second) nextTick := now.Add(45 * time.Second)
destroyed := nc.handleInvalidCertificate(nextTick, vpnIp, hostinfo) invalid := nc.isInvalidCertificate(nextTick, hostinfo)
assert.False(t, destroyed) assert.False(t, invalid)
// Move ahead 61s. // Move ahead 61s.
// Check if to disconnect with invalid certificate. // Check if to disconnect with invalid certificate.
// Should be disconnected. // Should be disconnected.
nextTick = now.Add(61 * time.Second) nextTick = now.Add(61 * time.Second)
destroyed = nc.handleInvalidCertificate(nextTick, vpnIp, hostinfo) invalid = nc.isInvalidCertificate(nextTick, hostinfo)
assert.True(t, destroyed) assert.True(t, invalid)
} }


@@ -9,6 +9,7 @@ import (
"github.com/flynn/noise" "github.com/flynn/noise"
"github.com/sirupsen/logrus" "github.com/sirupsen/logrus"
"github.com/slackhq/nebula/cert" "github.com/slackhq/nebula/cert"
"github.com/slackhq/nebula/noiseutil"
) )
const ReplayWindow = 1024 const ReplayWindow = 1024
@@ -17,24 +18,34 @@ type ConnectionState struct {
eKey *NebulaCipherState eKey *NebulaCipherState
dKey *NebulaCipherState dKey *NebulaCipherState
H *noise.HandshakeState H *noise.HandshakeState
certState *CertState myCert *cert.NebulaCertificate
peerCert *cert.NebulaCertificate peerCert *cert.NebulaCertificate
initiator bool initiator bool
atomicMessageCounter uint64 messageCounter atomic.Uint64
window *Bits window *Bits
queueLock sync.Mutex
writeLock sync.Mutex writeLock sync.Mutex
ready bool
} }
func (f *Interface) newConnectionState(l *logrus.Logger, initiator bool, pattern noise.HandshakePattern, psk []byte, pskStage int) *ConnectionState { func NewConnectionState(l *logrus.Logger, cipher string, certState *CertState, initiator bool, pattern noise.HandshakePattern, psk []byte, pskStage int) *ConnectionState {
cs := noise.NewCipherSuite(noise.DH25519, noise.CipherAESGCM, noise.HashSHA256) var dhFunc noise.DHFunc
if f.cipher == "chachapoly" { switch certState.Certificate.Details.Curve {
cs = noise.NewCipherSuite(noise.DH25519, noise.CipherChaChaPoly, noise.HashSHA256) case cert.Curve_CURVE25519:
dhFunc = noise.DH25519
case cert.Curve_P256:
dhFunc = noiseutil.DHP256
default:
l.Errorf("invalid curve: %s", certState.Certificate.Details.Curve)
return nil
} }
curCertState := f.certState var cs noise.CipherSuite
static := noise.DHKey{Private: curCertState.privateKey, Public: curCertState.publicKey} if cipher == "chachapoly" {
cs = noise.NewCipherSuite(dhFunc, noise.CipherChaChaPoly, noise.HashSHA256)
} else {
cs = noise.NewCipherSuite(dhFunc, noiseutil.CipherAESGCM, noise.HashSHA256)
}
static := noise.DHKey{Private: certState.PrivateKey, Public: certState.PublicKey}
b := NewBits(ReplayWindow) b := NewBits(ReplayWindow)
// Clear out bit 0, we never transmit it and we don't want it showing as packet loss // Clear out bit 0, we never transmit it and we don't want it showing as packet loss
@@ -59,9 +70,10 @@ func (f *Interface) newConnectionState(l *logrus.Logger, initiator bool, pattern
H: hs, H: hs,
initiator: initiator, initiator: initiator,
window: b, window: b,
ready: false, myCert: certState.Certificate,
certState: curCertState,
} }
// always start the counter from 2, as packet 1 and packet 2 are handshake packets.
ci.messageCounter.Add(2)
return ci return ci
} }
@@ -70,7 +82,6 @@ func (cs *ConnectionState) MarshalJSON() ([]byte, error) {
return json.Marshal(m{ return json.Marshal(m{
"certificate": cs.peerCert, "certificate": cs.peerCert,
"initiator": cs.initiator, "initiator": cs.initiator,
"message_counter": atomic.LoadUint64(&cs.atomicMessageCounter), "message_counter": cs.messageCounter.Load(),
"ready": cs.ready,
}) })
} }
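Curve selection now happens inside the constructor rather than on the Interface; a hedged sketch of calling the new signature from inside the package (psk and pskStage are placeholders, the cipher string and the nil-on-unknown-curve behavior come from the code above):

// Sketch only: using the standalone constructor that replaced Interface.newConnectionState.
func sketchNewConnectionState(l *logrus.Logger, f *Interface, psk []byte, pskStage int) *ConnectionState {
	cs := f.pki.GetCertState() // the cert state previously read from f.certState
	ci := NewConnectionState(l, "chachapoly", cs, true, noise.HandshakeIX, psk, pskStage)
	if ci == nil {
		// nil means the certificate's curve was neither CURVE25519 nor P256
		return nil
	}
	// The counter already sits at 2: packets 1 and 2 are the handshake itself.
	_ = ci.messageCounter.Load()
	return ci
}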


@@ -2,40 +2,51 @@ package nebula
import ( import (
"context" "context"
"net" "net/netip"
"os" "os"
"os/signal" "os/signal"
"sync/atomic"
"syscall" "syscall"
"github.com/sirupsen/logrus" "github.com/sirupsen/logrus"
"github.com/slackhq/nebula/cert" "github.com/slackhq/nebula/cert"
"github.com/slackhq/nebula/header" "github.com/slackhq/nebula/header"
"github.com/slackhq/nebula/iputil" "github.com/slackhq/nebula/overlay"
"github.com/slackhq/nebula/udp"
) )
// Every interaction here needs to take extra care to copy memory and not return or use arguments "as is" when touching // Every interaction here needs to take extra care to copy memory and not return or use arguments "as is" when touching
// core. This means copying IP objects, slices, de-referencing pointers and taking the actual value, etc // core. This means copying IP objects, slices, de-referencing pointers and taking the actual value, etc
type controlEach func(h *HostInfo)
type controlHostLister interface {
QueryVpnIp(vpnIp netip.Addr) *HostInfo
ForEachIndex(each controlEach)
ForEachVpnIp(each controlEach)
GetPreferredRanges() []netip.Prefix
}
type Control struct { type Control struct {
f *Interface f *Interface
l *logrus.Logger l *logrus.Logger
ctx context.Context
cancel context.CancelFunc cancel context.CancelFunc
sshStart func() sshStart func()
statsStart func() statsStart func()
dnsStart func() dnsStart func()
lighthouseStart func()
connectionManagerStart func(context.Context)
} }
type ControlHostInfo struct { type ControlHostInfo struct {
VpnIp net.IP `json:"vpnIp"` VpnIp netip.Addr `json:"vpnIp"`
LocalIndex uint32 `json:"localIndex"` LocalIndex uint32 `json:"localIndex"`
RemoteIndex uint32 `json:"remoteIndex"` RemoteIndex uint32 `json:"remoteIndex"`
RemoteAddrs []*udp.Addr `json:"remoteAddrs"` RemoteAddrs []netip.AddrPort `json:"remoteAddrs"`
CachedPackets int `json:"cachedPackets"`
Cert *cert.NebulaCertificate `json:"cert"` Cert *cert.NebulaCertificate `json:"cert"`
MessageCounter uint64 `json:"messageCounter"` MessageCounter uint64 `json:"messageCounter"`
CurrentRemote *udp.Addr `json:"currentRemote"` CurrentRemote netip.AddrPort `json:"currentRemote"`
CurrentRelaysToMe []netip.Addr `json:"currentRelaysToMe"`
CurrentRelaysThroughMe []netip.Addr `json:"currentRelaysThroughMe"`
} }
// Start actually runs nebula, this is a nonblocking call. To block use Control.ShutdownBlock() // Start actually runs nebula, this is a nonblocking call. To block use Control.ShutdownBlock()
@@ -53,25 +64,37 @@ func (c *Control) Start() {
if c.dnsStart != nil { if c.dnsStart != nil {
go c.dnsStart() go c.dnsStart()
} }
if c.connectionManagerStart != nil {
go c.connectionManagerStart(c.ctx)
}
if c.lighthouseStart != nil {
c.lighthouseStart()
}
// Start reading packets. // Start reading packets.
c.f.run() c.f.run()
} }
// Stop signals nebula to shutdown, returns after the shutdown is complete func (c *Control) Context() context.Context {
return c.ctx
}
// Stop signals nebula to shutdown and close all tunnels, returns after the shutdown is complete
func (c *Control) Stop() { func (c *Control) Stop() {
//TODO: stop tun and udp routines, the lock on hostMap effectively does that though // Stop the handshakeManager (and other services), to prevent new tunnels from
// being created while we're shutting them all down.
c.cancel()
c.CloseAllTunnels(false) c.CloseAllTunnels(false)
if err := c.f.Close(); err != nil { if err := c.f.Close(); err != nil {
c.l.WithError(err).Error("Close interface failed") c.l.WithError(err).Error("Close interface failed")
} }
c.cancel()
c.l.Info("Goodbye") c.l.Info("Goodbye")
} }
// ShutdownBlock will listen for and block on term and interrupt signals, calling Control.Stop() once signalled // ShutdownBlock will listen for and block on term and interrupt signals, calling Control.Stop() once signalled
func (c *Control) ShutdownBlock() { func (c *Control) ShutdownBlock() {
sigChan := make(chan os.Signal) sigChan := make(chan os.Signal, 1)
signal.Notify(sigChan, syscall.SIGTERM) signal.Notify(sigChan, syscall.SIGTERM)
signal.Notify(sigChan, syscall.SIGINT) signal.Notify(sigChan, syscall.SIGINT)
@@ -86,55 +109,103 @@ func (c *Control) RebindUDPServer() {
_ = c.f.outside.Rebind() _ = c.f.outside.Rebind()
// Trigger a lighthouse update, useful for mobile clients that should have an update interval of 0 // Trigger a lighthouse update, useful for mobile clients that should have an update interval of 0
c.f.lightHouse.SendUpdate(c.f) c.f.lightHouse.SendUpdate()
// Let the main interface know that we rebound so that underlying tunnels know to trigger punches from their remotes // Let the main interface know that we rebound so that underlying tunnels know to trigger punches from their remotes
c.f.rebindCount++ c.f.rebindCount++
} }
// ListHostmap returns details about the actual or pending (handshaking) hostmap // ListHostmapHosts returns details about the actual or pending (handshaking) hostmap by vpn ip
func (c *Control) ListHostmap(pendingMap bool) []ControlHostInfo { func (c *Control) ListHostmapHosts(pendingMap bool) []ControlHostInfo {
if pendingMap { if pendingMap {
return listHostMap(c.f.handshakeManager.pendingHostMap) return listHostMapHosts(c.f.handshakeManager)
} else { } else {
return listHostMap(c.f.hostMap) return listHostMapHosts(c.f.hostMap)
} }
} }
// GetHostInfoByVpnIp returns a single tunnels hostInfo, or nil if not found // ListHostmapIndexes returns details about the actual or pending (handshaking) hostmap by local index id
func (c *Control) GetHostInfoByVpnIp(vpnIp iputil.VpnIp, pending bool) *ControlHostInfo { func (c *Control) ListHostmapIndexes(pendingMap bool) []ControlHostInfo {
var hm *HostMap if pendingMap {
if pending { return listHostMapIndexes(c.f.handshakeManager)
hm = c.f.handshakeManager.pendingHostMap
} else { } else {
hm = c.f.hostMap return listHostMapIndexes(c.f.hostMap)
}
}
// GetCertByVpnIp returns the authenticated certificate of the given vpn IP, or nil if not found
func (c *Control) GetCertByVpnIp(vpnIp netip.Addr) *cert.NebulaCertificate {
if c.f.myVpnNet.Addr() == vpnIp {
return c.f.pki.GetCertState().Certificate
}
hi := c.f.hostMap.QueryVpnIp(vpnIp)
if hi == nil {
return nil
}
return hi.GetCert()
}
// CreateTunnel creates a new tunnel to the given vpn ip.
func (c *Control) CreateTunnel(vpnIp netip.Addr) {
c.f.handshakeManager.StartHandshake(vpnIp, nil)
}
// PrintTunnel creates a new tunnel to the given vpn ip.
func (c *Control) PrintTunnel(vpnIp netip.Addr) *ControlHostInfo {
hi := c.f.hostMap.QueryVpnIp(vpnIp)
if hi == nil {
return nil
}
chi := copyHostInfo(hi, c.f.hostMap.GetPreferredRanges())
return &chi
}
// QueryLighthouse queries the lighthouse.
func (c *Control) QueryLighthouse(vpnIp netip.Addr) *CacheMap {
hi := c.f.lightHouse.Query(vpnIp)
if hi == nil {
return nil
}
return hi.CopyCache()
}
// GetHostInfoByVpnIp returns a single tunnels hostInfo, or nil if not found
// Caller should take care to Unmap() any 4in6 addresses prior to calling.
func (c *Control) GetHostInfoByVpnIp(vpnIp netip.Addr, pending bool) *ControlHostInfo {
var hl controlHostLister
if pending {
hl = c.f.handshakeManager
} else {
hl = c.f.hostMap
} }
h, err := hm.QueryVpnIp(vpnIp) h := hl.QueryVpnIp(vpnIp)
if err != nil { if h == nil {
return nil return nil
} }
ch := copyHostInfo(h, c.f.hostMap.preferredRanges) ch := copyHostInfo(h, c.f.hostMap.GetPreferredRanges())
return &ch return &ch
} }
// SetRemoteForTunnel forces a tunnel to use a specific remote // SetRemoteForTunnel forces a tunnel to use a specific remote
func (c *Control) SetRemoteForTunnel(vpnIp iputil.VpnIp, addr udp.Addr) *ControlHostInfo { // Caller should take care to Unmap() any 4in6 addresses prior to calling.
hostInfo, err := c.f.hostMap.QueryVpnIp(vpnIp) func (c *Control) SetRemoteForTunnel(vpnIp netip.Addr, addr netip.AddrPort) *ControlHostInfo {
if err != nil { hostInfo := c.f.hostMap.QueryVpnIp(vpnIp)
if hostInfo == nil {
return nil return nil
} }
hostInfo.SetRemote(addr.Copy()) hostInfo.SetRemote(addr)
ch := copyHostInfo(hostInfo, c.f.hostMap.preferredRanges) ch := copyHostInfo(hostInfo, c.f.hostMap.GetPreferredRanges())
return &ch return &ch
} }
// CloseTunnel closes a fully established tunnel. If localOnly is false it will notify the remote end as well. // CloseTunnel closes a fully established tunnel. If localOnly is false it will notify the remote end as well.
func (c *Control) CloseTunnel(vpnIp iputil.VpnIp, localOnly bool) bool { // Caller should take care to Unmap() any 4in6 addresses prior to calling.
hostInfo, err := c.f.hostMap.QueryVpnIp(vpnIp) func (c *Control) CloseTunnel(vpnIp netip.Addr, localOnly bool) bool {
if err != nil { hostInfo := c.f.hostMap.QueryVpnIp(vpnIp)
if hostInfo == nil {
return false return false
} }
@@ -144,14 +215,13 @@ func (c *Control) CloseTunnel(vpnIp iputil.VpnIp, localOnly bool) bool {
0, 0,
hostInfo.ConnectionState, hostInfo.ConnectionState,
hostInfo, hostInfo,
hostInfo.remote,
[]byte{}, []byte{},
make([]byte, 12, 12), make([]byte, 12, 12),
make([]byte, mtu), make([]byte, mtu),
) )
} }
c.f.closeTunnel(hostInfo, false) c.f.closeTunnel(hostInfo)
return true return true
} }
@@ -159,60 +229,91 @@ func (c *Control) CloseTunnel(vpnIp iputil.VpnIp, localOnly bool) bool {
// the int returned is a count of tunnels closed // the int returned is a count of tunnels closed
func (c *Control) CloseAllTunnels(excludeLighthouses bool) (closed int) { func (c *Control) CloseAllTunnels(excludeLighthouses bool) (closed int) {
//TODO: this is probably better as a function in ConnectionManager or HostMap directly //TODO: this is probably better as a function in ConnectionManager or HostMap directly
c.f.hostMap.Lock() lighthouses := c.f.lightHouse.GetLighthouses()
for _, h := range c.f.hostMap.Hosts {
if excludeLighthouses {
if _, ok := c.f.lightHouse.lighthouses[h.vpnIp]; ok {
continue
}
}
if h.ConnectionState.ready { shutdown := func(h *HostInfo) {
c.f.send(header.CloseTunnel, 0, h.ConnectionState, h, h.remote, []byte{}, make([]byte, 12, 12), make([]byte, mtu)) if excludeLighthouses {
c.f.closeTunnel(h, true) if _, ok := lighthouses[h.vpnIp]; ok {
return
}
}
c.f.send(header.CloseTunnel, 0, h.ConnectionState, h, []byte{}, make([]byte, 12, 12), make([]byte, mtu))
c.f.closeTunnel(h)
c.l.WithField("vpnIp", h.vpnIp).WithField("udpAddr", h.remote). c.l.WithField("vpnIp", h.vpnIp).WithField("udpAddr", h.remote).
Debug("Sending close tunnel message") Debug("Sending close tunnel message")
closed++ closed++
} }
// Learn which hosts are being used as relays, so we can shut them down last.
relayingHosts := map[netip.Addr]*HostInfo{}
// Grab the hostMap lock to access the Relays map
c.f.hostMap.Lock()
for _, relayingHost := range c.f.hostMap.Relays {
relayingHosts[relayingHost.vpnIp] = relayingHost
} }
c.f.hostMap.Unlock() c.f.hostMap.Unlock()
hostInfos := []*HostInfo{}
// Grab the hostMap lock to access the Hosts map
c.f.hostMap.Lock()
for _, relayHost := range c.f.hostMap.Indexes {
if _, ok := relayingHosts[relayHost.vpnIp]; !ok {
hostInfos = append(hostInfos, relayHost)
}
}
c.f.hostMap.Unlock()
for _, h := range hostInfos {
shutdown(h)
}
for _, h := range relayingHosts {
shutdown(h)
}
return return
} }
func copyHostInfo(h *HostInfo, preferredRanges []*net.IPNet) ControlHostInfo { func (c *Control) Device() overlay.Device {
return c.f.inside
}
func copyHostInfo(h *HostInfo, preferredRanges []netip.Prefix) ControlHostInfo {
chi := ControlHostInfo{ chi := ControlHostInfo{
VpnIp: h.vpnIp.ToIP(), VpnIp: h.vpnIp,
LocalIndex: h.localIndexId, LocalIndex: h.localIndexId,
RemoteIndex: h.remoteIndexId, RemoteIndex: h.remoteIndexId,
RemoteAddrs: h.remotes.CopyAddrs(preferredRanges), RemoteAddrs: h.remotes.CopyAddrs(preferredRanges),
CachedPackets: len(h.packetStore), CurrentRelaysToMe: h.relayState.CopyRelayIps(),
CurrentRelaysThroughMe: h.relayState.CopyRelayForIps(),
CurrentRemote: h.remote,
} }
if h.ConnectionState != nil { if h.ConnectionState != nil {
chi.MessageCounter = atomic.LoadUint64(&h.ConnectionState.atomicMessageCounter) chi.MessageCounter = h.ConnectionState.messageCounter.Load()
} }
if c := h.GetCert(); c != nil { if c := h.GetCert(); c != nil {
chi.Cert = c.Copy() chi.Cert = c.Copy()
} }
if h.remote != nil {
chi.CurrentRemote = h.remote.Copy()
}
return chi return chi
} }
func listHostMap(hm *HostMap) []ControlHostInfo { func listHostMapHosts(hl controlHostLister) []ControlHostInfo {
hm.RLock() hosts := make([]ControlHostInfo, 0)
hosts := make([]ControlHostInfo, len(hm.Hosts)) pr := hl.GetPreferredRanges()
i := 0 hl.ForEachVpnIp(func(hostinfo *HostInfo) {
for _, v := range hm.Hosts { hosts = append(hosts, copyHostInfo(hostinfo, pr))
hosts[i] = copyHostInfo(v, hm.preferredRanges) })
i++ return hosts
} }
hm.RUnlock()
func listHostMapIndexes(hl controlHostLister) []ControlHostInfo {
hosts := make([]ControlHostInfo, 0)
pr := hl.GetPreferredRanges()
hl.ForEachIndex(func(hostinfo *HostInfo) {
hosts = append(hosts, copyHostInfo(hostinfo, pr))
})
return hosts return hosts
} }
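For callers embedding nebula, the Control surface is now keyed by netip.Addr throughout; a hedged usage sketch (the address is illustrative, the method names and signatures are the ones added in this file):

// Sketch only: exercising the netip-based Control API from this diff.
func sketchControlUsage(ctrl *Control) {
	vpnIp := netip.MustParseAddr("172.16.0.5") // illustrative address

	ctrl.CreateTunnel(vpnIp) // kicks off a handshake via the handshake manager

	if chi := ctrl.PrintTunnel(vpnIp); chi != nil {
		// ControlHostInfo now carries netip.AddrPort remotes plus relay information
		_ = chi.CurrentRemote
		_ = chi.CurrentRelaysToMe
	}

	if crt := ctrl.GetCertByVpnIp(vpnIp); crt != nil {
		_ = crt.Details.Name
	}

	_ = ctrl.ListHostmapHosts(false)  // established tunnels, keyed by vpn ip
	_ = ctrl.ListHostmapIndexes(true) // pending (handshaking) tunnels, keyed by local index

	ctrl.CloseTunnel(vpnIp, false) // localOnly=false also notifies the remote before tearing down
}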


@@ -2,15 +2,14 @@ package nebula
import (
"net"
+"net/netip"
"reflect"
"testing"
"time"
"github.com/sirupsen/logrus"
"github.com/slackhq/nebula/cert"
-"github.com/slackhq/nebula/iputil"
"github.com/slackhq/nebula/test"
-"github.com/slackhq/nebula/udp"
"github.com/stretchr/testify/assert"
)
@@ -18,16 +17,19 @@ func TestControl_GetHostInfoByVpnIp(t *testing.T) {
l := test.NewLogger()
// Special care must be taken to re-use all objects provided to the hostmap and certificate in the expectedInfo object
// To properly ensure we are not exposing core memory to the caller
-hm := NewHostMap(l, "test", &net.IPNet{}, make([]*net.IPNet, 0))
-remote1 := udp.NewAddr(net.ParseIP("0.0.0.100"), 4444)
-remote2 := udp.NewAddr(net.ParseIP("1:2:3:4:5:6:7:8"), 4444)
+hm := newHostMap(l, netip.Prefix{})
+hm.preferredRanges.Store(&[]netip.Prefix{})
+remote1 := netip.MustParseAddrPort("0.0.0.100:4444")
+remote2 := netip.MustParseAddrPort("[1:2:3:4:5:6:7:8]:4444")
ipNet := net.IPNet{
-IP: net.IPv4(1, 2, 3, 4),
+IP: remote1.Addr().AsSlice(),
Mask: net.IPMask{255, 255, 255, 0},
}
ipNet2 := net.IPNet{
-IP: net.ParseIP("1:2:3:4:5:6:7:8"),
+IP: remote2.Addr().AsSlice(),
Mask: net.IPMask{255, 255, 255, 0},
}
@@ -47,10 +49,14 @@ func TestControl_GetHostInfoByVpnIp(t *testing.T) {
Signature: []byte{1, 2, 1, 2, 1, 3},
}
-remotes := NewRemoteList()
-remotes.unlockedPrependV4(0, NewIp4AndPort(remote1.IP, uint32(remote1.Port)))
-remotes.unlockedPrependV6(0, NewIp6AndPort(remote2.IP, uint32(remote2.Port)))
-hm.Add(iputil.Ip2VpnIp(ipNet.IP), &HostInfo{
+remotes := NewRemoteList(nil)
+remotes.unlockedPrependV4(netip.IPv4Unspecified(), NewIp4AndPortFromNetIP(remote1.Addr(), remote1.Port()))
+remotes.unlockedPrependV6(netip.IPv4Unspecified(), NewIp6AndPortFromNetIP(remote2.Addr(), remote2.Port()))
+vpnIp, ok := netip.AddrFromSlice(ipNet.IP)
+assert.True(t, ok)
+hm.unlockedAddHostInfo(&HostInfo{
remote: remote1,
remotes: remotes,
ConnectionState: &ConnectionState{
@@ -58,10 +64,18 @@ func TestControl_GetHostInfoByVpnIp(t *testing.T) {
},
remoteIndexId: 200,
localIndexId: 201,
-vpnIp: iputil.Ip2VpnIp(ipNet.IP),
-})
+vpnIp: vpnIp,
+relayState: RelayState{
+relays: nil,
+relayForByIp: map[netip.Addr]*Relay{},
+relayForByIdx: map[uint32]*Relay{},
+},
+}, &Interface{})
-hm.Add(iputil.Ip2VpnIp(ipNet2.IP), &HostInfo{
+vpnIp2, ok := netip.AddrFromSlice(ipNet2.IP)
+assert.True(t, ok)
+hm.unlockedAddHostInfo(&HostInfo{
remote: remote1,
remotes: remotes,
ConnectionState: &ConnectionState{
@@ -69,8 +83,13 @@ func TestControl_GetHostInfoByVpnIp(t *testing.T) {
},
remoteIndexId: 200,
localIndexId: 201,
-vpnIp: iputil.Ip2VpnIp(ipNet2.IP),
-})
+vpnIp: vpnIp2,
+relayState: RelayState{
+relays: nil,
+relayForByIp: map[netip.Addr]*Relay{},
+relayForByIdx: map[uint32]*Relay{},
+},
+}, &Interface{})
c := Control{
f: &Interface{
@@ -79,26 +98,29 @@ func TestControl_GetHostInfoByVpnIp(t *testing.T) {
l: logrus.New(),
}
-thi := c.GetHostInfoByVpnIp(iputil.Ip2VpnIp(ipNet.IP), false)
+thi := c.GetHostInfoByVpnIp(vpnIp, false)
expectedInfo := ControlHostInfo{
-VpnIp: net.IPv4(1, 2, 3, 4).To4(),
+VpnIp: vpnIp,
LocalIndex: 201,
RemoteIndex: 200,
-RemoteAddrs: []*udp.Addr{remote2, remote1},
-CachedPackets: 0,
+RemoteAddrs: []netip.AddrPort{remote2, remote1},
Cert: crt.Copy(),
MessageCounter: 0,
-CurrentRemote: udp.NewAddr(net.ParseIP("0.0.0.100"), 4444),
+CurrentRemote: remote1,
+CurrentRelaysToMe: []netip.Addr{},
+CurrentRelaysThroughMe: []netip.Addr{},
}
// Make sure we don't have any unexpected fields
-assertFields(t, []string{"VpnIp", "LocalIndex", "RemoteIndex", "RemoteAddrs", "CachedPackets", "Cert", "MessageCounter", "CurrentRemote"}, thi)
-test.AssertDeepCopyEqual(t, &expectedInfo, thi)
+assertFields(t, []string{"VpnIp", "LocalIndex", "RemoteIndex", "RemoteAddrs", "Cert", "MessageCounter", "CurrentRemote", "CurrentRelaysToMe", "CurrentRelaysThroughMe"}, thi)
+assert.EqualValues(t, &expectedInfo, thi)
+//TODO: netip.Addr reuses global memory for zone identifiers which breaks our "no reused memory check" here
+//test.AssertDeepCopyEqual(t, &expectedInfo, thi)
// Make sure we don't panic if the host info doesn't have a cert yet
assert.NotPanics(t, func() {
-thi = c.GetHostInfoByVpnIp(iputil.Ip2VpnIp(ipNet2.IP), false)
+thi = c.GetHostInfoByVpnIp(vpnIp2, false)
})
}


@@ -4,22 +4,23 @@
package nebula
import (
-"net"
+"net/netip"
+"github.com/slackhq/nebula/cert"
"github.com/google/gopacket"
"github.com/google/gopacket/layers"
"github.com/slackhq/nebula/header"
-"github.com/slackhq/nebula/iputil"
"github.com/slackhq/nebula/overlay"
"github.com/slackhq/nebula/udp"
)
-// WaitForTypeByIndex will pipe all messages from this control device into the pipeTo control device
+// WaitForType will pipe all messages from this control device into the pipeTo control device
// returning after a message matching the criteria has been piped
func (c *Control) WaitForType(msgType header.MessageType, subType header.MessageSubType, pipeTo *Control) {
h := &header.H{}
for {
-p := c.f.outside.Get(true)
+p := c.f.outside.(*udp.TesterConn).Get(true)
if err := h.Parse(p.Data); err != nil {
panic(err)
}
@@ -35,7 +36,7 @@ func (c *Control) WaitForTypeByIndex(toIndex uint32, msgType header.MessageType,
func (c *Control) WaitForTypeByIndex(toIndex uint32, msgType header.MessageType, subType header.MessageSubType, pipeTo *Control) {
h := &header.H{}
for {
-p := c.f.outside.Get(true)
+p := c.f.outside.(*udp.TesterConn).Get(true)
if err := h.Parse(p.Data); err != nil {
panic(err)
}
@@ -48,21 +49,32 @@ func (c *Control) WaitForTypeByIndex(toIndex uint32, msgType header.MessageType,
// InjectLightHouseAddr will push toAddr into the local lighthouse cache for the vpnIp
// This is necessary if you did not configure static hosts or are not running a lighthouse
-func (c *Control) InjectLightHouseAddr(vpnIp net.IP, toAddr *net.UDPAddr) {
+func (c *Control) InjectLightHouseAddr(vpnIp netip.Addr, toAddr netip.AddrPort) {
c.f.lightHouse.Lock()
-remoteList := c.f.lightHouse.unlockedGetRemoteList(iputil.Ip2VpnIp(vpnIp))
+remoteList := c.f.lightHouse.unlockedGetRemoteList(vpnIp)
remoteList.Lock()
defer remoteList.Unlock()
c.f.lightHouse.Unlock()
-iVpnIp := iputil.Ip2VpnIp(vpnIp)
-if v4 := toAddr.IP.To4(); v4 != nil {
-remoteList.unlockedPrependV4(iVpnIp, NewIp4AndPort(v4, uint32(toAddr.Port)))
+if toAddr.Addr().Is4() {
+remoteList.unlockedPrependV4(vpnIp, NewIp4AndPortFromNetIP(toAddr.Addr(), toAddr.Port()))
} else {
-remoteList.unlockedPrependV6(iVpnIp, NewIp6AndPort(toAddr.IP, uint32(toAddr.Port)))
+remoteList.unlockedPrependV6(vpnIp, NewIp6AndPortFromNetIP(toAddr.Addr(), toAddr.Port()))
}
}
+// InjectRelays will push relayVpnIps into the local lighthouse cache for the vpnIp
+// This is necessary to inform an initiator of possible relays for communicating with a responder
+func (c *Control) InjectRelays(vpnIp netip.Addr, relayVpnIps []netip.Addr) {
+c.f.lightHouse.Lock()
+remoteList := c.f.lightHouse.unlockedGetRemoteList(vpnIp)
+remoteList.Lock()
+defer remoteList.Unlock()
+c.f.lightHouse.Unlock()
+remoteList.unlockedSetRelay(vpnIp, vpnIp, relayVpnIps)
+}
// GetFromTun will pull a packet off the tun side of nebula
func (c *Control) GetFromTun(block bool) []byte {
return c.f.inside.(*overlay.TestTun).Get(block)
@@ -70,11 +82,11 @@ func (c *Control) GetFromTun(block bool) []byte {
// GetFromUDP will pull a udp packet off the udp side of nebula
func (c *Control) GetFromUDP(block bool) *udp.Packet {
-return c.f.outside.Get(block)
+return c.f.outside.(*udp.TesterConn).Get(block)
}
func (c *Control) GetUDPTxChan() <-chan *udp.Packet {
-return c.f.outside.TxPackets
+return c.f.outside.(*udp.TesterConn).TxPackets
}
func (c *Control) GetTunTxChan() <-chan []byte {
@@ -83,17 +95,18 @@ func (c *Control) GetTunTxChan() <-chan []byte {
// InjectUDPPacket will inject a packet into the udp side of nebula
func (c *Control) InjectUDPPacket(p *udp.Packet) {
-c.f.outside.Send(p)
+c.f.outside.(*udp.TesterConn).Send(p)
}
// InjectTunUDPPacket puts a udp packet on the tun interface. Using UDP here because it's a simpler protocol
-func (c *Control) InjectTunUDPPacket(toIp net.IP, toPort uint16, fromPort uint16, data []byte) {
+func (c *Control) InjectTunUDPPacket(toIp netip.Addr, toPort uint16, fromPort uint16, data []byte) {
+//TODO: IPV6-WORK
ip := layers.IPv4{
Version: 4,
TTL: 64,
Protocol: layers.IPProtocolUDP,
-SrcIP: c.f.inside.Cidr().IP,
-DstIP: toIp,
+SrcIP: c.f.inside.Cidr().Addr().Unmap().AsSlice(),
+DstIP: toIp.Unmap().AsSlice(),
}
udp := layers.UDP{
@@ -118,16 +131,32 @@ func (c *Control) InjectTunUDPPacket(toIp net.IP, toPort uint16, fromPort uint16
c.f.inside.(*overlay.TestTun).Send(buffer.Bytes())
}
-func (c *Control) GetUDPAddr() string {
-return c.f.outside.Addr.String()
-}
-func (c *Control) KillPendingTunnel(vpnIp net.IP) bool {
-hostinfo, ok := c.f.handshakeManager.pendingHostMap.Hosts[iputil.Ip2VpnIp(vpnIp)]
-if !ok {
+func (c *Control) GetVpnIp() netip.Addr {
+return c.f.myVpnNet.Addr()
+}
+func (c *Control) GetUDPAddr() netip.AddrPort {
+return c.f.outside.(*udp.TesterConn).Addr
+}
+func (c *Control) KillPendingTunnel(vpnIp netip.Addr) bool {
+hostinfo := c.f.handshakeManager.QueryVpnIp(vpnIp)
+if hostinfo == nil {
return false
}
-c.f.handshakeManager.pendingHostMap.DeleteHostInfo(hostinfo)
+c.f.handshakeManager.DeleteHostInfo(hostinfo)
return true
}
func (c *Control) GetHostmap() *HostMap {
return c.f.hostMap
}
func (c *Control) GetCert() *cert.NebulaCertificate {
return c.f.pki.GetCertState().Certificate
}
func (c *Control) ReHandshake(vpnIp netip.Addr) {
c.f.handshakeManager.StartHandshake(vpnIp, nil)
}
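Taken together these test-only methods let an e2e test stand in for a lighthouse. A hedged sketch of wiring two nodes and a relay by hand follows; newSimpleServer and NewTestCaCert are the e2e helpers shown later in this diff, and the addresses and the relay config key are illustrative rather than copied from a real test.

```
// Sketch: hand-wire two nodes plus a relay using the helpers above.
ca, _, caKey, _ := NewTestCaCert(time.Now(), time.Now().Add(10*time.Minute), nil, nil, nil)
myControl, myNet, myUdp, _ := newSimpleServer(ca, caKey, "me", "10.128.0.1/24", nil)
relayControl, relayNet, relayUdp, _ := newSimpleServer(ca, caKey, "relay", "10.128.0.128/24", m{"relay": m{"am_relay": true}})
theirControl, theirNet, theirUdp, _ := newSimpleServer(ca, caKey, "them", "10.128.0.2/24", nil)

// "me" learns the relay's underlay address and that "them" should be reached through it.
myControl.InjectLightHouseAddr(relayNet.Addr(), relayUdp)
myControl.InjectRelays(theirNet.Addr(), []netip.Addr{relayNet.Addr()})
// The relay and "them" still need direct underlay addresses.
relayControl.InjectLightHouseAddr(theirNet.Addr(), theirUdp)
theirControl.InjectLightHouseAddr(myNet.Addr(), myUdp)
```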


@@ -1,13 +0,0 @@
[Unit]
Description=nebula
Wants=basic.target network-online.target
After=basic.target network.target network-online.target
[Service]
SyslogIdentifier=nebula
ExecReload=/bin/kill -HUP $MAINPID
ExecStart=/usr/bin/nebula -config /etc/nebula/config.yml
Restart=always
[Install]
WantedBy=multi-user.target


@@ -1,15 +0,0 @@
[Unit]
Description=Nebula overlay networking tool
After=basic.target network.target network-online.target
Before=sshd.service
Wants=basic.target network-online.target
[Service]
ExecReload=/bin/kill -HUP $MAINPID
ExecStart=/usr/bin/nebula -config /etc/nebula/config.yml
Restart=always
SyslogIdentifier=nebula
[Install]
WantedBy=multi-user.target


@@ -3,13 +3,14 @@ package nebula
import (
"fmt"
"net"
+"net/netip"
"strconv"
+"strings"
"sync"
"github.com/miekg/dns"
"github.com/sirupsen/logrus"
"github.com/slackhq/nebula/config"
-"github.com/slackhq/nebula/iputil"
)
// This whole thing should be rewritten to use context
@@ -33,37 +34,38 @@ func newDnsRecords(hostMap *HostMap) *dnsRecords {
func (d *dnsRecords) Query(data string) string {
d.RLock()
-if r, ok := d.dnsMap[data]; ok {
-d.RUnlock()
+defer d.RUnlock()
+if r, ok := d.dnsMap[strings.ToLower(data)]; ok {
return r
}
-d.RUnlock()
return ""
}
func (d *dnsRecords) QueryCert(data string) string {
-ip := net.ParseIP(data[:len(data)-1])
-if ip == nil {
-return ""
-}
-iip := iputil.Ip2VpnIp(ip)
-hostinfo, err := d.hostMap.QueryVpnIp(iip)
+ip, err := netip.ParseAddr(data[:len(data)-1])
if err != nil {
return ""
}
+hostinfo := d.hostMap.QueryVpnIp(ip)
+if hostinfo == nil {
+return ""
+}
q := hostinfo.GetCert()
if q == nil {
return ""
}
cert := q.Details
-c := fmt.Sprintf("\"Name: %s\" \"Ips: %s\" \"Subnets %s\" \"Groups %s\" \"NotBefore %s\" \"NotAFter %s\" \"PublicKey %x\" \"IsCA %t\" \"Issuer %s\"", cert.Name, cert.Ips, cert.Subnets, cert.Groups, cert.NotBefore, cert.NotAfter, cert.PublicKey, cert.IsCA, cert.Issuer)
+c := fmt.Sprintf("\"Name: %s\" \"Ips: %s\" \"Subnets %s\" \"Groups %s\" \"NotBefore %s\" \"NotAfter %s\" \"PublicKey %x\" \"IsCA %t\" \"Issuer %s\"", cert.Name, cert.Ips, cert.Subnets, cert.Groups, cert.NotBefore, cert.NotAfter, cert.PublicKey, cert.IsCA, cert.Issuer)
return c
}
func (d *dnsRecords) Add(host, data string) {
d.Lock()
-d.dnsMap[host] = data
-d.Unlock()
+defer d.Unlock()
+d.dnsMap[strings.ToLower(host)] = data
}
func parseQuery(l *logrus.Logger, m *dns.Msg, w dns.ResponseWriter) {
@@ -80,7 +82,11 @@ func parseQuery(l *logrus.Logger, m *dns.Msg, w dns.ResponseWriter) {
}
case dns.TypeTXT:
a, _, _ := net.SplitHostPort(w.RemoteAddr().String())
-b := net.ParseIP(a)
+b, err := netip.ParseAddr(a)
+if err != nil {
+return
+}
// We don't answer these queries from non nebula nodes or localhost
//l.Debugf("Does %s contain %s", b, dnsR.hostMap.vpnCIDR)
if !dnsR.hostMap.vpnCIDR.Contains(b) && a != "127.0.0.1" {
@@ -96,6 +102,10 @@ func parseQuery(l *logrus.Logger, m *dns.Msg, w dns.ResponseWriter) {
}
}
}
+if len(m.Answer) == 0 {
+m.Rcode = dns.RcodeNameError
+}
}
func handleDnsRequest(l *logrus.Logger, w dns.ResponseWriter, r *dns.Msg) {
@@ -129,13 +139,18 @@ func dnsMain(l *logrus.Logger, hostMap *HostMap, c *config.C) func() {
}
func getDnsServerAddr(c *config.C) string {
-return c.GetString("lighthouse.dns.host", "") + ":" + strconv.Itoa(c.GetInt("lighthouse.dns.port", 53))
+dnsHost := strings.TrimSpace(c.GetString("lighthouse.dns.host", ""))
+// Old guidance was to provide the literal `[::]` in `lighthouse.dns.host` but that won't resolve.
+if dnsHost == "[::]" {
+dnsHost = "::"
+}
+return net.JoinHostPort(dnsHost, strconv.Itoa(c.GetInt("lighthouse.dns.port", 53)))
}
func startDns(l *logrus.Logger, c *config.C) {
dnsAddr = getDnsServerAddr(c)
dnsServer = &dns.Server{Addr: dnsAddr, Net: "udp"}
-l.WithField("dnsListener", dnsAddr).Infof("Starting DNS responder")
+l.WithField("dnsListener", dnsAddr).Info("Starting DNS responder")
err := dnsServer.ListenAndServe()
defer dnsServer.Shutdown()
if err != nil {


@@ -4,6 +4,8 @@ import (
"testing" "testing"
"github.com/miekg/dns" "github.com/miekg/dns"
"github.com/slackhq/nebula/config"
"github.com/stretchr/testify/assert"
) )
func TestParsequery(t *testing.T) { func TestParsequery(t *testing.T) {
@@ -17,3 +19,40 @@ func TestParsequery(t *testing.T) {
//parseQuery(m) //parseQuery(m)
} }
func Test_getDnsServerAddr(t *testing.T) {
c := config.NewC(nil)
c.Settings["lighthouse"] = map[interface{}]interface{}{
"dns": map[interface{}]interface{}{
"host": "0.0.0.0",
"port": "1",
},
}
assert.Equal(t, "0.0.0.0:1", getDnsServerAddr(c))
c.Settings["lighthouse"] = map[interface{}]interface{}{
"dns": map[interface{}]interface{}{
"host": "::",
"port": "1",
},
}
assert.Equal(t, "[::]:1", getDnsServerAddr(c))
c.Settings["lighthouse"] = map[interface{}]interface{}{
"dns": map[interface{}]interface{}{
"host": "[::]",
"port": "1",
},
}
assert.Equal(t, "[::]:1", getDnsServerAddr(c))
// Make sure whitespace doesn't mess us up
c.Settings["lighthouse"] = map[interface{}]interface{}{
"dns": map[interface{}]interface{}{
"host": "[::] ",
"port": "1",
},
}
assert.Equal(t, "[::]:1", getDnsServerAddr(c))
}

docker/Dockerfile (new file)

@@ -0,0 +1,11 @@
FROM gcr.io/distroless/static:latest
ARG TARGETOS TARGETARCH
COPY build/$TARGETOS-$TARGETARCH/nebula /nebula
COPY build/$TARGETOS-$TARGETARCH/nebula-cert /nebula-cert
VOLUME ["/config"]
ENTRYPOINT ["/nebula"]
# Allow users to override the args passed to nebula
CMD ["-config", "/config/config.yml"]

docker/README.md (new file)

@@ -0,0 +1,24 @@
# NebulaOSS/nebula Docker Image
## Building
From the root of the repository, run `make docker`.
## Running
To run the built image, use the following command:
```
docker run \
--name nebula \
--network host \
--cap-add NET_ADMIN \
--volume ./config:/config \
--rm \
nebulaoss/nebula
```
A few notes:
- The `NET_ADMIN` capability is necessary to create the tun adapter on the host (this is unnecessary if the tun device is disabled.)
- `--volume ./config:/config` should point to a directory that contains your `config.yml` and any other necessary files.

File diff suppressed because it is too large.

e2e/helpers.go (new file)

@@ -0,0 +1,125 @@
package e2e
import (
"crypto/rand"
"io"
"net"
"net/netip"
"time"
"github.com/slackhq/nebula/cert"
"golang.org/x/crypto/curve25519"
"golang.org/x/crypto/ed25519"
)
// NewTestCaCert will generate a CA cert
func NewTestCaCert(before, after time.Time, ips, subnets []netip.Prefix, groups []string) (*cert.NebulaCertificate, []byte, []byte, []byte) {
pub, priv, err := ed25519.GenerateKey(rand.Reader)
if before.IsZero() {
before = time.Now().Add(time.Second * -60).Round(time.Second)
}
if after.IsZero() {
after = time.Now().Add(time.Second * 60).Round(time.Second)
}
nc := &cert.NebulaCertificate{
Details: cert.NebulaCertificateDetails{
Name: "test ca",
NotBefore: time.Unix(before.Unix(), 0),
NotAfter: time.Unix(after.Unix(), 0),
PublicKey: pub,
IsCA: true,
InvertedGroups: make(map[string]struct{}),
},
}
if len(ips) > 0 {
nc.Details.Ips = make([]*net.IPNet, len(ips))
for i, ip := range ips {
nc.Details.Ips[i] = &net.IPNet{IP: ip.Addr().AsSlice(), Mask: net.CIDRMask(ip.Bits(), ip.Addr().BitLen())}
}
}
if len(subnets) > 0 {
nc.Details.Subnets = make([]*net.IPNet, len(subnets))
for i, ip := range subnets {
nc.Details.Subnets[i] = &net.IPNet{IP: ip.Addr().AsSlice(), Mask: net.CIDRMask(ip.Bits(), ip.Addr().BitLen())}
}
}
if len(groups) > 0 {
nc.Details.Groups = groups
}
err = nc.Sign(cert.Curve_CURVE25519, priv)
if err != nil {
panic(err)
}
pem, err := nc.MarshalToPEM()
if err != nil {
panic(err)
}
return nc, pub, priv, pem
}
// NewTestCert will generate a signed certificate with the provided details.
// Expiry times are defaulted if you do not pass them in
func NewTestCert(ca *cert.NebulaCertificate, key []byte, name string, before, after time.Time, ip netip.Prefix, subnets []netip.Prefix, groups []string) (*cert.NebulaCertificate, []byte, []byte, []byte) {
issuer, err := ca.Sha256Sum()
if err != nil {
panic(err)
}
if before.IsZero() {
before = time.Now().Add(time.Second * -60).Round(time.Second)
}
if after.IsZero() {
after = time.Now().Add(time.Second * 60).Round(time.Second)
}
pub, rawPriv := x25519Keypair()
ipb := ip.Addr().AsSlice()
nc := &cert.NebulaCertificate{
Details: cert.NebulaCertificateDetails{
Name: name,
Ips: []*net.IPNet{{IP: ipb[:], Mask: net.CIDRMask(ip.Bits(), ip.Addr().BitLen())}},
//Subnets: subnets,
Groups: groups,
NotBefore: time.Unix(before.Unix(), 0),
NotAfter: time.Unix(after.Unix(), 0),
PublicKey: pub,
IsCA: false,
Issuer: issuer,
InvertedGroups: make(map[string]struct{}),
},
}
err = nc.Sign(ca.Details.Curve, key)
if err != nil {
panic(err)
}
pem, err := nc.MarshalToPEM()
if err != nil {
panic(err)
}
return nc, pub, cert.MarshalX25519PrivateKey(rawPriv), pem
}
func x25519Keypair() ([]byte, []byte) {
privkey := make([]byte, 32)
if _, err := io.ReadFull(rand.Reader, privkey); err != nil {
panic(err)
}
pubkey, err := curve25519.X25519(privkey, curve25519.Basepoint)
if err != nil {
panic(err)
}
return pubkey, privkey
}
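As a quick, hedged illustration of the helpers above: a test can mint a throwaway CA and a signed host certificate in a few lines. The test name and assertions below are invented for the example.

```
// Sketch: mint a throwaway CA and a host certificate with the helpers above.
func TestMintCerts(t *testing.T) {
	// Zero times let the helpers default to a +/-60 second validity window.
	ca, _, caKey, _ := NewTestCaCert(time.Time{}, time.Time{}, nil, nil, nil)
	hostPrefix := netip.MustParsePrefix("10.128.0.1/24")
	crt, _, _, pem := NewTestCert(ca, caKey, "host-a", time.Time{}, time.Time{}, hostPrefix, nil, []string{"test-group"})
	if crt.Details.Name != "host-a" || len(pem) == 0 {
		t.Fatal("unexpected certificate")
	}
}
```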


@@ -4,15 +4,14 @@
package e2e
import (
-"crypto/rand"
"fmt"
"io"
-"io/ioutil"
-"net"
+"net/netip"
"os"
"testing"
"time"
+"dario.cat/mergo"
"github.com/google/gopacket"
"github.com/google/gopacket/layers"
"github.com/sirupsen/logrus"
@@ -20,27 +19,32 @@ import (
"github.com/slackhq/nebula/cert"
"github.com/slackhq/nebula/config"
"github.com/slackhq/nebula/e2e/router"
-"github.com/slackhq/nebula/iputil"
"github.com/stretchr/testify/assert"
-"golang.org/x/crypto/curve25519"
-"golang.org/x/crypto/ed25519"
"gopkg.in/yaml.v2"
)
type m map[string]interface{}
// newSimpleServer creates a nebula instance with many assumptions
-func newSimpleServer(caCrt *cert.NebulaCertificate, caKey []byte, name string, udpIp net.IP) (*nebula.Control, net.IP, *net.UDPAddr) {
+func newSimpleServer(caCrt *cert.NebulaCertificate, caKey []byte, name string, sVpnIpNet string, overrides m) (*nebula.Control, netip.Prefix, netip.AddrPort, *config.C) {
l := NewTestLogger()
-vpnIpNet := &net.IPNet{IP: make([]byte, len(udpIp)), Mask: net.IPMask{255, 255, 255, 0}}
-copy(vpnIpNet.IP, udpIp)
-vpnIpNet.IP[1] += 128
-udpAddr := net.UDPAddr{
-IP: udpIp,
-Port: 4242,
+vpnIpNet, err := netip.ParsePrefix(sVpnIpNet)
+if err != nil {
+panic(err)
}
-_, _, myPrivKey, myPEM := newTestCert(caCrt, caKey, name, time.Now(), time.Now().Add(5*time.Minute), vpnIpNet, nil, []string{})
+var udpAddr netip.AddrPort
+if vpnIpNet.Addr().Is4() {
+budpIp := vpnIpNet.Addr().As4()
+budpIp[1] -= 128
+udpAddr = netip.AddrPortFrom(netip.AddrFrom4(budpIp), 4242)
+} else {
+budpIp := vpnIpNet.Addr().As16()
+budpIp[13] -= 128
+udpAddr = netip.AddrPortFrom(netip.AddrFrom16(budpIp), 4242)
+}
+_, _, myPrivKey, myPEM := NewTestCert(caCrt, caKey, name, time.Now(), time.Now().Add(5*time.Minute), vpnIpNet, nil, []string{})
caB, err := caCrt.MarshalToPEM()
if err != nil {
@@ -70,14 +74,27 @@ func newSimpleServer(caCrt *cert.NebulaCertificate, caKey []byte, name string, u
// "try_interval": "1s",
//},
"listen": m{
-"host": udpAddr.IP.String(),
-"port": udpAddr.Port,
+"host": udpAddr.Addr().String(),
+"port": udpAddr.Port(),
},
"logging": m{
"timestamp_format": fmt.Sprintf("%v 15:04:05.000000", name),
"level": l.Level.String(),
},
+"timers": m{
+"pending_deletion_interval": 2,
+"connection_alive_interval": 2,
+},
}
+if overrides != nil {
+err = mergo.Merge(&overrides, mc, mergo.WithAppendSlice)
+if err != nil {
+panic(err)
+}
+mc = overrides
+}
cb, err := yaml.Marshal(mc)
if err != nil {
panic(err)
@@ -92,113 +109,7 @@ func newSimpleServer(caCrt *cert.NebulaCertificate, caKey []byte, name string, u
panic(err)
}
-return control, vpnIpNet.IP, &udpAddr
+return control, vpnIpNet, udpAddr, c
}
// newTestCaCert will generate a CA cert
func newTestCaCert(before, after time.Time, ips, subnets []*net.IPNet, groups []string) (*cert.NebulaCertificate, []byte, []byte, []byte) {
pub, priv, err := ed25519.GenerateKey(rand.Reader)
if before.IsZero() {
before = time.Now().Add(time.Second * -60).Round(time.Second)
}
if after.IsZero() {
after = time.Now().Add(time.Second * 60).Round(time.Second)
}
nc := &cert.NebulaCertificate{
Details: cert.NebulaCertificateDetails{
Name: "test ca",
NotBefore: time.Unix(before.Unix(), 0),
NotAfter: time.Unix(after.Unix(), 0),
PublicKey: pub,
IsCA: true,
InvertedGroups: make(map[string]struct{}),
},
}
if len(ips) > 0 {
nc.Details.Ips = ips
}
if len(subnets) > 0 {
nc.Details.Subnets = subnets
}
if len(groups) > 0 {
nc.Details.Groups = groups
}
err = nc.Sign(priv)
if err != nil {
panic(err)
}
pem, err := nc.MarshalToPEM()
if err != nil {
panic(err)
}
return nc, pub, priv, pem
}
// newTestCert will generate a signed certificate with the provided details.
// Expiry times are defaulted if you do not pass them in
func newTestCert(ca *cert.NebulaCertificate, key []byte, name string, before, after time.Time, ip *net.IPNet, subnets []*net.IPNet, groups []string) (*cert.NebulaCertificate, []byte, []byte, []byte) {
issuer, err := ca.Sha256Sum()
if err != nil {
panic(err)
}
if before.IsZero() {
before = time.Now().Add(time.Second * -60).Round(time.Second)
}
if after.IsZero() {
after = time.Now().Add(time.Second * 60).Round(time.Second)
}
pub, rawPriv := x25519Keypair()
nc := &cert.NebulaCertificate{
Details: cert.NebulaCertificateDetails{
Name: name,
Ips: []*net.IPNet{ip},
Subnets: subnets,
Groups: groups,
NotBefore: time.Unix(before.Unix(), 0),
NotAfter: time.Unix(after.Unix(), 0),
PublicKey: pub,
IsCA: false,
Issuer: issuer,
InvertedGroups: make(map[string]struct{}),
},
}
err = nc.Sign(key)
if err != nil {
panic(err)
}
pem, err := nc.MarshalToPEM()
if err != nil {
panic(err)
}
return nc, pub, cert.MarshalX25519PrivateKey(rawPriv), pem
}
func x25519Keypair() ([]byte, []byte) {
privkey := make([]byte, 32)
if _, err := io.ReadFull(rand.Reader, privkey); err != nil {
panic(err)
}
pubkey, err := curve25519.X25519(privkey, curve25519.Basepoint)
if err != nil {
panic(err)
}
return pubkey, privkey
}
type doneCb func()
@@ -219,35 +130,32 @@ func deadline(t *testing.T, seconds time.Duration) doneCb {
}
}
-func assertTunnel(t *testing.T, vpnIpA, vpnIpB net.IP, controlA, controlB *nebula.Control, r *router.R) {
+func assertTunnel(t *testing.T, vpnIpA, vpnIpB netip.Addr, controlA, controlB *nebula.Control, r *router.R) {
// Send a packet from them to me
controlB.InjectTunUDPPacket(vpnIpA, 80, 90, []byte("Hi from B"))
-bPacket := r.RouteUntilTxTun(controlB, controlA)
+bPacket := r.RouteForAllUntilTxTun(controlA)
assertUdpPacket(t, []byte("Hi from B"), bPacket, vpnIpB, vpnIpA, 90, 80)
// And once more from me to them
controlA.InjectTunUDPPacket(vpnIpB, 80, 90, []byte("Hello from A"))
-aPacket := r.RouteUntilTxTun(controlA, controlB)
+aPacket := r.RouteForAllUntilTxTun(controlB)
assertUdpPacket(t, []byte("Hello from A"), aPacket, vpnIpA, vpnIpB, 90, 80)
}
-func assertHostInfoPair(t *testing.T, addrA, addrB *net.UDPAddr, vpnIpA, vpnIpB net.IP, controlA, controlB *nebula.Control) {
+func assertHostInfoPair(t *testing.T, addrA, addrB netip.AddrPort, vpnIpA, vpnIpB netip.Addr, controlA, controlB *nebula.Control) {
// Get both host infos
-hBinA := controlA.GetHostInfoByVpnIp(iputil.Ip2VpnIp(vpnIpB), false)
+hBinA := controlA.GetHostInfoByVpnIp(vpnIpB, false)
assert.NotNil(t, hBinA, "Host B was not found by vpnIp in controlA")
-hAinB := controlB.GetHostInfoByVpnIp(iputil.Ip2VpnIp(vpnIpA), false)
+hAinB := controlB.GetHostInfoByVpnIp(vpnIpA, false)
assert.NotNil(t, hAinB, "Host A was not found by vpnIp in controlB")
// Check that both vpn and real addr are correct
assert.Equal(t, vpnIpB, hBinA.VpnIp, "Host B VpnIp is wrong in control A")
assert.Equal(t, vpnIpA, hAinB.VpnIp, "Host A VpnIp is wrong in control B")
-assert.Equal(t, addrB.IP.To16(), hBinA.CurrentRemote.IP.To16(), "Host B remote ip is wrong in control A")
-assert.Equal(t, addrA.IP.To16(), hAinB.CurrentRemote.IP.To16(), "Host A remote ip is wrong in control B")
-assert.Equal(t, addrB.Port, int(hBinA.CurrentRemote.Port), "Host B remote port is wrong in control A")
-assert.Equal(t, addrA.Port, int(hAinB.CurrentRemote.Port), "Host A remote port is wrong in control B")
+assert.Equal(t, addrB, hBinA.CurrentRemote, "Host B remote is wrong in control A")
+assert.Equal(t, addrA, hAinB.CurrentRemote, "Host A remote is wrong in control B")
// Check that our indexes match
assert.Equal(t, hBinA.LocalIndex, hAinB.RemoteIndex, "Host B local index does not match host A remote index")
@@ -270,13 +178,13 @@ func assertHostInfoPair(t *testing.T, addrA, addrB *net.UDPAddr, vpnIpA, vpnIpB
//checkIndexes("hmB", hmB, hAinB)
}
-func assertUdpPacket(t *testing.T, expected, b []byte, fromIp, toIp net.IP, fromPort, toPort uint16) {
+func assertUdpPacket(t *testing.T, expected, b []byte, fromIp, toIp netip.Addr, fromPort, toPort uint16) {
packet := gopacket.NewPacket(b, layers.LayerTypeIPv4, gopacket.Lazy)
v4 := packet.Layer(layers.LayerTypeIPv4).(*layers.IPv4)
assert.NotNil(t, v4, "No ipv4 data found")
-assert.Equal(t, fromIp, v4.SrcIP, "Source ip was incorrect")
-assert.Equal(t, toIp, v4.DstIP, "Dest ip was incorrect")
+assert.Equal(t, fromIp.AsSlice(), []byte(v4.SrcIP), "Source ip was incorrect")
+assert.Equal(t, toIp.AsSlice(), []byte(v4.DstIP), "Dest ip was incorrect")
udp := packet.Layer(layers.LayerTypeUDP).(*layers.UDP)
assert.NotNil(t, udp, "No udp data found")
@@ -294,7 +202,8 @@ func NewTestLogger() *logrus.Logger {
v := os.Getenv("TEST_LOGS")
if v == "" {
-l.SetOutput(ioutil.Discard)
+l.SetOutput(io.Discard)
+l.SetLevel(logrus.PanicLevel)
return l
}

e2e/router/hostmap.go (new file)

@@ -0,0 +1,145 @@
//go:build e2e_testing
// +build e2e_testing
package router
import (
"fmt"
"net/netip"
"sort"
"strings"
"github.com/slackhq/nebula"
)
type edge struct {
from string
to string
dual bool
}
func renderHostmaps(controls ...*nebula.Control) string {
var lines []*edge
r := "graph TB\n"
for _, c := range controls {
sr, se := renderHostmap(c)
r += sr
for _, e := range se {
add := true
// Collapse duplicate edges into a bi-directionally connected edge
for _, ge := range lines {
if e.to == ge.from && e.from == ge.to {
add = false
ge.dual = true
break
}
}
if add {
lines = append(lines, e)
}
}
}
for _, line := range lines {
if line.dual {
r += fmt.Sprintf("\t%v <--> %v\n", line.from, line.to)
} else {
r += fmt.Sprintf("\t%v --> %v\n", line.from, line.to)
}
}
return r
}
func renderHostmap(c *nebula.Control) (string, []*edge) {
var lines []string
var globalLines []*edge
clusterName := strings.Trim(c.GetCert().Details.Name, " ")
clusterVpnIp := c.GetCert().Details.Ips[0].IP
r := fmt.Sprintf("\tsubgraph %s[\"%s (%s)\"]\n", clusterName, clusterName, clusterVpnIp)
hm := c.GetHostmap()
hm.RLock()
defer hm.RUnlock()
// Draw the vpn to index nodes
r += fmt.Sprintf("\t\tsubgraph %s.hosts[\"Hosts (vpn ip to index)\"]\n", clusterName)
hosts := sortedHosts(hm.Hosts)
for _, vpnIp := range hosts {
hi := hm.Hosts[vpnIp]
r += fmt.Sprintf("\t\t\t%v.%v[\"%v\"]\n", clusterName, vpnIp, vpnIp)
lines = append(lines, fmt.Sprintf("%v.%v --> %v.%v", clusterName, vpnIp, clusterName, hi.GetLocalIndex()))
rs := hi.GetRelayState()
for _, relayIp := range rs.CopyRelayIps() {
lines = append(lines, fmt.Sprintf("%v.%v --> %v.%v", clusterName, vpnIp, clusterName, relayIp))
}
for _, relayIp := range rs.CopyRelayForIdxs() {
lines = append(lines, fmt.Sprintf("%v.%v --> %v.%v", clusterName, vpnIp, clusterName, relayIp))
}
}
r += "\t\tend\n"
// Draw the relay hostinfos
if len(hm.Relays) > 0 {
r += fmt.Sprintf("\t\tsubgraph %s.relays[\"Relays (relay index to hostinfo)\"]\n", clusterName)
for relayIndex, hi := range hm.Relays {
r += fmt.Sprintf("\t\t\t%v.%v[\"%v\"]\n", clusterName, relayIndex, relayIndex)
lines = append(lines, fmt.Sprintf("%v.%v --> %v.%v", clusterName, relayIndex, clusterName, hi.GetLocalIndex()))
}
r += "\t\tend\n"
}
// Draw the local index to relay or remote index nodes
r += fmt.Sprintf("\t\tsubgraph indexes.%s[\"Indexes (index to hostinfo)\"]\n", clusterName)
indexes := sortedIndexes(hm.Indexes)
for _, idx := range indexes {
hi, ok := hm.Indexes[idx]
if ok {
r += fmt.Sprintf("\t\t\t%v.%v[\"%v (%v)\"]\n", clusterName, idx, idx, hi.GetVpnIp())
remoteClusterName := strings.Trim(hi.GetCert().Details.Name, " ")
globalLines = append(globalLines, &edge{from: fmt.Sprintf("%v.%v", clusterName, idx), to: fmt.Sprintf("%v.%v", remoteClusterName, hi.GetRemoteIndex())})
_ = hi
}
}
r += "\t\tend\n"
// Add the edges inside this host
for _, line := range lines {
r += fmt.Sprintf("\t\t%v\n", line)
}
r += "\tend\n"
return r, globalLines
}
func sortedHosts(hosts map[netip.Addr]*nebula.HostInfo) []netip.Addr {
keys := make([]netip.Addr, 0, len(hosts))
for key := range hosts {
keys = append(keys, key)
}
sort.SliceStable(keys, func(i, j int) bool {
return keys[i].Compare(keys[j]) > 0
})
return keys
}
func sortedIndexes(indexes map[uint32]*nebula.HostInfo) []uint32 {
keys := make([]uint32, 0, len(indexes))
for key := range indexes {
keys = append(keys, key)
}
sort.SliceStable(keys, func(i, j int) bool {
return keys[i] > keys[j]
})
return keys
}
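These renders are normally driven by the router shown in the next file, but a test can also request one explicitly through the exported helpers on R. A hedged sketch, assuming r is the *router.R created with router.NewR in the test and the title text is arbitrary:

```
// Sketch: snapshot both hostmaps into the flow-log markdown, then write it out and stop further renders.
r.RenderHostmaps("after handshake", myControl, theirControl)
r.RenderFlow()
```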


@@ -4,62 +4,158 @@
package router
import (
+"context"
"fmt"
-"net"
+"net/netip"
+"os"
+"path/filepath"
"reflect"
-"strconv"
+"sort"
+"strings"
"sync"
+"testing"
+"time"
+"github.com/google/gopacket"
+"github.com/google/gopacket/layers"
"github.com/slackhq/nebula"
"github.com/slackhq/nebula/header"
"github.com/slackhq/nebula/udp"
+"golang.org/x/exp/maps"
)
type R struct {
// Simple map of the ip:port registered on a control to the control
// Basically a router, right?
-controls map[string]*nebula.Control
+controls map[netip.AddrPort]*nebula.Control
// A map for inbound packets for a control that doesn't know about this address
-inNat map[string]*nebula.Control
+inNat map[netip.AddrPort]*nebula.Control
// A last used map, if an inbound packet hit the inNat map then
// all return packets should use the same last used inbound address for the outbound sender
// map[from address + ":" + to address] => ip:port to rewrite in the udp packet to receiver
-outNat map[string]net.UDPAddr
+outNat map[string]netip.AddrPort
// A map of vpn ip to the nebula control it belongs to
vpnControls map[netip.Addr]*nebula.Control
ignoreFlows []ignoreFlow
flow []flowEntry
// A set of additional mermaid graphs to draw in the flow log markdown file
// Currently consisting only of hostmap renders
additionalGraphs []mermaidGraph
// All interactions are locked to help serialize behavior
sync.Mutex
fn string
cancelRender context.CancelFunc
t testing.TB
}
type ignoreFlow struct {
tun NullBool
messageType header.MessageType
subType header.MessageSubType
//from
//to
}
type mermaidGraph struct {
title string
content string
}
type NullBool struct {
HasValue bool
IsTrue bool
}
type flowEntry struct {
note string
packet *packet
}
type packet struct {
from *nebula.Control
to *nebula.Control
packet *udp.Packet
tun bool // a packet pulled off a tun device
rx bool // the packet was received by a udp device
}
func (p *packet) WasReceived() {
if p != nil {
p.rx = true
}
}
type ExitType int
const (
-// Keeps routing, the function will get called again on the next packet
+// KeepRouting the function will get called again on the next packet
KeepRouting ExitType = 0
-// Does not route this packet and exits immediately
+// ExitNow does not route this packet and exits immediately
ExitNow ExitType = 1
-// Routes this packet and exits immediately afterwards
+// RouteAndExit routes this packet and exits immediately afterwards
RouteAndExit ExitType = 2
)
type ExitFunc func(packet *udp.Packet, receiver *nebula.Control) ExitType
-func NewR(controls ...*nebula.Control) *R {
-r := &R{
-controls: make(map[string]*nebula.Control),
-inNat: make(map[string]*nebula.Control),
-outNat: make(map[string]net.UDPAddr),
+// NewR creates a new router to pass packets in a controlled fashion between the provided controllers.
+// The packet flow will be recorded in a file within the mermaid directory under the same name as the test.
+// Renders will occur automatically, roughly every 100ms, until a call to RenderFlow() is made
+func NewR(t testing.TB, controls ...*nebula.Control) *R {
+ctx, cancel := context.WithCancel(context.Background())
if err := os.MkdirAll("mermaid", 0755); err != nil {
panic(err)
} }
r := &R{
controls: make(map[netip.AddrPort]*nebula.Control),
vpnControls: make(map[netip.Addr]*nebula.Control),
inNat: make(map[netip.AddrPort]*nebula.Control),
outNat: make(map[string]netip.AddrPort),
flow: []flowEntry{},
ignoreFlows: []ignoreFlow{},
fn: filepath.Join("mermaid", fmt.Sprintf("%s.md", t.Name())),
t: t,
cancelRender: cancel,
}
// Try to remove our render file
os.Remove(r.fn)
for _, c := range controls {
addr := c.GetUDPAddr()
if _, ok := r.controls[addr]; ok {
-panic("Duplicate listen address: " + addr)
+panic("Duplicate listen address: " + addr.String())
}
+r.vpnControls[c.GetVpnIp()] = c
r.controls[addr] = c
}
// Spin the renderer in case we go nuts and the test never completes
go func() {
clockSource := time.NewTicker(time.Millisecond * 100)
defer clockSource.Stop()
for {
select {
case <-ctx.Done():
return
case <-clockSource.C:
r.renderHostmaps("clock tick")
r.renderFlow()
}
}
}()
return r
}
@@ -67,17 +163,234 @@ func NewR(controls ...*nebula.Control) *R {
// It does not look at the addr attached to the instance.
// If a route is used, this will behave like a NAT for the return path.
// Rewriting the source ip:port to what was last sent to from the origin
-func (r *R) AddRoute(ip net.IP, port uint16, c *nebula.Control) {
+func (r *R) AddRoute(ip netip.Addr, port uint16, c *nebula.Control) {
r.Lock()
defer r.Unlock()
-inAddr := net.JoinHostPort(ip.String(), fmt.Sprintf("%v", port))
+inAddr := netip.AddrPortFrom(ip, port)
if _, ok := r.inNat[inAddr]; ok {
-panic("Duplicate listen address inNat: " + inAddr)
+panic("Duplicate listen address inNat: " + inAddr.String())
}
r.inNat[inAddr] = c
}
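To make the NAT behavior concrete, here is a hedged sketch of a test that gives one node a public address distinct from the address it actually listens on. The addresses are illustrative, and myControl, theirControl, and theirVpnIpNet are assumed to come from the newSimpleServer helper shown earlier.

```
// Sketch: "them" sits behind a NAT, so "me" only ever learns the public address.
publicAddr := netip.MustParseAddr("192.168.0.2")
r := router.NewR(t, myControl, theirControl)
r.AddRoute(publicAddr, 4242, theirControl)
myControl.InjectLightHouseAddr(theirVpnIpNet.Addr(), netip.AddrPortFrom(publicAddr, 4242))
```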
// RenderFlow renders the packet flow seen up until now and stops further automatic renders from happening.
func (r *R) RenderFlow() {
r.cancelRender()
r.renderFlow()
}
// CancelFlowLogs stops flow logs from being tracked and destroys any logs already collected
func (r *R) CancelFlowLogs() {
r.cancelRender()
r.flow = nil
}
func (r *R) renderFlow() {
if r.flow == nil {
return
}
f, err := os.OpenFile(r.fn, os.O_CREATE|os.O_TRUNC|os.O_RDWR, 0644)
if err != nil {
panic(err)
}
var participants = map[netip.AddrPort]struct{}{}
var participantsVals []string
fmt.Fprintln(f, "```mermaid")
fmt.Fprintln(f, "sequenceDiagram")
// Assemble participants
for _, e := range r.flow {
if e.packet == nil {
continue
}
addr := e.packet.from.GetUDPAddr()
if _, ok := participants[addr]; ok {
continue
}
participants[addr] = struct{}{}
sanAddr := strings.Replace(addr.String(), ":", "-", 1)
participantsVals = append(participantsVals, sanAddr)
fmt.Fprintf(
f, " participant %s as Nebula: %s<br/>UDP: %s\n",
sanAddr, e.packet.from.GetVpnIp(), sanAddr,
)
}
if len(participantsVals) > 2 {
// Get the first and last participantVals for notes
participantsVals = []string{participantsVals[0], participantsVals[len(participantsVals)-1]}
}
// Print packets
h := &header.H{}
for _, e := range r.flow {
if e.packet == nil {
//fmt.Fprintf(f, " note over %s: %s\n", strings.Join(participantsVals, ", "), e.note)
continue
}
p := e.packet
if p.tun {
fmt.Fprintln(f, r.formatUdpPacket(p))
} else {
if err := h.Parse(p.packet.Data); err != nil {
panic(err)
}
line := "--x"
if p.rx {
line = "->>"
}
fmt.Fprintf(f,
" %s%s%s: %s(%s), index %v, counter: %v\n",
strings.Replace(p.from.GetUDPAddr().String(), ":", "-", 1),
line,
strings.Replace(p.to.GetUDPAddr().String(), ":", "-", 1),
h.TypeName(), h.SubTypeName(), h.RemoteIndex, h.MessageCounter,
)
}
}
fmt.Fprintln(f, "```")
for _, g := range r.additionalGraphs {
fmt.Fprintf(f, "## %s\n", g.title)
fmt.Fprintln(f, "```mermaid")
fmt.Fprintln(f, g.content)
fmt.Fprintln(f, "```")
}
}
// IgnoreFlow tells the router to stop recording future flows that matches the provided criteria.
// messageType and subType will target nebula underlay packets while tun will target nebula overlay packets
// NOTE: This is a very broad system, if you set tun to true then no more tun traffic will be rendered
func (r *R) IgnoreFlow(messageType header.MessageType, subType header.MessageSubType, tun NullBool) {
r.Lock()
defer r.Unlock()
r.ignoreFlows = append(r.ignoreFlows, ignoreFlow{
tun,
messageType,
subType,
})
}
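For example, a chatty test might mute lighthouse updates and overlay traffic before driving packets. A hedged sketch, assuming header.LightHouse is the underlay message type being silenced and the zero subtype matches the default:

```
// Sketch: keep the rendered flow log readable.
// Mute lighthouse messages on the underlay...
r.IgnoreFlow(header.LightHouse, 0, router.NullBool{})
// ...and stop recording overlay (tun) packets entirely.
r.IgnoreFlow(header.LightHouse, 0, router.NullBool{HasValue: true, IsTrue: true})
```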
func (r *R) RenderHostmaps(title string, controls ...*nebula.Control) {
r.Lock()
defer r.Unlock()
s := renderHostmaps(controls...)
if len(r.additionalGraphs) > 0 {
lastGraph := r.additionalGraphs[len(r.additionalGraphs)-1]
if lastGraph.content == s && lastGraph.title == title {
// Ignore this rendering if it matches the last rendering added
// This is useful if you want to track rendering changes
return
}
}
r.additionalGraphs = append(r.additionalGraphs, mermaidGraph{
title: title,
content: s,
})
}
func (r *R) renderHostmaps(title string) {
c := maps.Values(r.controls)
sort.SliceStable(c, func(i, j int) bool {
return c[i].GetVpnIp().Compare(c[j].GetVpnIp()) > 0
})
s := renderHostmaps(c...)
if len(r.additionalGraphs) > 0 {
lastGraph := r.additionalGraphs[len(r.additionalGraphs)-1]
if lastGraph.content == s {
// Ignore this rendering if it matches the last rendering added
// This is useful if you want to track rendering changes
return
}
}
r.additionalGraphs = append(r.additionalGraphs, mermaidGraph{
title: title,
content: s,
})
}
// InjectFlow can be used to record packet flow if the test is handling the routing on its own.
// The packet is assumed to have been received
func (r *R) InjectFlow(from, to *nebula.Control, p *udp.Packet) {
r.Lock()
defer r.Unlock()
r.unlockedInjectFlow(from, to, p, false)
}
func (r *R) Log(arg ...any) {
if r.flow == nil {
return
}
r.Lock()
r.flow = append(r.flow, flowEntry{note: fmt.Sprint(arg...)})
r.t.Log(arg...)
r.Unlock()
}
func (r *R) Logf(format string, arg ...any) {
if r.flow == nil {
return
}
r.Lock()
r.flow = append(r.flow, flowEntry{note: fmt.Sprintf(format, arg...)})
r.t.Logf(format, arg...)
r.Unlock()
}
// unlockedInjectFlow is used by the router to record a packet has been transmitted, the packet is returned and
// should be marked as received AFTER it has been placed on the receivers channel.
// If flow logs have been disabled this function will return nil
func (r *R) unlockedInjectFlow(from, to *nebula.Control, p *udp.Packet, tun bool) *packet {
if r.flow == nil {
return nil
}
r.renderHostmaps(fmt.Sprintf("Packet %v", len(r.flow)))
if len(r.ignoreFlows) > 0 {
var h header.H
err := h.Parse(p.Data)
if err != nil {
panic(err)
}
for _, i := range r.ignoreFlows {
if !tun {
if i.messageType == h.Type && i.subType == h.Subtype {
return nil
}
} else if i.tun.HasValue && i.tun.IsTrue {
return nil
}
}
}
fp := &packet{
from: from,
to: to,
packet: p.Copy(),
tun: tun,
}
r.flow = append(r.flow, flowEntry{packet: fp})
return fp
}
// OnceFrom will route a single packet from sender then return
// If the router doesn't have the nebula controller for that address, we panic
func (r *R) OnceFrom(sender *nebula.Control) {
@@ -96,25 +409,85 @@ func (r *R) RouteUntilTxTun(sender *nebula.Control, receiver *nebula.Control) []
select {
// Maybe we already have something on the tun for us
case b := <-tunTx:
+r.Lock()
+np := udp.Packet{Data: make([]byte, len(b))}
+copy(np.Data, b)
+r.unlockedInjectFlow(receiver, receiver, &np, true)
+r.Unlock()
return b
// Nope, lets push the sender along
case p := <-udpTx:
-outAddr := sender.GetUDPAddr()
r.Lock()
-inAddr := net.JoinHostPort(p.ToIp.String(), fmt.Sprintf("%v", p.ToPort))
-c := r.getControl(outAddr, inAddr, p)
+c := r.getControl(sender.GetUDPAddr(), p.To, p)
if c == nil {
r.Unlock()
panic("No control for udp tx")
}
+fp := r.unlockedInjectFlow(sender, c, p, false)
c.InjectUDPPacket(p)
+fp.WasReceived()
r.Unlock()
}
}
}
// RouteForAllUntilTxTun will route for everyone and return when a packet is seen on receivers tun
// If the router doesn't have the nebula controller for that address, we panic
func (r *R) RouteForAllUntilTxTun(receiver *nebula.Control) []byte {
sc := make([]reflect.SelectCase, len(r.controls)+1)
cm := make([]*nebula.Control, len(r.controls)+1)
i := 0
sc[i] = reflect.SelectCase{
Dir: reflect.SelectRecv,
Chan: reflect.ValueOf(receiver.GetTunTxChan()),
Send: reflect.Value{},
}
cm[i] = receiver
i++
for _, c := range r.controls {
sc[i] = reflect.SelectCase{
Dir: reflect.SelectRecv,
Chan: reflect.ValueOf(c.GetUDPTxChan()),
Send: reflect.Value{},
}
cm[i] = c
i++
}
for {
x, rx, _ := reflect.Select(sc)
r.Lock()
if x == 0 {
// we are the tun tx, we can exit
p := rx.Interface().([]byte)
np := udp.Packet{Data: make([]byte, len(p))}
copy(np.Data, p)
r.unlockedInjectFlow(cm[x], cm[x], &np, true)
r.Unlock()
return p
} else {
// we are a udp tx, route and continue
p := rx.Interface().(*udp.Packet)
c := r.getControl(cm[x].GetUDPAddr(), p.To, p)
if c == nil {
r.Unlock()
panic("No control for udp tx")
}
fp := r.unlockedInjectFlow(cm[x], c, p, false)
c.InjectUDPPacket(p)
fp.WasReceived()
}
r.Unlock()
}
}
// RouteExitFunc will call the whatDo func with each udp packet from sender.
// whatDo can return:
// - exitNow: the packet will not be routed and this call will return immediately
@@ -129,12 +502,10 @@ func (r *R) RouteExitFunc(sender *nebula.Control, whatDo ExitFunc) {
panic(err)
}
-outAddr := sender.GetUDPAddr()
-inAddr := net.JoinHostPort(p.ToIp.String(), fmt.Sprintf("%v", p.ToPort))
-receiver := r.getControl(outAddr, inAddr, p)
+receiver := r.getControl(sender.GetUDPAddr(), p.To, p)
if receiver == nil {
r.Unlock()
-panic("Can't route for host: " + inAddr)
+panic("Can't RouteExitFunc for host: " + p.To.String())
}
e := whatDo(p, receiver)
@@ -144,12 +515,16 @@ func (r *R) RouteExitFunc(sender *nebula.Control, whatDo ExitFunc) {
return
case RouteAndExit:
+fp := r.unlockedInjectFlow(sender, receiver, p, false)
receiver.InjectUDPPacket(p)
+fp.WasReceived()
r.Unlock()
return
case KeepRouting:
+fp := r.unlockedInjectFlow(sender, receiver, p, false)
receiver.InjectUDPPacket(p)
+fp.WasReceived()
default:
panic(fmt.Sprintf("Unknown exitFunc return: %v", e))
@@ -175,16 +550,44 @@ func (r *R) RouteUntilAfterMsgType(sender *nebula.Control, msgType header.Messag
})
}
func (r *R) RouteForAllUntilAfterMsgTypeTo(receiver *nebula.Control, msgType header.MessageType, subType header.MessageSubType) {
h := &header.H{}
r.RouteForAllExitFunc(func(p *udp.Packet, r *nebula.Control) ExitType {
if r != receiver {
return KeepRouting
}
if err := h.Parse(p.Data); err != nil {
panic(err)
}
if h.Type == msgType && h.Subtype == subType {
return RouteAndExit
}
return KeepRouting
})
}
func (r *R) InjectUDPPacket(sender, receiver *nebula.Control, packet *udp.Packet) {
r.Lock()
defer r.Unlock()
fp := r.unlockedInjectFlow(sender, receiver, packet, false)
receiver.InjectUDPPacket(packet)
fp.WasReceived()
}
// RouteForUntilAfterToAddr will route for sender and return only after it sees and sends a packet destined for toAddr
// finish can be any of the exitType values except `keepRouting`, the default value is `routeAndExit`
// If the router doesn't have the nebula controller for that address, we panic
-func (r *R) RouteForUntilAfterToAddr(sender *nebula.Control, toAddr *net.UDPAddr, finish ExitType) {
+func (r *R) RouteForUntilAfterToAddr(sender *nebula.Control, toAddr netip.AddrPort, finish ExitType) {
if finish == KeepRouting {
finish = RouteAndExit
}
r.RouteExitFunc(sender, func(p *udp.Packet, r *nebula.Control) ExitType {
-if p.ToIp.Equal(toAddr.IP) && p.ToPort == uint16(toAddr.Port) {
+if p.To == toAddr {
return finish
}
@@ -218,13 +621,10 @@ func (r *R) RouteForAllExitFunc(whatDo ExitFunc) {
r.Lock()
p := rx.Interface().(*udp.Packet)
-outAddr := cm[x].GetUDPAddr()
-inAddr := net.JoinHostPort(p.ToIp.String(), fmt.Sprintf("%v", p.ToPort))
-receiver := r.getControl(outAddr, inAddr, p)
+receiver := r.getControl(cm[x].GetUDPAddr(), p.To, p)
if receiver == nil {
r.Unlock()
-panic("Can't route for host: " + inAddr)
+panic("Can't RouteForAllExitFunc for host: " + p.To.String())
}
e := whatDo(p, receiver)
@@ -234,12 +634,16 @@ func (r *R) RouteForAllExitFunc(whatDo ExitFunc) {
return
case RouteAndExit:
+fp := r.unlockedInjectFlow(cm[x], receiver, p, false)
receiver.InjectUDPPacket(p)
+fp.WasReceived()
r.Unlock()
return
case KeepRouting:
+fp := r.unlockedInjectFlow(cm[x], receiver, p, false)
receiver.InjectUDPPacket(p)
+fp.WasReceived()
default:
panic(fmt.Sprintf("Unknown exitFunc return: %v", e))
@@ -281,43 +685,57 @@ func (r *R) FlushAll() {
p := rx.Interface().(*udp.Packet)
-outAddr := cm[x].GetUDPAddr()
-inAddr := net.JoinHostPort(p.ToIp.String(), fmt.Sprintf("%v", p.ToPort))
-receiver := r.getControl(outAddr, inAddr, p)
+receiver := r.getControl(cm[x].GetUDPAddr(), p.To, p)
if receiver == nil {
r.Unlock()
-panic("Can't route for host: " + inAddr)
+panic("Can't FlushAll for host: " + p.To.String())
}
-receiver.InjectUDPPacket(p)
r.Unlock()
}
}
// getControl performs or seeds NAT translation and returns the control for toAddr, p from fields may change
// This is an internal router function, the caller must hold the lock
-func (r *R) getControl(fromAddr, toAddr string, p *udp.Packet) *nebula.Control {
-if newAddr, ok := r.outNat[fromAddr+":"+toAddr]; ok {
-p.FromIp = newAddr.IP
-p.FromPort = uint16(newAddr.Port)
+func (r *R) getControl(fromAddr, toAddr netip.AddrPort, p *udp.Packet) *nebula.Control {
+if newAddr, ok := r.outNat[fromAddr.String()+":"+toAddr.String()]; ok {
+p.From = newAddr
}
c, ok := r.inNat[toAddr]
if ok {
-sHost, sPort, err := net.SplitHostPort(toAddr)
-if err != nil {
-panic(err)
-}
-port, err := strconv.Atoi(sPort)
-if err != nil {
-panic(err)
-}
-r.outNat[c.GetUDPAddr()+":"+fromAddr] = net.UDPAddr{
-IP: net.ParseIP(sHost),
-Port: port,
-}
+r.outNat[c.GetUDPAddr().String()+":"+fromAddr.String()] = toAddr
return c
}
return r.controls[toAddr]
}
func (r *R) formatUdpPacket(p *packet) string {
packet := gopacket.NewPacket(p.packet.Data, layers.LayerTypeIPv4, gopacket.Lazy)
v4 := packet.Layer(layers.LayerTypeIPv4).(*layers.IPv4)
if v4 == nil {
panic("not an ipv4 packet")
}
from := "unknown"
srcAddr, _ := netip.AddrFromSlice(v4.SrcIP)
if c, ok := r.vpnControls[srcAddr]; ok {
from = c.GetUDPAddr().String()
}
udp := packet.Layer(layers.LayerTypeUDP).(*layers.UDP)
if udp == nil {
panic("not a udp packet")
}
data := packet.ApplicationLayer()
return fmt.Sprintf(
" %s-->>%s: src port: %v<br/>dest port: %v<br/>data: \"%v\"\n",
strings.Replace(from, ":", "-", 1),
strings.Replace(p.to.GetUDPAddr().String(), ":", "-", 1),
udp.SrcPort,
udp.DstPort,
string(data.Payload()),
)
}

e2e/tunnels_test.go (new file)

@@ -0,0 +1,55 @@
//go:build e2e_testing
// +build e2e_testing
package e2e
import (
"testing"
"time"
"github.com/slackhq/nebula/e2e/router"
)
func TestDropInactiveTunnels(t *testing.T) {
// The goal of this test is to ensure the shortest inactivity timeout will close the tunnel on both sides
// under ideal conditions
ca, _, caKey, _ := NewTestCaCert(time.Now(), time.Now().Add(10*time.Minute), nil, nil, []string{})
myControl, myVpnIpNet, myUdpAddr, _ := newSimpleServer(ca, caKey, "me", "10.128.0.1/24", m{"tunnels": m{"drop_inactive": true, "inactivity_timeout": "5s"}})
theirControl, theirVpnIpNet, theirUdpAddr, _ := newSimpleServer(ca, caKey, "them", "10.128.0.2/24", m{"tunnels": m{"drop_inactive": true, "inactivity_timeout": "10m"}})
// Share our underlay information
myControl.InjectLightHouseAddr(theirVpnIpNet.Addr(), theirUdpAddr)
theirControl.InjectLightHouseAddr(myVpnIpNet.Addr(), myUdpAddr)
// Start the servers
myControl.Start()
theirControl.Start()
r := router.NewR(t, myControl, theirControl)
r.Log("Assert the tunnel between me and them works")
assertTunnel(t, myVpnIpNet.Addr(), theirVpnIpNet.Addr(), myControl, theirControl, r)
r.Log("Go inactive and wait for the tunnels to get dropped")
waitStart := time.Now()
for {
myIndexes := len(myControl.GetHostmap().Indexes)
theirIndexes := len(theirControl.GetHostmap().Indexes)
if myIndexes == 0 && theirIndexes == 0 {
break
}
since := time.Since(waitStart)
r.Logf("my tunnels: %v; their tunnels: %v; duration: %v", myIndexes, theirIndexes, since)
if since > time.Second*30 {
t.Fatal("Tunnel should have been declared inactive after 5 seconds and before 30 seconds")
}
time.Sleep(1 * time.Second)
r.FlushAll()
}
r.Logf("Inactive tunnels were dropped within %v", time.Since(waitStart))
myControl.Stop()
theirControl.Stop()
}


@@ -11,7 +11,7 @@ pki:
#blocklist:
# - c99d4e650533b92061b09918e838a5a0a6aaee21eed1d12fd937682865936c72
# disconnect_invalid is a toggle to force a client to be disconnected if the certificate is expired or invalid.
-#disconnect_invalid: false
+#disconnect_invalid: true
# The static host map defines a set of hosts with fixed IP addresses on the internet (or any network).
# A host can have multiple fixed IP addresses defined here, and nebula will try each when establishing a tunnel.
@@ -21,6 +21,19 @@ pki:
static_host_map:
"192.168.100.1": ["100.64.22.11:4242"]
+# The static_map config stanza can be used to configure how the static_host_map behaves.
+#static_map:
+# cadence determines how frequently DNS is re-queried for updated IP addresses when a static_host_map entry contains
+# a DNS name.
+#cadence: 30s
+# network determines the type of IP addresses to ask the DNS server for. The default is "ip4" because nodes typically
+# do not know their public IPv4 address. Connecting to the Lighthouse via IPv4 allows the Lighthouse to detect the
+# public address. Other valid options are "ip6" and "ip" (returns both.)
+#network: ip4
+# lookup_timeout is the DNS query timeout.
+#lookup_timeout: 250ms
lighthouse:
# am_lighthouse is used to enable lighthouse functionality for a node. This should ONLY be true on nodes
@@ -47,8 +60,9 @@ lighthouse:
# allowed. You can provide CIDRs here with `true` to allow and `false` to
# deny. The most specific CIDR rule applies to each remote. If all rules are
# "allow", the default will be "deny", and vice-versa. If both "allow" and
-# "deny" rules are present, then you MUST set a rule for "0.0.0.0/0" as the
-# default.
+# "deny" IPv4 rules are present, then you MUST set a rule for "0.0.0.0/0" as
+# the default. Similarly if both "allow" and "deny" IPv6 rules are present,
+# then you MUST set a rule for "::/0" as the default.
#remote_allow_list:
# Example to block IPs from this subnet from being used for remote IPs.
#"172.16.0.0/12": false
@@ -58,7 +72,7 @@ lighthouse:
#"10.0.0.0/8": false
#"10.42.42.0/24": true
-# EXPERIMENTAL: This option my change or disappear in the future.
+# EXPERIMENTAL: This option may change or disappear in the future.
# Optionally allows the definition of remote_allow_list blocks
# specific to an inside VPN IP CIDR.
#remote_allow_ranges:
@@ -81,10 +95,32 @@ lighthouse:
# Example to only advertise this subnet to the lighthouse.
#"10.0.0.0/8": true
+# advertise_addrs are routable addresses that will be included along with discovered addresses to report to the
+# lighthouse, the format is "ip:port". `port` can be `0`, in which case the actual listening port will be used in its
+# place, useful if `listen.port` is set to 0.
+# This option is mainly useful when there are static ip addresses the host can be reached at that nebula can not
+# typically discover on its own. Examples being port forwarding or multiple paths to the internet.
+#advertise_addrs:
+#- "1.1.1.1:4242"
+#- "1.2.3.4:0" # port will be replaced with the real listening port
+# EXPERIMENTAL: This option may change or disappear in the future.
+# This setting allows us to "guess" what the remote might be for a host
+# while we wait for the lighthouse response.
+#calculated_remotes:
+# For any Nebula IPs in 10.0.10.0/24, this will apply the mask and add
+# the calculated IP as an initial remote (while we wait for the response
+# from the lighthouse). Both CIDRs must have the same mask size.
+# For example, Nebula IP 10.0.10.123 will have a calculated remote of
+# 192.168.1.123
+#10.0.10.0/24:
+#- mask: 192.168.1.0/24
+# port: 4242
# Port Nebula will be listening on. The default here is 4242. For a lighthouse node, the port should be defined,
# however using port 0 will dynamically assign a port and is recommended for roaming nodes.
listen:
-# To listen on both any ipv4 and ipv6 use "[::]"
+# To listen on both any ipv4 and ipv6 use "::"
host: 0.0.0.0
port: 4242
# Sets the max number of packets to pull from the kernel for each syscall (under systems that support recvmmsg)
@@ -96,14 +132,18 @@ listen:
# max, net.core.rmem_max and net.core.wmem_max
#read_buffer: 10485760
#write_buffer: 10485760
+# By default, Nebula replies to packets it has no tunnel for with a "recv_error" packet. This packet helps speed up reconnection
+# in the case that Nebula on either side did not shut down cleanly. This response can be abused as a way to discover if Nebula is running
+# on a host though. This option lets you configure if you want to send "recv_error" packets always, never, or only to private network remotes.
+# valid values: always, never, private
+# This setting is reloadable.
+#send_recv_error: always
+# EXPERIMENTAL: This option is currently only supported on linux and may
+# change in future minor releases.
+#
# Routines is the number of thread pairs to run that consume from the tun and UDP queues.
# Currently, this defaults to 1 which means we have 1 tun queue reader and 1
# UDP queue reader. Setting this above one will set IFF_MULTI_QUEUE on the tun
# device and SO_REUSEPORT on the UDP socket to allow multiple queues.
-# This option is only supported on Linux.
#routines: 1
punchy:
@@ -115,20 +155,23 @@ punchy:
# Default is false
#respond: true
-# delays a punch response for misbehaving NATs, default is 1 second, respond must be true to take effect
+# delays a punch response for misbehaving NATs, default is 1 second.
#delay: 1s
+# set the delay before attempting punchy.respond. Default is 5 seconds. respond must be true to take effect.
+#respond_delay: 5s
# Cipher allows you to choose between the available ciphers for your network. Options are chachapoly or aes
# IMPORTANT: this value must be identical on ALL NODES/LIGHTHOUSES. We do not/will not support use of different ciphers simultaneously!
-#cipher: chachapoly
+#cipher: aes
# Preferred ranges is used to define a hint about the local network ranges, which speeds up discovering the fastest
# path to a network adjacent nebula node.
-# NOTE: the previous option "local_range" only allowed definition of a single range
-# and has been deprecated for "preferred_ranges"
+# This setting is reloadable.
#preferred_ranges: ["172.16.0.0/24"]
-# sshd can expose informational and administrative functions via ssh this is a
+# sshd can expose informational and administrative functions via ssh. This can expose informational and administrative
+# functions, and allows manual tweaking of various network settings when debugging or testing.
#sshd:
# Toggles the feature
#enabled: true
@@ -137,12 +180,29 @@ punchy:
# A file containing the ssh host private key to use
# A decent way to generate one: ssh-keygen -t ed25519 -f ssh_host_ed25519_key -N "" < /dev/null
#host_key: ./ssh_host_ed25519_key
-# A file containing a list of authorized public keys
+# Authorized users and their public keys
#authorized_users:
#- user: steeeeve
# keys can be an array of strings or single string
#keys:
#- "ssh public key string"
+# Trusted SSH CA public keys. These are the public keys of the CAs that are allowed to sign SSH keys for access.
+#trusted_cas:
+#- "ssh public key string"
+# EXPERIMENTAL: relay support for networks that can't establish direct connections.
+relay:
+# Relays are a list of Nebula IP's that peers can use to relay packets to me.
+# IPs in this list must have am_relay set to true in their configs, otherwise
+# they will reject relay requests.
+#relays:
+#- 192.168.100.1
+#- <other Nebula VPN IPs of hosts used as relays to access me>
+# Set am_relay to true to permit other hosts to list my IP in their relays config. Default false.
+am_relay: false
+# Set use_relays to false to prevent this instance from attempting to establish connections through relays.
+# default true
+use_relays: true
# Configure the private interface. Note: addr is baked into the nebula certificate
tun:
@@ -150,7 +210,7 @@ tun:
disabled: false
# Name of the device. If not set, a default will be chosen by the OS.
# For macOS: if set, must be in the form `utun[0-9]+`.
-# For FreeBSD: Required to be set, must be in the form `tun[0-9]+`.
+# For NetBSD: Required to be set, must be in the form `tun[0-9]+`
dev: nebula1
# Toggles forwarding of local broadcast packets, the address of which depends on the ip/mask encoded in pki.cert
drop_local_broadcast: false
@@ -160,26 +220,37 @@ tun:
tx_queue: 500
# Default MTU for every packet, safe setting is (and the default) 1300 for internet based traffic
mtu: 1300
# Route based MTU overrides, you have known vpn ip paths that can support larger MTUs you can increase/decrease them here
routes:
#- mtu: 8800
# route: 10.0.0.0/16
# Unsafe routes allows you to route traffic over nebula to non-nebula nodes
# Unsafe routes should be avoided unless you have hosts/services that cannot run nebula
# NOTE: The nebula certificate of the "via" node *MUST* have the "route" defined as a subnet in its certificate
-# `mtu` will default to tun mtu if this option is not specified
-# `metric` will default to 0 if this option is not specified
+# `mtu`: will default to tun mtu if this option is not specified
+# `metric`: will default to 0 if this option is not specified
+# `install`: will default to true, controls whether this route is installed in the systems routing table.
+# This setting is reloadable.
unsafe_routes:
#- route: 172.16.1.0/24
# via: 192.168.100.99
# mtu: 1300
# metric: 100
+# install: true
+# On linux only, set to true to manage unsafe routes directly on the system route table with gateway routes instead of
+# in nebula configuration files. Default false, not reloadable.
+#use_system_route_table: false
# TODO
# Configure logging level
logging:
-# panic, fatal, error, warning, info, or debug. Default is info
+# panic, fatal, error, warning, info, or debug. Default is info and is reloadable.
+#NOTE: Debug mode can log remotely controlled/untrusted data which can quickly fill a disk in some
+# scenarios. Debug logging is also CPU intensive and will decrease performance overall.
+# Only enable debug logging while actively investigating an issue.
level: info
# json or text formats currently available. Default is text
format: text
@@ -224,29 +295,63 @@ logging:
# A 100ms interval with the default 10 retries will give a handshake 5.5 seconds to resolve before timing out
#try_interval: 100ms
#retries: 20
+# query_buffer is the size of the buffer channel for querying lighthouses
+#query_buffer: 64
# trigger_buffer is the size of the buffer channel for quickly sending handshakes
# after receiving the response for lighthouse queries
#trigger_buffer: 64
+# Tunnel manager settings
+#tunnels:
+# drop_inactive controls whether inactive tunnels are maintained or dropped after the inactive_timeout period has
+# elapsed.
+# In general, it is a good idea to enable this setting. It will be enabled by default in a future release.
+# This setting is reloadable
+#drop_inactive: false
+# inactivity_timeout controls how long a tunnel MUST NOT see any inbound or outbound traffic before being considered
+# inactive and eligible to be dropped.
+# This setting is reloadable
+#inactivity_timeout: 10m
# Nebula security group configuration
firewall:
+# Action to take when a packet is not allowed by the firewall rules.
+# Can be one of:
+# `drop` (default): silently drop the packet.
+# `reject`: send a reject reply.
+# - For TCP, this will be a RST "Connection Reset" packet.
+# - For other protocols, this will be an ICMP port unreachable packet.
+outbound_action: drop
+inbound_action: drop
+# Controls the default value for local_cidr. Default is true, will be deprecated after v1.9 and defaulted to false.
+# This setting only affects nebula hosts with subnets encoded in their certificate. A nebula host acting as an
+# unsafe router with `default_local_cidr_any: true` will expose their unsafe routes to every inbound rule regardless
+# of the actual destination for the packet. Setting this to false requires each inbound rule to contain a `local_cidr`
+# if the intention is to allow traffic to flow to an unsafe route.
+#default_local_cidr_any: false
conntrack:
tcp_timeout: 12m
udp_timeout: 3m
default_timeout: 10m
-max_connections: 100000
# The firewall is default deny. There is no way to write a deny rule.
# Rules are comprised of a protocol, port, and one or more of host, group, or CIDR
-# Logical evaluation is roughly: port AND proto AND (ca_sha OR ca_name) AND (host OR group OR groups OR cidr)
+# Logical evaluation is roughly: port AND proto AND (ca_sha OR ca_name) AND (host OR group OR groups OR cidr) AND (local cidr)
# - port: Takes `0` or `any` as any, a single number `80`, a range `200-901`, or `fragment` to match second and further fragments of fragmented packets (since there is no port available).
# code: same as port but makes more sense when talking about ICMP, TODO: this is not currently implemented in a way that works, use `any`
# proto: `any`, `tcp`, `udp`, or `icmp`
# host: `any` or a literal hostname, ie `test-host`
# group: `any` or a literal group name, ie `default-group`
# groups: Same as group but accepts a list of values. Multiple values are AND'd together and a certificate would have to contain all groups to pass
-# cidr: a CIDR, `0.0.0.0/0` is any.
+# cidr: a remote CIDR, `0.0.0.0/0` is any.
+# local_cidr: a local CIDR, `0.0.0.0/0` is any. This could be used to filter destinations when using unsafe_routes.
+# Default is `any` unless the certificate contains subnets and then the default is the ip issued in the certificate
+# if `default_local_cidr_any` is false, otherwise its `any`.
# ca_name: An issuing CA name
# ca_sha: An issuing CA shasum
@@ -268,3 +373,10 @@ firewall:
groups:
- laptop
- home
+# Expose a subnet (unsafe route) to hosts with the group remote_client
+# This example assume you have a subnet of 192.168.100.1/24 or larger encoded in the certificate
+- port: 8080
+proto: tcp
+group: remote_client
+local_cidr: 192.168.100.1/24
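
Pulling the new firewall keys together, here is a hedged sketch of a fragment that combines outbound_action/inbound_action with a local_cidr-scoped inbound rule, loaded with the same config.LoadString pattern used by examples/go_service/main.go (the Go wrapper and the reject choice for inbound_action are illustrative, not part of this change):

```
package main

import (
	"log"

	"github.com/slackhq/nebula/config"
)

func main() {
	// Mirrors the additions above: explicit actions, conntrack timeouts, and an
	// inbound rule scoped to an unsafe-route subnet via local_cidr.
	const firewallYAML = `
firewall:
  outbound_action: drop
  inbound_action: reject
  conntrack:
    tcp_timeout: 12m
    udp_timeout: 3m
    default_timeout: 10m
  outbound:
    - port: any
      proto: any
      host: any
  inbound:
    - port: 8080
      proto: tcp
      group: remote_client
      local_cidr: 192.168.100.1/24
`
	var cfg config.C
	if err := cfg.LoadString(firewallYAML); err != nil {
		log.Fatalf("invalid firewall config: %v", err)
	}
	log.Println("firewall fragment parsed")
}
```

The calculated_remotes stanza above guesses an initial underlay address by combining the configured mask network with the host bits of the nebula IP. The following is an illustrative re-derivation of that arithmetic, not nebula's actual implementation:

```
package main

import (
	"fmt"
	"net/netip"
)

// calculatedRemote keeps the network bits of the configured mask and the host
// bits of the nebula IP, as described in the config comments. Sketch only.
func calculatedRemote(vpnIP netip.Addr, mask netip.Prefix, port uint16) netip.AddrPort {
	ip := vpnIP.As4()
	net4 := mask.Addr().As4()
	bits := mask.Bits()
	for i := 0; i < 4; i++ {
		var m byte
		switch {
		case bits >= (i+1)*8:
			m = 0xff
		case bits <= i*8:
			m = 0x00
		default:
			m = 0xff << (8 - (bits - i*8))
		}
		net4[i] = (net4[i] & m) | (ip[i] &^ m)
	}
	return netip.AddrPortFrom(netip.AddrFrom4(net4), port)
}

func main() {
	vpnIP := netip.MustParseAddr("10.0.10.123")
	mask := netip.MustParsePrefix("192.168.1.0/24")
	// Prints 192.168.1.123:4242, matching the example in the config comments.
	fmt.Println(calculatedRemote(vpnIP, mask, 4242))
}
```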

examples/go_service/main.go Normal file

@@ -0,0 +1,109 @@
package main
import (
"bufio"
"fmt"
"log"
"net"
"github.com/slackhq/nebula/config"
"github.com/slackhq/nebula/service"
)
func main() {
if err := run(); err != nil {
log.Fatalf("%+v", err)
}
}
func run() error {
configStr := `
tun:
user: true
static_host_map:
'192.168.100.1': ['localhost:4242']
listen:
host: 0.0.0.0
port: 4241
lighthouse:
am_lighthouse: false
interval: 60
hosts:
- '192.168.100.1'
firewall:
outbound:
# Allow all outbound traffic from this node
- port: any
proto: any
host: any
inbound:
# Allow icmp between any nebula hosts
- port: any
proto: icmp
host: any
- port: any
proto: any
host: any
pki:
ca: /home/rice/Developer/nebula-config/ca.crt
cert: /home/rice/Developer/nebula-config/app.crt
key: /home/rice/Developer/nebula-config/app.key
`
var cfg config.C
if err := cfg.LoadString(configStr); err != nil {
return err
}
svc, err := service.New(&cfg)
if err != nil {
return err
}
ln, err := svc.Listen("tcp", ":1234")
if err != nil {
return err
}
for {
conn, err := ln.Accept()
if err != nil {
log.Printf("accept error: %s", err)
break
}
defer func(conn net.Conn) {
_ = conn.Close()
}(conn)
log.Printf("got connection")
_, err = conn.Write([]byte("hello world\n"))
if err != nil {
log.Printf("write error: %s", err)
}
scanner := bufio.NewScanner(conn)
for scanner.Scan() {
message := scanner.Text()
_, err = fmt.Fprintf(conn, "echo: %q\n", message)
if err != nil {
log.Printf("write error: %s", err)
}
log.Printf("got message %q", message)
}
if err := scanner.Err(); err != nil {
log.Printf("scanner error: %s", err)
break
}
}
_ = svc.Close()
if err := svc.Wait(); err != nil {
return err
}
return nil
}
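To exercise the echo service above from another machine that is already part of the same nebula network, a plain TCP client is enough. In this sketch, 192.168.100.2 is a placeholder for whatever nebula IP the service node's certificate was issued with:

```
package main

import (
	"bufio"
	"fmt"
	"log"
	"net"
)

func main() {
	// Assumes this host is already on the same nebula network; 192.168.100.2 is
	// a placeholder for the nebula IP in the service node's certificate.
	conn, err := net.Dial("tcp", "192.168.100.2:1234")
	if err != nil {
		log.Fatalf("dial error: %s", err)
	}
	defer conn.Close()

	reader := bufio.NewReader(conn)

	// The service sends "hello world" first...
	greeting, err := reader.ReadString('\n')
	if err != nil {
		log.Fatalf("read error: %s", err)
	}
	log.Printf("greeting: %q", greeting)

	// ...and then echoes back each line we send.
	if _, err := fmt.Fprintln(conn, "ping"); err != nil {
		log.Fatalf("write error: %s", err)
	}
	echo, err := reader.ReadString('\n')
	if err != nil {
		log.Fatalf("read error: %s", err)
	}
	log.Printf("echo: %q", echo)
}
```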


@@ -1,139 +0,0 @@
# Quickstart Guide
This guide is intended to bring up a vagrant environment with 1 lighthouse and 2 generic hosts running nebula.
## Creating the virtualenv for ansible
Within the `quickstart/` directory, do the following
```
# make a virtual environment
virtualenv venv
# get into the virtualenv
source venv/bin/activate
# install ansible
pip install -r requirements.yml
```
## Bringing up the vagrant environment
A plugin that is used for the Vagrant environment is `vagrant-hostmanager`
To install, run
```
vagrant plugin install vagrant-hostmanager
```
All hosts within the Vagrantfile are brought up with
`vagrant up`
Once the boxes are up, go into the `ansible/` directory and deploy the playbook by running
`ansible-playbook playbook.yml -i inventory -u vagrant`
## Testing within the vagrant env
Once the ansible run is done, hop onto a vagrant box
`vagrant ssh generic1.vagrant`
or specifically
`ssh vagrant@<ip-address-in-vagrant-file` (password for the vagrant user on the boxes is `vagrant`)
Some quick tests once the vagrant boxes are up are to ping from `generic1.vagrant` to `generic2.vagrant` using
their respective nebula ip address.
```
vagrant@generic1:~$ ping 10.168.91.220
PING 10.168.91.220 (10.168.91.220) 56(84) bytes of data.
64 bytes from 10.168.91.220: icmp_seq=1 ttl=64 time=241 ms
64 bytes from 10.168.91.220: icmp_seq=2 ttl=64 time=0.704 ms
```
You can further verify that the allowed nebula firewall rules work by ssh'ing from 1 generic box to the other.
`ssh vagrant@<nebula-ip-address>` (password for the vagrant user on the boxes is `vagrant`)
See `/etc/nebula/config.yml` on a box for firewall rules.
To see full handshakes and hostmaps, change the logging config of `/etc/nebula/config.yml` on the vagrant boxes from
info to debug.
You can watch nebula logs by running
```
sudo journalctl -fu nebula
```
Refer to the nebula src code directory's README for further instructions on configuring nebula.
## Troubleshooting
### Is nebula up and running?
Run and verify that
```
ifconfig
```
shows you an interface with the name `nebula1` being up.
```
vagrant@generic1:~$ ifconfig nebula1
nebula1: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST> mtu 1300
inet 10.168.91.210 netmask 255.128.0.0 destination 10.168.91.210
inet6 fe80::aeaf:b105:e6dc:936c prefixlen 64 scopeid 0x20<link>
unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 txqueuelen 500 (UNSPEC)
RX packets 2 bytes 168 (168.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 11 bytes 600 (600.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
```
### Connectivity
Are you able to ping other boxes on the private nebula network?
The following are the private nebula ip addresses of the vagrant env
```
generic1.vagrant [nebula_ip] 10.168.91.210
generic2.vagrant [nebula_ip] 10.168.91.220
lighthouse1.vagrant [nebula_ip] 10.168.91.230
```
Try pinging generic1.vagrant to and from any other box using its nebula ip above.
Double check the nebula firewall rules under /etc/nebula/config.yml to make sure that connectivity is allowed for your use-case if on a specific port.
```
vagrant@lighthouse1:~$ grep -A21 firewall /etc/nebula/config.yml
firewall:
conntrack:
tcp_timeout: 12m
udp_timeout: 3m
default_timeout: 10m
max_connections: 100,000
inbound:
- proto: icmp
port: any
host: any
- proto: any
port: 22
host: any
- proto: any
port: 53
host: any
outbound:
- proto: any
port: any
host: any
```


@@ -1,40 +0,0 @@
Vagrant.require_version ">= 2.2.6"
nodes = [
{ :hostname => 'generic1.vagrant', :ip => '172.11.91.210', :box => 'bento/ubuntu-18.04', :ram => '512', :cpus => 1},
{ :hostname => 'generic2.vagrant', :ip => '172.11.91.220', :box => 'bento/ubuntu-18.04', :ram => '512', :cpus => 1},
{ :hostname => 'lighthouse1.vagrant', :ip => '172.11.91.230', :box => 'bento/ubuntu-18.04', :ram => '512', :cpus => 1},
]
Vagrant.configure("2") do |config|
config.ssh.insert_key = false
if Vagrant.has_plugin?('vagrant-cachier')
config.cache.enable :apt
else
printf("** Install vagrant-cachier plugin to speedup deploy: `vagrant plugin install vagrant-cachier`.**\n")
end
if Vagrant.has_plugin?('vagrant-hostmanager')
config.hostmanager.enabled = true
config.hostmanager.manage_host = true
config.hostmanager.include_offline = true
else
config.vagrant.plugins = "vagrant-hostmanager"
end
nodes.each do |node|
config.vm.define node[:hostname] do |node_config|
node_config.vm.box = node[:box]
node_config.vm.hostname = node[:hostname]
node_config.vm.network :private_network, ip: node[:ip]
node_config.vm.provider :virtualbox do |vb|
vb.memory = node[:ram]
vb.cpus = node[:cpus]
vb.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
vb.customize ['guestproperty', 'set', :id, '/VirtualBox/GuestAdd/VBoxService/--timesync-set-threshold', 10000]
end
end
end
end


@@ -1,4 +0,0 @@
[defaults]
host_key_checking = False
private_key_file = ~/.vagrant.d/insecure_private_key
become = yes


@@ -1,21 +0,0 @@
#!/usr/bin/python
class FilterModule(object):
def filters(self):
return {
'to_nebula_ip': self.to_nebula_ip,
'map_to_nebula_ips': self.map_to_nebula_ips,
}
def to_nebula_ip(self, ip_str):
ip_list = list(map(int, ip_str.split(".")))
ip_list[0] = 10
ip_list[1] = 168
ip = '.'.join(map(str, ip_list))
return ip
def map_to_nebula_ips(self, ip_strs):
ip_list = [ self.to_nebula_ip(ip_str) for ip_str in ip_strs ]
ips = ', '.join(ip_list)
return ips


@@ -1,11 +0,0 @@
[all]
generic1.vagrant
generic2.vagrant
lighthouse1.vagrant
[generic]
generic1.vagrant
generic2.vagrant
[lighthouse]
lighthouse1.vagrant


@@ -1,23 +0,0 @@
---
- name: test connection to vagrant boxes
hosts: all
tasks:
- debug: msg=ok
- name: build nebula binaries locally
connection: local
hosts: localhost
tasks:
- command: chdir=../../../ make build/linux-amd64/"{{ item }}"
with_items:
- nebula
- nebula-cert
tags:
- build-nebula
- name: install nebula on all vagrant hosts
hosts: all
become: yes
gather_facts: yes
roles:
- nebula


@@ -1,3 +0,0 @@
---
# defaults file for nebula
nebula_config_directory: "/etc/nebula/"


@@ -1,13 +0,0 @@
[Unit]
Description=nebula
Wants=basic.target
After=basic.target network.target
[Service]
SyslogIdentifier=nebula
ExecReload=/bin/kill -HUP $MAINPID
ExecStart=/usr/local/bin/nebula -config /etc/nebula/config.yml
Restart=always
[Install]
WantedBy=multi-user.target


@@ -1,5 +0,0 @@
-----BEGIN NEBULA CERTIFICATE-----
CkAKDm5lYnVsYSB0ZXN0IENBKNXC1NYFMNXIhO0GOiCmVYeZ9tkB4WEnawmkrca+
hsAg9otUFhpAowZeJ33KVEABEkAORybHQUUyVFbKYzw0JHfVzAQOHA4kwB1yP9IV
KpiTw9+ADz+wA+R5tn9B+L8+7+Apc+9dem4BQULjA5mRaoYN
-----END NEBULA CERTIFICATE-----


@@ -1,4 +0,0 @@
-----BEGIN NEBULA ED25519 PRIVATE KEY-----
FEXZKMSmg8CgIODR0ymUeNT3nbnVpMi7nD79UgkCRHWmVYeZ9tkB4WEnawmkrca+
hsAg9otUFhpAowZeJ33KVA==
-----END NEBULA ED25519 PRIVATE KEY-----


@@ -1,5 +0,0 @@
---
# handlers file for nebula
- name: restart nebula
service: name=nebula state=restarted


@@ -1,62 +0,0 @@
---
# tasks file for nebula
- name: get the vagrant network interface and set fact
set_fact:
vagrant_ifce: "ansible_{{ ansible_interfaces | difference(['lo',ansible_default_ipv4.alias]) | sort | first }}"
tags:
- nebula-conf
- name: install built nebula binary
copy: src="../../../../../build/linux-amd64/{{ item }}" dest="/usr/local/bin" mode=0755
with_items:
- nebula
- nebula-cert
- name: create nebula config directory
file: path="{{ nebula_config_directory }}" state=directory mode=0755
- name: temporarily copy over root.crt and root.key to sign
copy: src={{ item }} dest=/opt/{{ item }}
with_items:
- vagrant-test-ca.key
- vagrant-test-ca.crt
- name: remove previously signed host certificate
file: dest=/etc/nebula/{{ item }} state=absent
with_items:
- host.crt
- host.key
- name: sign using the root key
command: nebula-cert sign -ca-crt /opt/vagrant-test-ca.crt -ca-key /opt/vagrant-test-ca.key -duration 4320h -groups vagrant -ip {{ hostvars[inventory_hostname][vagrant_ifce]['ipv4']['address'] | to_nebula_ip }}/9 -name {{ ansible_hostname }}.nebula -out-crt /etc/nebula/host.crt -out-key /etc/nebula/host.key
- name: remove root.key used to sign
file: dest=/opt/{{ item }} state=absent
with_items:
- vagrant-test-ca.key
- name: write the content of the trusted ca certificate
copy: src="vagrant-test-ca.crt" dest="/etc/nebula/vagrant-test-ca.crt"
notify: restart nebula
- name: Create config directory
file: path="{{ nebula_config_directory }}" owner=root group=root mode=0755 state=directory
- name: nebula config
template: src=config.yml.j2 dest="/etc/nebula/config.yml" mode=0644 owner=root group=root
notify: restart nebula
tags:
- nebula-conf
- name: nebula systemd
copy: src=systemd.nebula.service dest="/etc/systemd/system/nebula.service" mode=0644 owner=root group=root
register: addconf
notify: restart nebula
- name: maybe reload systemd
shell: systemctl daemon-reload
when: addconf.changed
- name: nebula running
service: name="nebula" state=started enabled=yes


@@ -1,86 +0,0 @@
pki:
ca: /etc/nebula/vagrant-test-ca.crt
cert: /etc/nebula/host.crt
key: /etc/nebula/host.key
# Port Nebula will be listening on
listen:
host: 0.0.0.0
port: 4242
# sshd can expose informational and administrative functions via ssh
sshd:
# Toggles the feature
enabled: true
# Host and port to listen on
listen: 127.0.0.1:2222
# A file containing the ssh host private key to use
host_key: /etc/ssh/ssh_host_ed25519_key
# A file containing a list of authorized public keys
authorized_users:
{% for user in nebula_users %}
- user: {{ user.name }}
keys:
{% for key in user.ssh_auth_keys %}
- "{{ key }}"
{% endfor %}
{% endfor %}
local_range: 10.168.0.0/16
static_host_map:
# lighthouse
{{ hostvars[groups['lighthouse'][0]][vagrant_ifce]['ipv4']['address'] | to_nebula_ip }}: ["{{ hostvars[groups['lighthouse'][0]][vagrant_ifce]['ipv4']['address']}}:4242"]
default_route: "0.0.0.0"
lighthouse:
{% if 'lighthouse' in group_names %}
am_lighthouse: true
serve_dns: true
{% else %}
am_lighthouse: false
{% endif %}
interval: 60
{% if 'generic' in group_names %}
hosts:
- {{ hostvars[groups['lighthouse'][0]][vagrant_ifce]['ipv4']['address'] | to_nebula_ip }}
{% endif %}
# Configure the private interface
tun:
dev: nebula1
# Sets MTU of the tun dev.
# MTU of the tun must be smaller than the MTU of the eth0 interface
mtu: 1300
# TODO
# Configure logging level
logging:
level: info
format: json
firewall:
conntrack:
tcp_timeout: 12m
udp_timeout: 3m
default_timeout: 10m
max_connections: 100,000
inbound:
- proto: icmp
port: any
host: any
- proto: any
port: 22
host: any
{% if "lighthouse" in groups %}
- proto: any
port: 53
host: any
{% endif %}
outbound:
- proto: any
port: any
host: any


@@ -1,7 +0,0 @@
---
# vars file for nebula
nebula_users:
- name: user1
ssh_auth_keys:
- "ed25519 place-your-ssh-public-key-here"

Some files were not shown because too many files have changed in this diff.