Compare commits


44 Commits

Author SHA1 Message Date
Wade Simmons
a800a48857 v1.6.1 (#752)
Update CHANGELOG for Nebula v1.6.1
2022-09-26 13:38:18 -04:00
Nate Brown
4c0ae3df5e Refuse to process double encrypted packets (#741) 2022-09-19 12:47:48 -05:00
Nate Brown
feb3e1317f Add a simple benchmark to e2e tests (#739) 2022-09-01 09:44:58 -05:00
Jon Rafkind
c2259f14a7 explicitly reload config from ssh command (#725) 2022-08-08 12:44:09 -05:00
Nate Brown
b1eeb5f3b8 Support unsafe_routes on mobile again (#729) 2022-08-05 09:58:10 -05:00
Nate Brown
2adf0ca1d1 Use issue templates to improve bug reports (#726) 2022-07-29 12:57:05 -05:00
Nate Brown
92dfccf01a v1.6.0 (#701)
Update CHANGELOG for Nebula v1.6.0

Co-authored-by: Wade Simmons <wsimmons@slack-corp.com>
Co-authored-by: brad-defined <77982333+brad-defined@users.noreply.github.com>
2022-06-30 16:15:18 -04:00
brad-defined
38e495e0d2 Remove EXPERIMENTAL text from routines example config. (#702) 2022-06-30 11:20:41 -04:00
brad-defined
78a0255c91 typeos (#700) 2022-06-29 11:19:20 -04:00
brad-defined
169cdbbd35 Immediately forward packets received on the nebula TUN device from self to self (#501)
* Immediately forward packets received on the nebula TUN device with a destination of our Nebula VPN IP right back out that same TUN device on MacOS.
2022-06-27 14:36:10 -04:00
Nate Brown
0d1ee4214a Add relay e2e tests and output some mermaid sequence diagrams (#691) 2022-06-27 12:33:29 -05:00
Wade Simmons
7b9287709c add listen.send_recv_error config option (#670)
By default, Nebula replies to packets it has no tunnel for with a `recv_error` packet. This packet helps speed up re-connection
in the case that Nebula on either side did not shut down cleanly. This response can be abused as a way to discover if Nebula is running
on a host though. This option lets you configure if you want to send `recv_error` packets always, never, or only to private network remotes.
Valid values: `always`, `never`, `private`.

This setting is reloadable with SIGHUP.
2022-06-27 12:37:54 -04:00
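
For illustration only, a minimal Go sketch of how the three modes could gate a `recv_error` reply; the function names here are hypothetical, not the actual Nebula implementation:

    package main

    import (
        "fmt"
        "net"
    )

    // shouldSendRecvError decides whether to reply with a recv_error packet,
    // given the configured mode ("always", "never", or "private") and the
    // address the unrecognized packet came from.
    func shouldSendRecvError(mode string, remote net.IP) bool {
        switch mode {
        case "never":
            return false
        case "private":
            return isPrivate(remote)
        default: // "always", the default behavior described above
            return true
        }
    }

    // isPrivate reports whether ip falls within an RFC 1918 range.
    func isPrivate(ip net.IP) bool {
        for _, cidr := range []string{"10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16"} {
            _, n, _ := net.ParseCIDR(cidr)
            if n.Contains(ip) {
                return true
            }
        }
        return false
    }

    func main() {
        fmt.Println(shouldSendRecvError("private", net.ParseIP("192.168.100.2"))) // true
        fmt.Println(shouldSendRecvError("private", net.ParseIP("203.0.113.9")))   // false
    }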
Wade Simmons
85ec807b7e reserve NebulaHandshakeDetails fields for multiport (#674)
We are currently testing changes for multiport (related to #497) that
use fields 6 and 7 in the protobuf. Reserve these fields so that when
we eventually open the PR we remain backwards compatible with any future
changes.
2022-06-27 12:07:05 -04:00
John Maguire
a0b280621d Remove firewall.conntrack.max_connections from examples (#684) 2022-06-23 10:29:54 -05:00
Caleb Jasik
527f953c2c Remove x509 config loading code (#685) 2022-06-23 10:27:34 -05:00
brad-defined
1a7c575011 Relay (#678)
Co-authored-by: Wade Simmons <wsimmons@slack-corp.com>
2022-06-21 13:35:23 -05:00
Don Stephan
332fa2b825 fix panic in handleInvalidCertificate (#675)
* fix panic in handleInvalidCertificate

When HandleMonitorTick fires, the hostmap can be nil, which causes a panic when trying to clean up the hostmap in handleInvalidCertificate. This fix simply stops the invalidation from continuing if the hostmap doesn't exist.

* removed conditional for disconnectInvalid in HandleDeletionTick
2022-05-16 13:29:57 -04:00
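
As a self-contained illustration of that guard (stand-in types, not the real connectionManager code), the idea is simply to bail out when there is nothing to clean up:

    package main

    import "fmt"

    // Minimal stand-ins for the real HostMap/HostInfo types; illustrative only.
    type HostInfo struct{ vpnIp string }
    type HostMap struct{ Hosts map[string]*HostInfo }

    // handleInvalidCertificate sketches the fix: return early if the hostmap
    // (or the host entry) does not exist, instead of panicking during cleanup.
    func handleInvalidCertificate(hm *HostMap, h *HostInfo) bool {
        if hm == nil || h == nil {
            return false // nothing to invalidate or clean up
        }
        delete(hm.Hosts, h.vpnIp)
        return true
    }

    func main() {
        // The hostmap is nil, as can happen when HandleMonitorTick fires early.
        fmt.Println(handleInvalidCertificate(nil, &HostInfo{vpnIp: "192.168.100.2"})) // false, no panic
    }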
Wade Simmons
45d1d2b6c6 Update dependencies - 2022-04 (#664)
Updated  github.com/kardianos/service         https://github.com/kardianos/service/compare/v1.2.0...v1.2.1
    Updated  github.com/miekg/dns                 https://github.com/miekg/dns/compare/v1.1.43...v1.1.48
    Updated  github.com/prometheus/client_golang  https://github.com/prometheus/client_golang/compare/v1.11.0...v1.12.1
    Updated  github.com/prometheus/common         https://github.com/prometheus/common/compare/v0.32.1...v0.33.0
    Updated  github.com/stretchr/testify          https://github.com/stretchr/testify/compare/v1.7.0...v1.7.1
    Updated  golang.org/x/crypto                  5770296d90...ae2d96664a
    Updated  golang.org/x/net                     69e39bad7d...749bd193bc
    Updated  golang.org/x/sys                     7861aae155...289d7a0edf
    Updated  golang.zx2c4.com/wireguard/windows   v0.5.1...v0.5.3
    Updated  google.golang.org/protobuf           v1.27.1...v1.28.0
2022-04-18 12:12:25 -04:00
Wade Simmons
3913062c43 build and test with go1.18 (#656)
- https://go.dev/doc/go1.18
2022-04-05 17:08:00 -04:00
Wade Simmons
b38bd36766 fix connection manager check when disconnect_invalid set (#658)
This restores the hostMap.QueryVpnIP block to how it looked before #370
was merged. I'm not sure why the patch from #370 wanted to continue on
if there was no match found in the hostmap, since there isn't anything
to do at that point (the tunnel has already been closed).

This was causing a crash because the handleInvalidCertificate check
expects the hostinfo to be passed in (but it is nil since there was no
hostinfo in the hostmap).

Fixes: #657
2022-04-04 13:38:36 -04:00
Nate Brown
d85e24f49f Allow for self reported ips to the lighthouse (#650) 2022-04-04 12:35:23 -05:00
bitshop
7672c7087a Add to build all windows-arm64 / bin-windows-arm64 build option (#638)
* Add to build all windows-arm64 / bin-winarm64 builds

* update release to build for windows-arm64

* cleanup

Co-authored-by: Wade Simmons <wsimmons@slack-corp.com>
2022-03-18 13:23:10 -04:00
Caleb Jasik
730a5c4a23 Update link to nebula docs (#655) 2022-03-18 11:15:16 -04:00
brad-defined
03498a0cb2 Make nebula advertise its dynamic port to lighthouses (#653) 2022-03-15 18:03:56 -05:00
Nate Brown
312a01dc09 Lighthouse reload support (#649)
Co-authored-by: John Maguire <contact@johnmaguire.me>
2022-03-14 12:35:13 -05:00
Nate Brown
bbe0a032bb Fix windows unsafe_routes regression (#648) 2022-03-09 13:23:29 -06:00
Wade Simmons
b5b9d33ee7 v1.5.2 (#612)
Update CHANGELOG for Nebula v1.5.2
2021-12-14 16:48:56 -05:00
Wade Simmons
e434ba6523 fix unsafe routes darwin (#610)
With Nebula 1.4.0, if you create an unsafe_route that has a collision with an existing route on the system, the unsafe_route will be silently dropped (and the existing system route remains).

With Nebula 1.5.0, this same situation will cause Nebula to fail to start with an error (EEXIST).

This change restores the Nebula 1.4.0 behavior (but with a WARN log as well).
2021-12-14 11:52:49 -05:00
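
A hedged sketch of the restored behavior (addRoute is a hypothetical stand-in for the platform route call): on EEXIST, warn and keep the existing system route rather than failing to start.

    package main

    import (
        "errors"
        "log"
        "syscall"
    )

    // addRoute is a hypothetical stand-in for installing an unsafe_route; here
    // it pretends the route already exists on the system.
    func addRoute(cidr string) error {
        return syscall.EEXIST
    }

    func main() {
        err := addRoute("10.0.0.0/8")
        if errors.Is(err, syscall.EEXIST) {
            // 1.4.0-style behavior, now with a warning instead of silence.
            log.Printf("WARN: unsafe_route collides with an existing route, leaving the existing route in place: %v", err)
        } else if err != nil {
            // 1.5.0 failed to start here for any error, including EEXIST.
            log.Fatalf("failed to add unsafe_route: %v", err)
        }
    }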
Wade Simmons
068a93d1f4 fix makeRouteTree allowMTU (#611)
With the previous implementation, we check if route.MTU is greater than zero,
but it will always be because we set it to the default MTU in
parseUnsafeRoutes. This change leaves it as zero in parseUnsafeRoutes so
it can be examined later.
2021-12-14 11:52:28 -05:00
Nate Brown
15fdabc3ab v1.5.1 (#606)
Update CHANGELOG for Nebula v1.5.1
2021-12-13 20:43:25 -05:00
forfuncsake
1110756f0f Allow setup of a CA pool from bytes that contain expired certs (#599)
Co-authored-by: Nate Brown <nbrown.us@gmail.com>
2021-12-09 21:24:56 -06:00
Nate Brown
e31006d546 Be more clear about ipv4 in nebula-cert (#604) 2021-12-07 21:40:30 -06:00
Wade Simmons
949ec78653 don't set ConnectionState to nil (#590)
* don't set ConnectionState to nil

We might have packets processing in another thread, so we can't safely
just set this to nil. Since we removed it from the hostmaps, the next
packets to process should start the handshake over again.

I believe this comment is outdated or incorrect: since the next
handshake will start over with a new HostInfo, I don't think there is
any way a counter reuse could happen:

> We must null the connectionstate or a counter reuse may happen

Here is a panic we saw that I think is related:

    panic: runtime error: invalid memory address or nil pointer dereference
    [signal SIGSEGV: segmentation violation code=0x1 addr=0x20 pc=0x93a037]
    goroutine 59 [running, locked to thread]:
    github.com/slackhq/nebula.(*Firewall).Drop(...)
            github.com/slackhq/nebula/firewall.go:380
    github.com/slackhq/nebula.(*Interface).consumeInsidePacket(...)
            github.com/slackhq/nebula/inside.go:59
    github.com/slackhq/nebula.(*Interface).listenIn(...)
            github.com/slackhq/nebula/interface.go:233
    created by github.com/slackhq/nebula.(*Interface).run
            github.com/slackhq/nebula/interface.go:191

* use closeTunnel
2021-12-06 14:09:05 -05:00
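
A self-contained sketch of the safer teardown described above (stand-in types, not the actual Nebula structures): remove the host from the map so new packets start a fresh handshake, but leave the shared ConnectionState intact for any packet already being processed on another thread.

    package main

    import (
        "fmt"
        "sync"
    )

    type ConnectionState struct{ ready bool }

    type HostInfo struct {
        vpnIp string
        conn  *ConnectionState
    }

    type HostMap struct {
        sync.RWMutex
        Hosts map[string]*HostInfo
    }

    // closeTunnel removes the host from the map but deliberately does NOT set
    // h.conn = nil, so concurrent readers of the old HostInfo stay safe.
    func (hm *HostMap) closeTunnel(h *HostInfo) {
        hm.Lock()
        delete(hm.Hosts, h.vpnIp)
        hm.Unlock()
    }

    func main() {
        h := &HostInfo{vpnIp: "192.168.100.2", conn: &ConnectionState{ready: true}}
        hm := &HostMap{Hosts: map[string]*HostInfo{h.vpnIp: h}}
        hm.closeTunnel(h)
        fmt.Println(h.conn != nil, len(hm.Hosts)) // true 0
    }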
Wade Simmons
127a116bfd update golang.org/x/crypto (#603)
> Version v0.0.0-20211202192323-5770296d904e of golang.org/x/crypto fixes a vulnerability in the golang.org/x/crypto/ssh package which allowed unauthenticated clients to cause a panic in SSH servers.
>
> This issue was discovered and reported by Rod Hynes, Psiphon Inc., and is tracked as CVE-2021-43565 and Issue golang/go#49932.

    Updated  golang.org/x/crypto  089bfa5675...5770296d90
    Updated  golang.org/x/net     4a448f8816...69e39bad7d
2021-12-06 14:07:05 -05:00
Wade Simmons
befce3f990 fix crash with -test (#602)
When running in `-test` mode, `tun` is set to nil. So we should move the
defer into the `!configTest` if block.

    panic: runtime error: invalid memory address or nil pointer dereference
    [signal SIGSEGV: segmentation violation code=0x1 addr=0x28 pc=0x54855c]

    goroutine 1 [running]:
    github.com/slackhq/nebula.Main.func3(0x4000135e80, {0x0, 0x0})
            github.com/slackhq/nebula/main.go:176 +0x2c
    github.com/slackhq/nebula.Main(0x400022e060, 0x1, {0x76faa0, 0x5}, 0x4000230000, 0x0)
            github.com/slackhq/nebula/main.go:316 +0x2414
    main.main()
            github.com/slackhq/nebula/cmd/nebula/main.go:54 +0x540
2021-12-06 14:06:16 -05:00
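
A minimal sketch of that fix (device is a stand-in for the tun type, not the actual main.go): register the deferred Close only inside the block where the device is created, so `-test` mode never touches a nil pointer.

    package main

    import "fmt"

    // device stands in for the tun device, which is never created in -test mode.
    type device struct{}

    func (d *device) Close() error {
        fmt.Println("tun closed")
        return nil
    }

    func run(configTest bool) {
        if !configTest {
            tun := &device{}
            // The defer lives inside this block, so it is only registered when
            // tun is non-nil; deferred calls still run when run() returns.
            defer tun.Close()
        }
        // ... remaining startup; -test mode validates config and returns ...
    }

    func main() {
        run(true)  // -test: no tun, no deferred Close, no nil dereference
        run(false) // normal: prints "tun closed" on return
    }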
Wade Simmons
f60ed2b36d overlay: fix tun.RouteFor getting *net.IP (#595)
tun.RouteFor expects the routeTree to have an iputil.VpnIp inside of it
instead of a *net.IP.
2021-12-06 09:35:31 -05:00
Nate Brown
48c47f5841 Warn if no lighthouses were configured on a non lighthouse node (#587) 2021-11-30 10:31:33 -06:00
Wade Simmons
75306487c5 fix wintun package to have // +build comments (#598)
Without these comments, gofmt 1.16.9 will complain. Since we otherwise
still support building with go1.16, let's add the comments to make it
easier to compile and gofmt.

Related: #588
2021-11-30 11:14:15 -05:00
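
For reference, the dual build-constraint style this refers to, shown as a sketch with an illustrative package name: the legacy `// +build` line is kept alongside the newer `//go:build` line so that both go1.16 and go1.17+ toolchains are satisfied.

    //go:build windows
    // +build windows

    package wintun // illustrative; only the pair of constraints above matters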
Nate Brown
78d0d46bae Remove WriteRaw, cidrTree -> routeTree to better describe its purpose, remove redundancy from field names (#582) 2021-11-12 12:47:09 -06:00
Nate Brown
467e605d5e Push route handling into overlay, a few more nits fixed (#581) 2021-11-12 11:19:28 -06:00
Nate Brown
2f1f0d602f Cleanup most of the remaining nits (#578) 2021-11-12 10:47:36 -06:00
Nate Brown
e07524a654 Move all of tun into overlay (#577) 2021-11-11 16:37:29 -06:00
Nate Brown
88ce0edf76 Start the overlay package with the old Inside interface (#576) 2021-11-10 21:52:26 -06:00
Nate Brown
4453964e34 Move util to test, contextual errors to util (#575) 2021-11-10 21:47:38 -06:00
105 changed files with 4854 additions and 1760 deletions

.github/ISSUE_TEMPLATE/bug-report.yml (new file)

@@ -0,0 +1,57 @@
name: "\U0001F41B Bug Report"
description: Report an issue or possible bug
title: "\U0001F41B BUG:"
labels: []
assignees: []
body:
- type: markdown
attributes:
value: |
### Thank you for taking the time to file a bug report!
Please fill out this form as completely as possible.
- type: input
id: version
attributes:
label: What version of `nebula` are you using?
placeholder: 0.0.0
validations:
required: true
- type: input
id: os
attributes:
label: What operating system are you using?
description: iOS and Android specific issues belong in the [mobile_nebula](https://github.com/DefinedNet/mobile_nebula) repo.
placeholder: Linux, Mac, Windows
validations:
required: true
- type: textarea
id: description
attributes:
label: Describe the Bug
description: A clear and concise description of what the bug is.
validations:
required: true
- type: textarea
id: logs
attributes:
label: Logs from affected hosts
description: |
Provide logs from all affected hosts during the time of the issue.
Improve formatting by using <code>```</code> at the beginning and end of each log block.
validations:
required: false
- type: textarea
id: configs
attributes:
label: Config files from affected hosts
description: |
Provide config files for all affected hosts.
Improve formatting by using <code>```</code> at the beginning and end of each config file.
validations:
required: false

.github/ISSUE_TEMPLATE/config.yml (new file)

@@ -0,0 +1,9 @@
blank_issues_enabled: true
contact_links:
- name: 📘 Documentation
url: https://www.defined.net/nebula/
about: Review documentation.
- name: 💁 Support/Chat
url: https://join.slack.com/t/nebulaoss/shared_invite/enQtOTA5MDI4NDg3MTg4LTkwY2EwNTI4NzQyMzc0M2ZlODBjNWI3NTY1MzhiOThiMmZlZjVkMTI0NGY4YTMyNjUwMWEyNzNkZTJmYzQxOGU
about: 'This issue tracker is not for support questions. Join us on Slack for assistance!'


@@ -14,10 +14,10 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Set up Go 1.17
- name: Set up Go 1.18
uses: actions/setup-go@v2
with:
go-version: 1.17
go-version: 1.18
id: go
- name: Check out code into the Go module directory
@@ -26,9 +26,9 @@ jobs:
- uses: actions/cache@v2
with:
path: ~/go/pkg/mod
key: ${{ runner.os }}-gofmt1.17-${{ hashFiles('**/go.sum') }}
key: ${{ runner.os }}-gofmt1.18-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-gofmt1.17-
${{ runner.os }}-gofmt1.18-
- name: Install goimports
run: |


@@ -10,10 +10,10 @@ jobs:
name: Build Linux All
runs-on: ubuntu-latest
steps:
- name: Set up Go 1.17
- name: Set up Go 1.18
uses: actions/setup-go@v2
with:
go-version: 1.17
go-version: 1.18
- name: Checkout code
uses: actions/checkout@v2
@@ -31,13 +31,13 @@ jobs:
path: release
build-windows:
name: Build Windows amd64
name: Build Windows
runs-on: windows-latest
steps:
- name: Set up Go 1.17
- name: Set up Go 1.18
uses: actions/setup-go@v2
with:
go-version: 1.17
go-version: 1.18
- name: Checkout code
uses: actions/checkout@v2
@@ -45,8 +45,14 @@ jobs:
- name: Build
run: |
echo $Env:GITHUB_REF.Substring(11)
go build -trimpath -ldflags "-X main.Build=$($Env:GITHUB_REF.Substring(11))" -o build\nebula.exe ./cmd/nebula-service
go build -trimpath -ldflags "-X main.Build=$($Env:GITHUB_REF.Substring(11))" -o build\nebula-cert.exe ./cmd/nebula-cert
mkdir build\windows-amd64
$Env:GOARCH = "amd64"
go build -trimpath -ldflags "-X main.Build=$($Env:GITHUB_REF.Substring(11))" -o build\windows-amd64\nebula.exe ./cmd/nebula-service
go build -trimpath -ldflags "-X main.Build=$($Env:GITHUB_REF.Substring(11))" -o build\windows-amd64\nebula-cert.exe ./cmd/nebula-cert
mkdir build\windows-arm64
$Env:GOARCH = "arm64"
go build -trimpath -ldflags "-X main.Build=$($Env:GITHUB_REF.Substring(11))" -o build\windows-arm64\nebula.exe ./cmd/nebula-service
go build -trimpath -ldflags "-X main.Build=$($Env:GITHUB_REF.Substring(11))" -o build\windows-arm64\nebula-cert.exe ./cmd/nebula-cert
mkdir build\dist\windows
mv dist\windows\wintun build\dist\windows\
@@ -62,10 +68,10 @@ jobs:
HAS_SIGNING_CREDS: ${{ secrets.AC_USERNAME != '' }}
runs-on: macos-11
steps:
- name: Set up Go 1.17
- name: Set up Go 1.18
uses: actions/setup-go@v2
with:
go-version: 1.17
go-version: 1.18
- name: Checkout code
uses: actions/checkout@v2
@@ -117,7 +123,10 @@ jobs:
- name: Zip Windows
run: |
cd windows-latest
cp windows-amd64/* .
zip -r nebula-windows-amd64.zip nebula.exe nebula-cert.exe dist
cp windows-arm64/* .
zip -r nebula-windows-arm64.zip nebula.exe nebula-cert.exe dist
- name: Create sha256sum
run: |
@@ -127,9 +136,12 @@ jobs:
cd $dir
if [ "$dir" = windows-latest ]
then
sha256sum <nebula.exe | sed 's=-$=nebula-windows-amd64.zip/nebula.exe='
sha256sum <nebula-cert.exe | sed 's=-$=nebula-windows-amd64.zip/nebula-cert.exe='
sha256sum <windows-amd64/nebula.exe | sed 's=-$=nebula-windows-amd64.zip/nebula.exe='
sha256sum <windows-amd64/nebula-cert.exe | sed 's=-$=nebula-windows-amd64.zip/nebula-cert.exe='
sha256sum <windows-arm64/nebula.exe | sed 's=-$=nebula-windows-arm64.zip/nebula.exe='
sha256sum <windows-arm64/nebula-cert.exe | sed 's=-$=nebula-windows-arm64.zip/nebula-cert.exe='
sha256sum nebula-windows-amd64.zip
sha256sum nebula-windows-arm64.zip
elif [ "$dir" = darwin-latest ]
then
sha256sum <nebula-darwin.zip | sed 's=-$=nebula-darwin.zip='
@@ -190,6 +202,16 @@ jobs:
asset_name: nebula-windows-amd64.zip
asset_content_type: application/zip
- name: Upload windows-arm64
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./windows-latest/nebula-windows-arm64.zip
asset_name: nebula-windows-arm64.zip
asset_content_type: application/zip
- name: Upload linux-amd64
uses: actions/upload-release-asset@v1.0.1
env:


@@ -18,10 +18,10 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Set up Go 1.17
- name: Set up Go 1.18
uses: actions/setup-go@v2
with:
go-version: 1.17
go-version: 1.18
id: go
- name: Check out code into the Go module directory
@@ -30,9 +30,9 @@ jobs:
- uses: actions/cache@v2
with:
path: ~/go/pkg/mod
key: ${{ runner.os }}-go1.17-${{ hashFiles('**/go.sum') }}
key: ${{ runner.os }}-go1.18-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go1.17-
${{ runner.os }}-go1.18-
- name: build
run: make bin-docker
@@ -45,4 +45,12 @@ jobs:
working-directory: ./.github/workflows/smoke
run: ./smoke.sh
- name: setup relay docker image
working-directory: ./.github/workflows/smoke
run: ./build-relay.sh
- name: run smoke relay
working-directory: ./.github/workflows/smoke
run: ./smoke-relay.sh
timeout-minutes: 10

.github/workflows/smoke/build-relay.sh (new executable file)

@@ -0,0 +1,44 @@
#!/bin/sh
set -e -x
rm -rf ./build
mkdir ./build
(
cd build
cp ../../../../build/linux-amd64/nebula .
cp ../../../../build/linux-amd64/nebula-cert .
HOST="lighthouse1" AM_LIGHTHOUSE=true ../genconfig.sh >lighthouse1.yml <<EOF
relay:
am_relay: true
EOF
export LIGHTHOUSES="192.168.100.1 172.17.0.2:4242"
export REMOTE_ALLOW_LIST='{"172.17.0.4/32": false, "172.17.0.5/32": false}'
HOST="host2" ../genconfig.sh >host2.yml <<EOF
relay:
relays:
- 192.168.100.1
EOF
export REMOTE_ALLOW_LIST='{"172.17.0.3/32": false}'
HOST="host3" ../genconfig.sh >host3.yml
HOST="host4" ../genconfig.sh >host4.yml <<EOF
relay:
use_relays: false
EOF
../../../../nebula-cert ca -name "Smoke Test"
../../../../nebula-cert sign -name "lighthouse1" -groups "lighthouse,lighthouse1" -ip "192.168.100.1/24"
../../../../nebula-cert sign -name "host2" -groups "host,host2" -ip "192.168.100.2/24"
../../../../nebula-cert sign -name "host3" -groups "host,host3" -ip "192.168.100.3/24"
../../../../nebula-cert sign -name "host4" -groups "host,host4" -ip "192.168.100.4/24"
)
sudo docker build -t nebula:smoke-relay .


@@ -40,6 +40,7 @@ pki:
lighthouse:
am_lighthouse: ${AM_LIGHTHOUSE:-false}
hosts: $(lighthouse_hosts)
remote_allow_list: ${REMOTE_ALLOW_LIST}
listen:
host: 0.0.0.0
@@ -51,4 +52,6 @@ tun:
firewall:
outbound: ${OUTBOUND:-$FIREWALL_ALL}
inbound: ${INBOUND:-$FIREWALL_ALL}
$(test -t 0 || cat)
EOF

.github/workflows/smoke/smoke-relay.sh (new executable file)

@@ -0,0 +1,85 @@
#!/bin/bash
set -e -x
set -o pipefail
mkdir -p logs
cleanup() {
echo
echo " *** cleanup"
echo
set +e
if [ "$(jobs -r)" ]
then
sudo docker kill lighthouse1 host2 host3 host4
fi
}
trap cleanup EXIT
sudo docker run --name lighthouse1 --rm nebula:smoke-relay -config lighthouse1.yml -test
sudo docker run --name host2 --rm nebula:smoke-relay -config host2.yml -test
sudo docker run --name host3 --rm nebula:smoke-relay -config host3.yml -test
sudo docker run --name host4 --rm nebula:smoke-relay -config host4.yml -test
sudo docker run --name lighthouse1 --device /dev/net/tun:/dev/net/tun --cap-add NET_ADMIN --rm nebula:smoke-relay -config lighthouse1.yml 2>&1 | tee logs/lighthouse1 | sed -u 's/^/ [lighthouse1] /' &
sleep 1
sudo docker run --name host2 --device /dev/net/tun:/dev/net/tun --cap-add NET_ADMIN --rm nebula:smoke-relay -config host2.yml 2>&1 | tee logs/host2 | sed -u 's/^/ [host2] /' &
sleep 1
sudo docker run --name host3 --device /dev/net/tun:/dev/net/tun --cap-add NET_ADMIN --rm nebula:smoke-relay -config host3.yml 2>&1 | tee logs/host3 | sed -u 's/^/ [host3] /' &
sleep 1
sudo docker run --name host4 --device /dev/net/tun:/dev/net/tun --cap-add NET_ADMIN --rm nebula:smoke-relay -config host4.yml 2>&1 | tee logs/host4 | sed -u 's/^/ [host4] /' &
sleep 1
set +x
echo
echo " *** Testing ping from lighthouse1"
echo
set -x
sudo docker exec lighthouse1 ping -c1 192.168.100.2
sudo docker exec lighthouse1 ping -c1 192.168.100.3
sudo docker exec lighthouse1 ping -c1 192.168.100.4
set +x
echo
echo " *** Testing ping from host2"
echo
set -x
sudo docker exec host2 ping -c1 192.168.100.1
# Should fail because no relay configured in this direction
! sudo docker exec host2 ping -c1 192.168.100.3 -w5 || exit 1
! sudo docker exec host2 ping -c1 192.168.100.4 -w5 || exit 1
set +x
echo
echo " *** Testing ping from host3"
echo
set -x
sudo docker exec host3 ping -c1 192.168.100.1
sudo docker exec host3 ping -c1 192.168.100.2
sudo docker exec host3 ping -c1 192.168.100.4
set +x
echo
echo " *** Testing ping from host4"
echo
set -x
sudo docker exec host4 ping -c1 192.168.100.1
# Should fail because relays not allowed
! sudo docker exec host4 ping -c1 192.168.100.2 -w5 || exit 1
sudo docker exec host4 ping -c1 192.168.100.3
sudo docker exec host4 sh -c 'kill 1'
sudo docker exec host3 sh -c 'kill 1'
sudo docker exec host2 sh -c 'kill 1'
sudo docker exec lighthouse1 sh -c 'kill 1'
sleep 1
if [ "$(jobs -r)" ]
then
echo "nebula still running after SIGTERM sent" >&2
exit 1
fi


@@ -7,6 +7,10 @@ set -o pipefail
mkdir -p logs
cleanup() {
echo
echo " *** cleanup"
echo
set +e
if [ "$(jobs -r)" ]
then
@@ -21,13 +25,13 @@ sudo docker run --name host2 --rm nebula:smoke -config host2.yml -test
sudo docker run --name host3 --rm nebula:smoke -config host3.yml -test
sudo docker run --name host4 --rm nebula:smoke -config host4.yml -test
sudo docker run --name lighthouse1 --device /dev/net/tun:/dev/net/tun --cap-add NET_ADMIN --rm nebula:smoke -config lighthouse1.yml 2>&1 | tee logs/lighthouse1 &
sudo docker run --name lighthouse1 --device /dev/net/tun:/dev/net/tun --cap-add NET_ADMIN --rm nebula:smoke -config lighthouse1.yml 2>&1 | tee logs/lighthouse1 | sed -u 's/^/ [lighthouse1] /' &
sleep 1
sudo docker run --name host2 --device /dev/net/tun:/dev/net/tun --cap-add NET_ADMIN --rm nebula:smoke -config host2.yml 2>&1 | tee logs/host2 &
sudo docker run --name host2 --device /dev/net/tun:/dev/net/tun --cap-add NET_ADMIN --rm nebula:smoke -config host2.yml 2>&1 | tee logs/host2 | sed -u 's/^/ [host2] /' &
sleep 1
sudo docker run --name host3 --device /dev/net/tun:/dev/net/tun --cap-add NET_ADMIN --rm nebula:smoke -config host3.yml 2>&1 | tee logs/host3 &
sudo docker run --name host3 --device /dev/net/tun:/dev/net/tun --cap-add NET_ADMIN --rm nebula:smoke -config host3.yml 2>&1 | tee logs/host3 | sed -u 's/^/ [host3] /' &
sleep 1
sudo docker run --name host4 --device /dev/net/tun:/dev/net/tun --cap-add NET_ADMIN --rm nebula:smoke -config host4.yml 2>&1 | tee logs/host4 &
sudo docker run --name host4 --device /dev/net/tun:/dev/net/tun --cap-add NET_ADMIN --rm nebula:smoke -config host4.yml 2>&1 | tee logs/host4 | sed -u 's/^/ [host4] /' &
sleep 1
set +x
@@ -81,3 +85,9 @@ sudo docker exec host3 sh -c 'kill 1'
sudo docker exec host2 sh -c 'kill 1'
sudo docker exec lighthouse1 sh -c 'kill 1'
sleep 1
if [ "$(jobs -r)" ]
then
echo "nebula still running after SIGTERM sent" >&2
exit 1
fi


@@ -18,10 +18,10 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Set up Go 1.17
- name: Set up Go 1.18
uses: actions/setup-go@v2
with:
go-version: 1.17
go-version: 1.18
id: go
- name: Check out code into the Go module directory
@@ -30,9 +30,9 @@ jobs:
- uses: actions/cache@v2
with:
path: ~/go/pkg/mod
key: ${{ runner.os }}-go1.17-${{ hashFiles('**/go.sum') }}
key: ${{ runner.os }}-go1.18-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go1.17-
${{ runner.os }}-go1.18-
- name: Build
run: make all
@@ -43,6 +43,12 @@ jobs:
- name: End 2 end
run: make e2evv
- uses: actions/upload-artifact@v3
with:
name: e2e packet flow
path: e2e/mermaid/
if-no-files-found: warn
test:
name: Build and test on ${{ matrix.os }}
runs-on: ${{ matrix.os }}
@@ -51,10 +57,10 @@ jobs:
os: [windows-latest, macos-11]
steps:
- name: Set up Go 1.17
- name: Set up Go 1.18
uses: actions/setup-go@v2
with:
go-version: 1.17
go-version: 1.18
id: go
- name: Check out code into the Go module directory
@@ -63,9 +69,9 @@ jobs:
- uses: actions/cache@v2
with:
path: ~/go/pkg/mod
key: ${{ runner.os }}-go1.17-${{ hashFiles('**/go.sum') }}
key: ${{ runner.os }}-go1.18-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go1.17-
${{ runner.os }}-go1.18-
- name: Build nebula
run: go build ./cmd/nebula
@@ -78,3 +84,9 @@ jobs:
- name: End 2 end
run: make e2evv
- uses: actions/upload-artifact@v3
with:
name: e2e packet flow
path: e2e/mermaid/
if-no-files-found: warn

.gitignore

@@ -10,3 +10,4 @@
/cpu.pprof
/build
/*.tar.gz
/e2e/mermaid/


@@ -7,6 +7,98 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
## [Unreleased]
## [1.6.1] - 2022-09-26
### Fixed
- Refuse to process underlay packets received from overlay IPs. This prevents
confusion on hosts that have unsafe routes configured. (#741)
- The ssh `reload` command did not work on Windows, since it relied on sending
a SIGHUP signal internally. This has been fixed. (#725)
- A regression in v1.5.2 that broke unsafe routes on Mobile clients has been
fixed. (#729)
## [1.6.0] - 2022-06-30
### Added
- Experimental: nebula clients can be configured to act as relays for other nebula clients.
Primarily useful when stubborn NATs make a direct tunnel impossible. (#678)
- Configuration option to report manually specified `ip:port`s to lighthouses. (#650)
- Windows arm64 build. (#638)
- `punchy` and most `lighthouse` config options now support hot reloading. (#649)
### Changed
- Build against go 1.18. (#656)
- Promoted `routines` config from experimental to supported feature. (#702)
- Dependencies updated. (#664)
### Fixed
- Packets destined for the same host that sent it will be returned on MacOS.
This matches the default behavior of other operating systems. (#501)
- `unsafe_route` configuration will no longer crash on Windows. (#648)
- A few panics that were introduced in 1.5.x. (#657, #658, #675)
### Security
- You can set `listen.send_recv_error` to control the conditions in which
`recv_error` messages are sent. Sending these messages can expose the fact
that Nebula is running on a host, but it speeds up re-handshaking. (#670)
### Removed
- `x509` config stanza support has been removed. (#685)
## [1.5.2] - 2021-12-14
### Added
- Warn when a non lighthouse node does not have lighthouse hosts configured. (#587)
### Changed
- No longer fatals if expired CA certificates are present in `pki.ca`, as long as 1 valid CA is present. (#599)
- `nebula-cert` will now enforce ipv4 addresses. (#604)
- Warn on macOS if an unsafe route cannot be created due to a collision with an
existing route. (#610)
- Warn if you set a route MTU on platforms where we don't support it. (#611)
### Fixed
- Rare race condition when tearing down a tunnel due to `recv_error` and sending packets on another thread. (#590)
- Bug in `routes` and `unsafe_routes` handling that was introduced in 1.5.0. (#595)
- `-test` mode no longer results in a crash. (#602)
### Removed
- `x509.ca` config alias for `pki.ca`. (#604)
### Security
- Upgraded `golang.org/x/crypto` to address an issue which allowed unauthenticated clients to cause a panic in SSH
servers. (#603)
## 1.5.1 - 2021-12-13
(This release was skipped due to discovering #610 and #611 after the tag was
created.)
## [1.5.0] - 2021-11-11
### Added
@@ -306,7 +398,10 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- Initial public release.
[Unreleased]: https://github.com/slackhq/nebula/compare/v1.5.0...HEAD
[Unreleased]: https://github.com/slackhq/nebula/compare/v1.6.1...HEAD
[1.6.1]: https://github.com/slackhq/nebula/releases/tag/v1.6.1
[1.6.0]: https://github.com/slackhq/nebula/releases/tag/v1.6.0
[1.5.2]: https://github.com/slackhq/nebula/releases/tag/v1.5.2
[1.5.0]: https://github.com/slackhq/nebula/releases/tag/v1.5.0
[1.4.0]: https://github.com/slackhq/nebula/releases/tag/v1.4.0
[1.3.0]: https://github.com/slackhq/nebula/releases/tag/v1.3.0


@@ -1,4 +1,4 @@
GOMINVERSION = 1.17
GOMINVERSION = 1.18
NEBULA_CMD_PATH = "./cmd/nebula"
GO111MODULE = on
export GO111MODULE
@@ -48,7 +48,8 @@ ALL = $(ALL_LINUX) \
darwin-amd64 \
darwin-arm64 \
freebsd-amd64 \
windows-amd64
windows-amd64 \
windows-arm64
e2e:
$(TEST_ENV) go test -tags=e2e_testing -count=1 $(TEST_FLAGS) ./e2e
@@ -65,6 +66,9 @@ e2evvv: e2ev
e2evvvv: TEST_ENV += TEST_LOGS=3
e2evvvv: e2ev
e2e-bench: TEST_FLAGS = -bench=. -benchmem -run=^$
e2e-bench: e2e
all: $(ALL:%=build/%/nebula) $(ALL:%=build/%/nebula-cert)
release: $(ALL:%=build/nebula-%.tar.gz)
@@ -78,6 +82,9 @@ BUILD_ARGS = -trimpath
bin-windows: build/windows-amd64/nebula.exe build/windows-amd64/nebula-cert.exe
mv $? .
bin-windows-arm64: build/windows-arm64/nebula.exe build/windows-arm64/nebula-cert.exe
mv $? .
bin-darwin: build/darwin-amd64/nebula build/darwin-amd64/nebula-cert
mv $? .
@@ -164,6 +171,10 @@ smoke-docker: bin-docker
cd .github/workflows/smoke/ && ./build.sh
cd .github/workflows/smoke/ && ./smoke.sh
smoke-relay-docker: bin-docker
cd .github/workflows/smoke/ && ./build-relay.sh
cd .github/workflows/smoke/ && ./smoke-relay.sh
smoke-docker-race: BUILD_ARGS = -race
smoke-docker-race: smoke-docker


@@ -8,7 +8,7 @@ and tunneling, and each of those individual pieces existed before Nebula in vari
What makes Nebula different to existing offerings is that it brings all of these ideas together,
resulting in a sum that is greater than its individual parts.
Further documentation can be found [here](https://www.defined.net/nebula/introduction/).
Further documentation can be found [here](https://www.defined.net/nebula/).
You can read more about Nebula [here](https://medium.com/p/884110a5579).


@@ -7,12 +7,12 @@ import (
"github.com/slackhq/nebula/cidr"
"github.com/slackhq/nebula/config"
"github.com/slackhq/nebula/util"
"github.com/slackhq/nebula/test"
"github.com/stretchr/testify/assert"
)
func TestNewAllowListFromConfig(t *testing.T) {
l := util.NewTestLogger()
l := test.NewLogger()
c := config.NewC(l)
c.Settings["allowlist"] = map[interface{}]interface{}{
"192.168.0.0": true,


@@ -3,12 +3,12 @@ package nebula
import (
"testing"
"github.com/slackhq/nebula/util"
"github.com/slackhq/nebula/test"
"github.com/stretchr/testify/assert"
)
func TestBits(t *testing.T) {
l := util.NewTestLogger()
l := test.NewLogger()
b := NewBits(10)
// make sure it is the right size
@@ -76,7 +76,7 @@ func TestBits(t *testing.T) {
}
func TestBitsDupeCounter(t *testing.T) {
l := util.NewTestLogger()
l := test.NewLogger()
b := NewBits(10)
b.lostCounter.Clear()
b.dupeCounter.Clear()
@@ -101,7 +101,7 @@ func TestBitsDupeCounter(t *testing.T) {
}
func TestBitsOutOfWindowCounter(t *testing.T) {
l := util.NewTestLogger()
l := test.NewLogger()
b := NewBits(10)
b.lostCounter.Clear()
b.dupeCounter.Clear()
@@ -131,7 +131,7 @@ func TestBitsOutOfWindowCounter(t *testing.T) {
}
func TestBitsLostCounter(t *testing.T) {
l := util.NewTestLogger()
l := test.NewLogger()
b := NewBits(10)
b.lostCounter.Clear()
b.dupeCounter.Clear()

cert.go

@@ -51,11 +51,6 @@ func NewCertStateFromConfig(c *config.C) (*CertState, error) {
var err error
privPathOrPEM := c.GetString("pki.key", "")
if privPathOrPEM == "" {
// Support backwards compat with the old x509
//TODO: remove after this is rolled out everywhere - NB 2018/02/23
privPathOrPEM = c.GetString("x509.key", "")
}
if privPathOrPEM == "" {
return nil, errors.New("no pki.key path or PEM data provided")
@@ -79,11 +74,6 @@ func NewCertStateFromConfig(c *config.C) (*CertState, error) {
var rawCert []byte
pubPathOrPEM := c.GetString("pki.cert", "")
if pubPathOrPEM == "" {
// Support backwards compat with the old x509
//TODO: remove after this is rolled out everywhere - NB 2018/02/23
pubPathOrPEM = c.GetString("x509.cert", "")
}
if pubPathOrPEM == "" {
return nil, errors.New("no pki.cert path or PEM data provided")
@@ -124,19 +114,13 @@ func loadCAFromConfig(l *logrus.Logger, c *config.C) (*cert.NebulaCAPool, error)
var err error
caPathOrPEM := c.GetString("pki.ca", "")
if caPathOrPEM == "" {
// Support backwards compat with the old x509
//TODO: remove after this is rolled out everywhere - NB 2018/02/23
caPathOrPEM = c.GetString("x509.ca", "")
}
if caPathOrPEM == "" {
return nil, errors.New("no pki.ca path or PEM data provided")
}
if strings.Contains(caPathOrPEM, "-----BEGIN") {
rawCA = []byte(caPathOrPEM)
caPathOrPEM = "<inline>"
} else {
rawCA, err = ioutil.ReadFile(caPathOrPEM)
if err != nil {
@@ -145,18 +129,32 @@ func loadCAFromConfig(l *logrus.Logger, c *config.C) (*cert.NebulaCAPool, error)
}
CAs, err := cert.NewCAPoolFromBytes(rawCA)
if err != nil {
if errors.Is(err, cert.ErrExpired) {
var expired int
for _, cert := range CAs.CAs {
if cert.Expired(time.Now()) {
expired++
l.WithField("cert", cert).Warn("expired certificate present in CA pool")
}
}
if expired >= len(CAs.CAs) {
return nil, errors.New("no valid CA certificates present")
}
} else if err != nil {
return nil, fmt.Errorf("error while adding CA certificate to CA trust store: %s", err)
}
for _, fp := range c.GetStringSlice("pki.blocklist", []string{}) {
l.WithField("fingerprint", fp).Infof("Blocklisting cert")
l.WithField("fingerprint", fp).Info("Blocklisting cert")
CAs.BlocklistFingerprint(fp)
}
// Support deprecated config for at leaast one minor release to allow for migrations
// Support deprecated config for at least one minor release to allow for migrations
//TODO: remove in 2022 or later
for _, fp := range c.GetStringSlice("pki.blacklist", []string{}) {
l.WithField("fingerprint", fp).Infof("Blocklisting cert")
l.WithField("fingerprint", fp).Info("Blocklisting cert")
l.Warn("pki.blacklist is deprecated and will not be supported in a future release. Please migrate your config to use pki.blocklist")
CAs.BlocklistFingerprint(fp)
}


@@ -1,6 +1,7 @@
package cert
import (
"errors"
"fmt"
"strings"
"time"
@@ -21,19 +22,32 @@ func NewCAPool() *NebulaCAPool {
return &ca
}
// NewCAPoolFromBytes will create a new CA pool from the provided
// input bytes, which must be a PEM-encoded set of nebula certificates.
// If the pool contains any expired certificates, an ErrExpired will be
// returned along with the pool. The caller must handle any such errors.
func NewCAPoolFromBytes(caPEMs []byte) (*NebulaCAPool, error) {
pool := NewCAPool()
var err error
var expired bool
for {
caPEMs, err = pool.AddCACertificate(caPEMs)
if errors.Is(err, ErrExpired) {
expired = true
err = nil
}
if err != nil {
return nil, err
}
if caPEMs == nil || len(caPEMs) == 0 || strings.TrimSpace(string(caPEMs)) == "" {
if len(caPEMs) == 0 || strings.TrimSpace(string(caPEMs)) == "" {
break
}
}
if expired {
return pool, ErrExpired
}
return pool, nil
}
@@ -47,15 +61,11 @@ func (ncp *NebulaCAPool) AddCACertificate(pemBytes []byte) ([]byte, error) {
}
if !c.Details.IsCA {
return pemBytes, fmt.Errorf("provided certificate was not a CA; %s", c.Details.Name)
return pemBytes, fmt.Errorf("%s: %w", c.Details.Name, ErrNotCA)
}
if !c.CheckSignature(c.Details.PublicKey) {
return pemBytes, fmt.Errorf("provided certificate was not self signed; %s", c.Details.Name)
}
if c.Expired(time.Now()) {
return pemBytes, fmt.Errorf("provided CA certificate is expired; %s", c.Details.Name)
return pemBytes, fmt.Errorf("%s: %w", c.Details.Name, ErrNotSelfSigned)
}
sum, err := c.Sha256Sum()
@@ -64,6 +74,10 @@ func (ncp *NebulaCAPool) AddCACertificate(pemBytes []byte) ([]byte, error) {
}
ncp.CAs[sum] = c
if c.Expired(time.Now()) {
return pemBytes, fmt.Errorf("%s: %w", c.Details.Name, ErrExpired)
}
return pemBytes, nil
}


@@ -13,9 +13,9 @@ import (
"net"
"time"
"github.com/golang/protobuf/proto"
"golang.org/x/crypto/curve25519"
"golang.org/x/crypto/ed25519"
"google.golang.org/protobuf/proto"
)
const publicKeyLen = 32


@@ -1,7 +1,7 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.26.0
// protoc v3.14.0
// protoc-gen-go v1.28.0
// protoc v3.20.0
// source: cert.proto
package cert


@@ -8,11 +8,11 @@ import (
"testing"
"time"
"github.com/golang/protobuf/proto"
"github.com/slackhq/nebula/util"
"github.com/slackhq/nebula/test"
"github.com/stretchr/testify/assert"
"golang.org/x/crypto/curve25519"
"golang.org/x/crypto/ed25519"
"google.golang.org/protobuf/proto"
)
func TestMarshalingNebulaCertificate(t *testing.T) {
@@ -429,6 +429,15 @@ BVG+oJpAoqokUBbI4U0N8CSfpUABEkB/Pm5A2xyH/nc8mg/wvGUWG3pZ7nHzaDMf
8/phAUt+FLzqTECzQKisYswKvE3pl9mbEYKbOdIHrxdIp95mo4sF
-----END NEBULA CERTIFICATE-----
`
expired := `
# expired certificate
-----BEGIN NEBULA CERTIFICATE-----
CjkKB2V4cGlyZWQouPmWjQYwufmWjQY6ILCRaoCkJlqHgv5jfDN4lzLHBvDzaQm4
vZxfu144hmgjQAESQG4qlnZi8DncvD/LDZnLgJHOaX1DWCHHEh59epVsC+BNgTie
WH1M9n4O7cFtGlM6sJJOS+rCVVEJ3ABS7+MPdQs=
-----END NEBULA CERTIFICATE-----
`
rootCA := NebulaCertificate{
@@ -452,6 +461,19 @@ BVG+oJpAoqokUBbI4U0N8CSfpUABEkB/Pm5A2xyH/nc8mg/wvGUWG3pZ7nHzaDMf
assert.Nil(t, err)
assert.Equal(t, pp.CAs[string("c9bfaf7ce8e84b2eeda2e27b469f4b9617bde192efd214b68891ecda6ed49522")].Details.Name, rootCA.Details.Name)
assert.Equal(t, pp.CAs[string("5c9c3f23e7ee7fe97637cbd3a0a5b854154d1d9aaaf7b566a51f4a88f76b64cd")].Details.Name, rootCA01.Details.Name)
// expired cert, no valid certs
ppp, err := NewCAPoolFromBytes([]byte(expired))
assert.Equal(t, ErrExpired, err)
assert.Equal(t, ppp.CAs[string("152070be6bb19bc9e3bde4c2f0e7d8f4ff5448b4c9856b8eccb314fade0229b0")].Details.Name, "expired")
// expired cert, with valid certs
pppp, err := NewCAPoolFromBytes(append([]byte(expired), noNewLines...))
assert.Equal(t, ErrExpired, err)
assert.Equal(t, pppp.CAs[string("c9bfaf7ce8e84b2eeda2e27b469f4b9617bde192efd214b68891ecda6ed49522")].Details.Name, rootCA.Details.Name)
assert.Equal(t, pppp.CAs[string("5c9c3f23e7ee7fe97637cbd3a0a5b854154d1d9aaaf7b566a51f4a88f76b64cd")].Details.Name, rootCA01.Details.Name)
assert.Equal(t, pppp.CAs[string("152070be6bb19bc9e3bde4c2f0e7d8f4ff5448b4c9856b8eccb314fade0229b0")].Details.Name, "expired")
assert.Equal(t, len(pppp.CAs), 3)
}
func appendByteSlices(b ...[]byte) []byte {
@@ -752,7 +774,7 @@ func TestNebulaCertificate_Copy(t *testing.T) {
assert.Nil(t, err)
cc := c.Copy()
util.AssertDeepCopyEqual(t, c, cc)
test.AssertDeepCopyEqual(t, c, cc)
}
func TestUnmarshalNebulaCertificate(t *testing.T) {

cert/errors.go (new file)

@@ -0,0 +1,9 @@
package cert
import "errors"
var (
ErrExpired = errors.New("certificate is expired")
ErrNotCA = errors.New("certificate is not a CA")
ErrNotSelfSigned = errors.New("certificate is not self-signed")
)


@@ -37,8 +37,8 @@ func newCaFlags() *caFlags {
cf.outCertPath = cf.set.String("out-crt", "ca.crt", "Optional: path to write the certificate to")
cf.outQRPath = cf.set.String("out-qr", "", "Optional: output a qr code image (png) of the certificate")
cf.groups = cf.set.String("groups", "", "Optional: comma separated list of groups. This will limit which groups subordinate certs can use")
cf.ips = cf.set.String("ips", "", "Optional: comma separated list of ip and network in CIDR notation. This will limit which ip addresses and networks subordinate certs can use")
cf.subnets = cf.set.String("subnets", "", "Optional: comma separated list of ip and network in CIDR notation. This will limit which subnet addresses and networks subordinate certs can use")
cf.ips = cf.set.String("ips", "", "Optional: comma separated list of ipv4 address and network in CIDR notation. This will limit which ipv4 addresses and networks subordinate certs can use for ip addresses")
cf.subnets = cf.set.String("subnets", "", "Optional: comma separated list of ipv4 address and network in CIDR notation. This will limit which ipv4 addresses and networks subordinate certs can use in subnets")
return &cf
}
@@ -82,6 +82,9 @@ func ca(args []string, out io.Writer, errOut io.Writer) error {
if err != nil {
return newHelpErrorf("invalid ip definition: %s", err)
}
if ip.To4() == nil {
return newHelpErrorf("invalid ip definition: can only be ipv4, have %s", rs)
}
ipNet.IP = ip
ips = append(ips, ipNet)
@@ -98,6 +101,9 @@ func ca(args []string, out io.Writer, errOut io.Writer) error {
if err != nil {
return newHelpErrorf("invalid subnet definition: %s", err)
}
if s.IP.To4() == nil {
return newHelpErrorf("invalid subnet definition: can only be ipv4, have %s", rs)
}
subnets = append(subnets, s)
}
}


@@ -31,7 +31,7 @@ func Test_caHelp(t *testing.T) {
" -groups string\n"+
" \tOptional: comma separated list of groups. This will limit which groups subordinate certs can use\n"+
" -ips string\n"+
" \tOptional: comma separated list of ip and network in CIDR notation. This will limit which ip addresses and networks subordinate certs can use\n"+
" \tOptional: comma separated list of ipv4 address and network in CIDR notation. This will limit which ipv4 addresses and networks subordinate certs can use for ip addresses\n"+
" -name string\n"+
" \tRequired: name of the certificate authority\n"+
" -out-crt string\n"+
@@ -41,7 +41,7 @@ func Test_caHelp(t *testing.T) {
" -out-qr string\n"+
" \tOptional: output a qr code image (png) of the certificate\n"+
" -subnets string\n"+
" \tOptional: comma separated list of ip and network in CIDR notation. This will limit which subnet addresses and networks subordinate certs can use\n",
" \tOptional: comma separated list of ipv4 address and network in CIDR notation. This will limit which ipv4 addresses and networks subordinate certs can use in subnets\n",
ob.String(),
)
}
@@ -55,6 +55,16 @@ func Test_ca(t *testing.T) {
assert.Equal(t, "", ob.String())
assert.Equal(t, "", eb.String())
// ipv4 only ips
assertHelpError(t, ca([]string{"-name", "ipv6", "-ips", "100::100/100"}, ob, eb), "invalid ip definition: can only be ipv4, have 100::100/100")
assert.Equal(t, "", ob.String())
assert.Equal(t, "", eb.String())
// ipv4 only subnets
assertHelpError(t, ca([]string{"-name", "ipv6", "-subnets", "100::100/100"}, ob, eb), "invalid subnet definition: can only be ipv4, have 100::100/100")
assert.Equal(t, "", ob.String())
assert.Equal(t, "", eb.String())
// failed key write
ob.Reset()
eb.Reset()


@@ -37,14 +37,14 @@ func newSignFlags() *signFlags {
sf.caKeyPath = sf.set.String("ca-key", "ca.key", "Optional: path to the signing CA key")
sf.caCertPath = sf.set.String("ca-crt", "ca.crt", "Optional: path to the signing CA cert")
sf.name = sf.set.String("name", "", "Required: name of the cert, usually a hostname")
sf.ip = sf.set.String("ip", "", "Required: ip and network in CIDR notation to assign the cert")
sf.ip = sf.set.String("ip", "", "Required: ipv4 address and network in CIDR notation to assign the cert")
sf.duration = sf.set.Duration("duration", 0, "Optional: how long the cert should be valid for. The default is 1 second before the signing cert expires. Valid time units are seconds: \"s\", minutes: \"m\", hours: \"h\"")
sf.inPubPath = sf.set.String("in-pub", "", "Optional (if out-key not set): path to read a previously generated public key")
sf.outKeyPath = sf.set.String("out-key", "", "Optional (if in-pub not set): path to write the private key to")
sf.outCertPath = sf.set.String("out-crt", "", "Optional: path to write the certificate to")
sf.outQRPath = sf.set.String("out-qr", "", "Optional: output a qr code image (png) of the certificate")
sf.groups = sf.set.String("groups", "", "Optional: comma separated list of groups")
sf.subnets = sf.set.String("subnets", "", "Optional: comma separated list of subnet this cert can serve for")
sf.subnets = sf.set.String("subnets", "", "Optional: comma separated list of ipv4 address and network in CIDR notation. Subnets this cert can serve for")
return &sf
}
@@ -114,6 +114,9 @@ func signCert(args []string, out io.Writer, errOut io.Writer) error {
if err != nil {
return newHelpErrorf("invalid ip definition: %s", err)
}
if ip.To4() == nil {
return newHelpErrorf("invalid ip definition: can only be ipv4, have %s", *sf.ip)
}
ipNet.IP = ip
groups := []string{}
@@ -135,6 +138,9 @@ func signCert(args []string, out io.Writer, errOut io.Writer) error {
if err != nil {
return newHelpErrorf("invalid subnet definition: %s", err)
}
if s.IP.To4() == nil {
return newHelpErrorf("invalid subnet definition: can only be ipv4, have %s", rs)
}
subnets = append(subnets, s)
}
}


@@ -39,7 +39,7 @@ func Test_signHelp(t *testing.T) {
" -in-pub string\n"+
" \tOptional (if out-key not set): path to read a previously generated public key\n"+
" -ip string\n"+
" \tRequired: ip and network in CIDR notation to assign the cert\n"+
" \tRequired: ipv4 address and network in CIDR notation to assign the cert\n"+
" -name string\n"+
" \tRequired: name of the cert, usually a hostname\n"+
" -out-crt string\n"+
@@ -49,7 +49,7 @@ func Test_signHelp(t *testing.T) {
" -out-qr string\n"+
" \tOptional: output a qr code image (png) of the certificate\n"+
" -subnets string\n"+
" \tOptional: comma separated list of subnet this cert can serve for\n",
" \tOptional: comma separated list of ipv4 address and network in CIDR notation. Subnets this cert can serve for\n",
ob.String(),
)
}
@@ -59,7 +59,6 @@ func Test_signCert(t *testing.T) {
eb := &bytes.Buffer{}
// required args
assertHelpError(t, signCert([]string{"-ca-crt", "./nope", "-ca-key", "./nope", "-ip", "1.1.1.1/24", "-out-key", "nope", "-out-crt", "nope"}, ob, eb), "-name is required")
assert.Empty(t, ob.String())
assert.Empty(t, eb.String())
@@ -160,6 +159,13 @@ func Test_signCert(t *testing.T) {
assert.Empty(t, ob.String())
assert.Empty(t, eb.String())
ob.Reset()
eb.Reset()
args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "100::100/100", "-out-crt", "nope", "-out-key", "nope", "-duration", "100m"}
assertHelpError(t, signCert(args, ob, eb), "invalid ip definition: can only be ipv4, have 100::100/100")
assert.Empty(t, ob.String())
assert.Empty(t, eb.String())
// bad subnet cidr
ob.Reset()
eb.Reset()
@@ -168,6 +174,13 @@ func Test_signCert(t *testing.T) {
assert.Empty(t, ob.String())
assert.Empty(t, eb.String())
ob.Reset()
eb.Reset()
args = []string{"-ca-crt", caCrtF.Name(), "-ca-key", caKeyF.Name(), "-name", "test", "-ip", "1.1.1.1/24", "-out-crt", "nope", "-out-key", "nope", "-duration", "100m", "-subnets", "100::100/100"}
assertHelpError(t, signCert(args, ob, eb), "invalid subnet definition: can only be ipv4, have 100::100/100")
assert.Empty(t, ob.String())
assert.Empty(t, eb.String())
// mismatched ca key
_, caPriv2, _ := ed25519.GenerateKey(rand.Reader)
caKeyF2, err := ioutil.TempFile("", "sign-cert-2.key")


@@ -8,6 +8,7 @@ import (
"github.com/sirupsen/logrus"
"github.com/slackhq/nebula"
"github.com/slackhq/nebula/config"
"github.com/slackhq/nebula/util"
)
// A version string that can be set with
@@ -60,7 +61,7 @@ func main() {
ctrl, err := nebula.Main(c, *configTest, Build, l, nil)
switch v := err.(type) {
case nebula.ContextualError:
case util.ContextualError:
v.Log(l)
os.Exit(1)
case error:


@@ -8,6 +8,7 @@ import (
"github.com/sirupsen/logrus"
"github.com/slackhq/nebula"
"github.com/slackhq/nebula/config"
"github.com/slackhq/nebula/util"
)
// A version string that can be set with
@@ -54,7 +55,7 @@ func main() {
ctrl, err := nebula.Main(c, *configTest, Build, l, nil)
switch v := err.(type) {
case nebula.ContextualError:
case util.ContextualError:
v.Log(l)
os.Exit(1)
case error:


@@ -11,6 +11,7 @@ import (
"sort"
"strconv"
"strings"
"sync"
"syscall"
"time"
@@ -26,6 +27,7 @@ type C struct {
oldSettings map[interface{}]interface{}
callbacks []func(*C)
l *logrus.Logger
reloadLock sync.Mutex
}
func NewC(l *logrus.Logger) *C {
@@ -74,6 +76,11 @@ func (c *C) RegisterReloadCallback(f func(*C)) {
c.callbacks = append(c.callbacks, f)
}
// InitialLoad returns true if this is the first load of the config, and ReloadConfig has not been called yet.
func (c *C) InitialLoad() bool {
return c.oldSettings == nil
}
// HasChanged checks if the underlying structure of the provided key has changed after a config reload. The value of
// k in both the old and new settings will be serialized, the result of the string comparison is returned.
// If k is an empty string the entire config is tested.
@@ -133,6 +140,9 @@ func (c *C) CatchHUP(ctx context.Context) {
}
func (c *C) ReloadConfig() {
c.reloadLock.Lock()
defer c.reloadLock.Unlock()
c.oldSettings = make(map[interface{}]interface{})
for k, v := range c.Settings {
c.oldSettings[k] = v
@@ -149,6 +159,27 @@ func (c *C) ReloadConfig() {
}
}
func (c *C) ReloadConfigString(raw string) error {
c.reloadLock.Lock()
defer c.reloadLock.Unlock()
c.oldSettings = make(map[interface{}]interface{})
for k, v := range c.Settings {
c.oldSettings[k] = v
}
err := c.LoadString(raw)
if err != nil {
return err
}
for _, v := range c.callbacks {
v(c)
}
return nil
}
// GetString will get the string for k or return the default d if not found or invalid
func (c *C) GetString(k, d string) string {
r := c.Get(k)


@@ -7,12 +7,12 @@ import (
"testing"
"time"
"github.com/slackhq/nebula/util"
"github.com/slackhq/nebula/test"
"github.com/stretchr/testify/assert"
)
func TestConfig_Load(t *testing.T) {
l := util.NewTestLogger()
l := test.NewLogger()
dir, err := ioutil.TempDir("", "config-test")
// invalid yaml
c := NewC(l)
@@ -42,7 +42,7 @@ func TestConfig_Load(t *testing.T) {
}
func TestConfig_Get(t *testing.T) {
l := util.NewTestLogger()
l := test.NewLogger()
// test simple type
c := NewC(l)
c.Settings["firewall"] = map[interface{}]interface{}{"outbound": "hi"}
@@ -58,14 +58,14 @@ func TestConfig_Get(t *testing.T) {
}
func TestConfig_GetStringSlice(t *testing.T) {
l := util.NewTestLogger()
l := test.NewLogger()
c := NewC(l)
c.Settings["slice"] = []interface{}{"one", "two"}
assert.Equal(t, []string{"one", "two"}, c.GetStringSlice("slice", []string{}))
}
func TestConfig_GetBool(t *testing.T) {
l := util.NewTestLogger()
l := test.NewLogger()
c := NewC(l)
c.Settings["bool"] = true
assert.Equal(t, true, c.GetBool("bool", false))
@@ -93,7 +93,7 @@ func TestConfig_GetBool(t *testing.T) {
}
func TestConfig_HasChanged(t *testing.T) {
l := util.NewTestLogger()
l := test.NewLogger()
// No reload has occurred, return false
c := NewC(l)
c.Settings["test"] = "hi"
@@ -115,7 +115,7 @@ func TestConfig_HasChanged(t *testing.T) {
}
func TestConfig_ReloadConfig(t *testing.T) {
l := util.NewTestLogger()
l := test.NewLogger()
done := make(chan bool, 1)
dir, err := ioutil.TempDir("", "config-test")
assert.Nil(t, err)


@@ -179,12 +179,9 @@ func (n *connectionManager) HandleMonitorTick(now time.Time, p, nb, out []byte)
hostinfo, err := n.hostMap.QueryVpnIp(vpnIp)
if err != nil {
n.l.Debugf("Not found in hostmap: %s", vpnIp)
if !n.intf.disconnectInvalid {
n.ClearIP(vpnIp)
n.ClearPendingDeletion(vpnIp)
continue
}
n.ClearIP(vpnIp)
n.ClearPendingDeletion(vpnIp)
continue
}
if n.handleInvalidCertificate(now, vpnIp, hostinfo) {
@@ -233,12 +230,9 @@ func (n *connectionManager) HandleDeletionTick(now time.Time) {
hostinfo, err := n.hostMap.QueryVpnIp(vpnIp)
if err != nil {
n.l.Debugf("Not found in hostmap: %s", vpnIp)
if !n.intf.disconnectInvalid {
n.ClearIP(vpnIp)
n.ClearPendingDeletion(vpnIp)
continue
}
n.ClearIP(vpnIp)
n.ClearPendingDeletion(vpnIp)
continue
}
if n.handleInvalidCertificate(now, vpnIp, hostinfo) {
@@ -307,7 +301,7 @@ func (n *connectionManager) handleInvalidCertificate(now time.Time, vpnIp iputil
// Inform the remote and close the tunnel locally
n.intf.sendCloseTunnel(hostinfo)
n.intf.closeTunnel(hostinfo, false)
n.intf.closeTunnel(hostinfo)
n.ClearIP(vpnIp)
n.ClearPendingDeletion(vpnIp)


@@ -11,15 +11,15 @@ import (
"github.com/flynn/noise"
"github.com/slackhq/nebula/cert"
"github.com/slackhq/nebula/iputil"
"github.com/slackhq/nebula/test"
"github.com/slackhq/nebula/udp"
"github.com/slackhq/nebula/util"
"github.com/stretchr/testify/assert"
)
var vpnIp iputil.VpnIp
func Test_NewConnectionManagerTest(t *testing.T) {
l := util.NewTestLogger()
l := test.NewLogger()
//_, tuncidr, _ := net.ParseCIDR("1.1.1.1/24")
_, vpncidr, _ := net.ParseCIDR("172.1.1.1/24")
_, localrange, _ := net.ParseCIDR("10.1.1.1/24")
@@ -35,10 +35,10 @@ func Test_NewConnectionManagerTest(t *testing.T) {
rawCertificateNoKey: []byte{},
}
lh := NewLightHouse(l, false, &net.IPNet{IP: net.IP{0, 0, 0, 0}, Mask: net.IPMask{0, 0, 0, 0}}, []iputil.VpnIp{}, 1000, 0, &udp.Conn{}, false, 1, false)
lh := &LightHouse{l: l, atomicStaticList: make(map[iputil.VpnIp]struct{}), atomicLighthouses: make(map[iputil.VpnIp]struct{})}
ifce := &Interface{
hostMap: hostMap,
inside: &Tun{},
inside: &test.NoopTun{},
outside: &udp.Conn{},
certState: cs,
firewall: &Firewall{},
@@ -89,7 +89,7 @@ func Test_NewConnectionManagerTest(t *testing.T) {
}
func Test_NewConnectionManagerTest2(t *testing.T) {
l := util.NewTestLogger()
l := test.NewLogger()
//_, tuncidr, _ := net.ParseCIDR("1.1.1.1/24")
_, vpncidr, _ := net.ParseCIDR("172.1.1.1/24")
_, localrange, _ := net.ParseCIDR("10.1.1.1/24")
@@ -104,10 +104,10 @@ func Test_NewConnectionManagerTest2(t *testing.T) {
rawCertificateNoKey: []byte{},
}
lh := NewLightHouse(l, false, &net.IPNet{IP: net.IP{0, 0, 0, 0}, Mask: net.IPMask{0, 0, 0, 0}}, []iputil.VpnIp{}, 1000, 0, &udp.Conn{}, false, 1, false)
lh := &LightHouse{l: l, atomicStaticList: make(map[iputil.VpnIp]struct{}), atomicLighthouses: make(map[iputil.VpnIp]struct{})}
ifce := &Interface{
hostMap: hostMap,
inside: &Tun{},
inside: &test.NoopTun{},
outside: &udp.Conn{},
certState: cs,
firewall: &Firewall{},
@@ -164,7 +164,7 @@ func Test_NewConnectionManagerTest2(t *testing.T) {
// Disconnect only if disconnectInvalid: true is set.
func Test_NewConnectionManagerTest_DisconnectInvalid(t *testing.T) {
now := time.Now()
l := util.NewTestLogger()
l := test.NewLogger()
ipNet := net.IPNet{
IP: net.IPv4(172, 1, 1, 2),
Mask: net.IPMask{255, 255, 255, 0},
@@ -213,10 +213,10 @@ func Test_NewConnectionManagerTest_DisconnectInvalid(t *testing.T) {
rawCertificateNoKey: []byte{},
}
lh := NewLightHouse(l, false, &net.IPNet{IP: net.IP{0, 0, 0, 0}, Mask: net.IPMask{0, 0, 0, 0}}, []iputil.VpnIp{}, 1000, 0, &udp.Conn{}, false, 1, false)
lh := &LightHouse{l: l, atomicStaticList: make(map[iputil.VpnIp]struct{}), atomicLighthouses: make(map[iputil.VpnIp]struct{})}
ifce := &Interface{
hostMap: hostMap,
inside: &Tun{},
inside: &test.NoopTun{},
outside: &udp.Conn{},
certState: cs,
firewall: &Firewall{},


@@ -28,14 +28,16 @@ type Control struct {
}
type ControlHostInfo struct {
VpnIp net.IP `json:"vpnIp"`
LocalIndex uint32 `json:"localIndex"`
RemoteIndex uint32 `json:"remoteIndex"`
RemoteAddrs []*udp.Addr `json:"remoteAddrs"`
CachedPackets int `json:"cachedPackets"`
Cert *cert.NebulaCertificate `json:"cert"`
MessageCounter uint64 `json:"messageCounter"`
CurrentRemote *udp.Addr `json:"currentRemote"`
VpnIp net.IP `json:"vpnIp"`
LocalIndex uint32 `json:"localIndex"`
RemoteIndex uint32 `json:"remoteIndex"`
RemoteAddrs []*udp.Addr `json:"remoteAddrs"`
CachedPackets int `json:"cachedPackets"`
Cert *cert.NebulaCertificate `json:"cert"`
MessageCounter uint64 `json:"messageCounter"`
CurrentRemote *udp.Addr `json:"currentRemote"`
CurrentRelaysToMe []iputil.VpnIp `json:"currentRelaysToMe"`
CurrentRelaysThroughMe []iputil.VpnIp `json:"currentRelaysThroughMe"`
}
// Start actually runs nebula, this is a nonblocking call. To block use Control.ShutdownBlock()
@@ -60,12 +62,14 @@ func (c *Control) Start() {
// Stop signals nebula to shutdown, returns after the shutdown is complete
func (c *Control) Stop() {
//TODO: stop tun and udp routines, the lock on hostMap effectively does that though
// Stop the handshakeManager (and other services), to prevent new tunnels from
// being created while we're shutting them all down.
c.cancel()
c.CloseAllTunnels(false)
if err := c.f.Close(); err != nil {
c.l.WithError(err).Error("Close interface failed")
}
c.cancel()
c.l.Info("Goodbye")
}
@@ -144,14 +148,13 @@ func (c *Control) CloseTunnel(vpnIp iputil.VpnIp, localOnly bool) bool {
0,
hostInfo.ConnectionState,
hostInfo,
hostInfo.remote,
[]byte{},
make([]byte, 12, 12),
make([]byte, mtu),
)
}
c.f.closeTunnel(hostInfo, false)
c.f.closeTunnel(hostInfo)
return true
}
@@ -159,34 +162,60 @@ func (c *Control) CloseTunnel(vpnIp iputil.VpnIp, localOnly bool) bool {
// the int returned is a count of tunnels closed
func (c *Control) CloseAllTunnels(excludeLighthouses bool) (closed int) {
//TODO: this is probably better as a function in ConnectionManager or HostMap directly
c.f.hostMap.Lock()
for _, h := range c.f.hostMap.Hosts {
lighthouses := c.f.lightHouse.GetLighthouses()
shutdown := func(h *HostInfo) {
if excludeLighthouses {
if _, ok := c.f.lightHouse.lighthouses[h.vpnIp]; ok {
continue
if _, ok := lighthouses[h.vpnIp]; ok {
return
}
}
c.f.send(header.CloseTunnel, 0, h.ConnectionState, h, []byte{}, make([]byte, 12, 12), make([]byte, mtu))
c.f.closeTunnel(h)
if h.ConnectionState.ready {
c.f.send(header.CloseTunnel, 0, h.ConnectionState, h, h.remote, []byte{}, make([]byte, 12, 12), make([]byte, mtu))
c.f.closeTunnel(h, true)
c.l.WithField("vpnIp", h.vpnIp).WithField("udpAddr", h.remote).
Debug("Sending close tunnel message")
closed++
}
c.l.WithField("vpnIp", h.vpnIp).WithField("udpAddr", h.remote).
Debug("Sending close tunnel message")
closed++
// Learn which hosts are being used as relays, so we can shut them down last.
relayingHosts := map[iputil.VpnIp]*HostInfo{}
// Grab the hostMap lock to access the Relays map
c.f.hostMap.Lock()
for _, relayingHost := range c.f.hostMap.Relays {
relayingHosts[relayingHost.vpnIp] = relayingHost
}
c.f.hostMap.Unlock()
hostInfos := []*HostInfo{}
// Grab the hostMap lock to access the Hosts map
c.f.hostMap.Lock()
for _, relayHost := range c.f.hostMap.Hosts {
if _, ok := relayingHosts[relayHost.vpnIp]; !ok {
hostInfos = append(hostInfos, relayHost)
}
}
c.f.hostMap.Unlock()
for _, h := range hostInfos {
shutdown(h)
}
for _, h := range relayingHosts {
shutdown(h)
}
return
}
func copyHostInfo(h *HostInfo, preferredRanges []*net.IPNet) ControlHostInfo {
chi := ControlHostInfo{
VpnIp: h.vpnIp.ToIP(),
LocalIndex: h.localIndexId,
RemoteIndex: h.remoteIndexId,
RemoteAddrs: h.remotes.CopyAddrs(preferredRanges),
CachedPackets: len(h.packetStore),
VpnIp: h.vpnIp.ToIP(),
LocalIndex: h.localIndexId,
RemoteIndex: h.remoteIndexId,
RemoteAddrs: h.remotes.CopyAddrs(preferredRanges),
CachedPackets: len(h.packetStore),
CurrentRelaysToMe: h.relayState.CopyRelayIps(),
CurrentRelaysThroughMe: h.relayState.CopyRelayForIps(),
}
if h.ConnectionState != nil {

View File

@@ -9,13 +9,13 @@ import (
"github.com/sirupsen/logrus"
"github.com/slackhq/nebula/cert"
"github.com/slackhq/nebula/iputil"
"github.com/slackhq/nebula/test"
"github.com/slackhq/nebula/udp"
"github.com/slackhq/nebula/util"
"github.com/stretchr/testify/assert"
)
func TestControl_GetHostInfoByVpnIp(t *testing.T) {
l := util.NewTestLogger()
l := test.NewLogger()
// Special care must be taken to re-use all objects provided to the hostmap and certificate in the expectedInfo object,
// to properly ensure we are not exposing core memory to the caller
hm := NewHostMap(l, "test", &net.IPNet{}, make([]*net.IPNet, 0))
@@ -59,6 +59,11 @@ func TestControl_GetHostInfoByVpnIp(t *testing.T) {
remoteIndexId: 200,
localIndexId: 201,
vpnIp: iputil.Ip2VpnIp(ipNet.IP),
relayState: RelayState{
relays: map[iputil.VpnIp]struct{}{},
relayForByIp: map[iputil.VpnIp]*Relay{},
relayForByIdx: map[uint32]*Relay{},
},
})
hm.Add(iputil.Ip2VpnIp(ipNet2.IP), &HostInfo{
@@ -70,6 +75,11 @@ func TestControl_GetHostInfoByVpnIp(t *testing.T) {
remoteIndexId: 200,
localIndexId: 201,
vpnIp: iputil.Ip2VpnIp(ipNet2.IP),
relayState: RelayState{
relays: map[iputil.VpnIp]struct{}{},
relayForByIp: map[iputil.VpnIp]*Relay{},
relayForByIdx: map[uint32]*Relay{},
},
})
c := Control{
@@ -82,19 +92,21 @@ func TestControl_GetHostInfoByVpnIp(t *testing.T) {
thi := c.GetHostInfoByVpnIp(iputil.Ip2VpnIp(ipNet.IP), false)
expectedInfo := ControlHostInfo{
VpnIp: net.IPv4(1, 2, 3, 4).To4(),
LocalIndex: 201,
RemoteIndex: 200,
RemoteAddrs: []*udp.Addr{remote2, remote1},
CachedPackets: 0,
Cert: crt.Copy(),
MessageCounter: 0,
CurrentRemote: udp.NewAddr(net.ParseIP("0.0.0.100"), 4444),
VpnIp: net.IPv4(1, 2, 3, 4).To4(),
LocalIndex: 201,
RemoteIndex: 200,
RemoteAddrs: []*udp.Addr{remote2, remote1},
CachedPackets: 0,
Cert: crt.Copy(),
MessageCounter: 0,
CurrentRemote: udp.NewAddr(net.ParseIP("0.0.0.100"), 4444),
CurrentRelaysToMe: []iputil.VpnIp{},
CurrentRelaysThroughMe: []iputil.VpnIp{},
}
// Make sure we don't have any unexpected fields
assertFields(t, []string{"VpnIp", "LocalIndex", "RemoteIndex", "RemoteAddrs", "CachedPackets", "Cert", "MessageCounter", "CurrentRemote"}, thi)
util.AssertDeepCopyEqual(t, &expectedInfo, thi)
assertFields(t, []string{"VpnIp", "LocalIndex", "RemoteIndex", "RemoteAddrs", "CachedPackets", "Cert", "MessageCounter", "CurrentRemote", "CurrentRelaysToMe", "CurrentRelaysThroughMe"}, thi)
test.AssertDeepCopyEqual(t, &expectedInfo, thi)
// Make sure we don't panic if the host info doesn't have a cert yet
assert.NotPanics(t, func() {

View File

@@ -10,6 +10,7 @@ import (
"github.com/google/gopacket/layers"
"github.com/slackhq/nebula/header"
"github.com/slackhq/nebula/iputil"
"github.com/slackhq/nebula/overlay"
"github.com/slackhq/nebula/udp"
)
@@ -62,9 +63,27 @@ func (c *Control) InjectLightHouseAddr(vpnIp net.IP, toAddr *net.UDPAddr) {
}
}
// InjectRelays will push relayVpnIps into the local lighthouse cache for the vpnIp
// This is necessary to inform an initiator of possible relays for communicating with a responder
func (c *Control) InjectRelays(vpnIp net.IP, relayVpnIps []net.IP) {
c.f.lightHouse.Lock()
remoteList := c.f.lightHouse.unlockedGetRemoteList(iputil.Ip2VpnIp(vpnIp))
remoteList.Lock()
defer remoteList.Unlock()
c.f.lightHouse.Unlock()
iVpnIp := iputil.Ip2VpnIp(vpnIp)
uVpnIp := []uint32{}
for _, rVPnIp := range relayVpnIps {
uVpnIp = append(uVpnIp, uint32(iputil.Ip2VpnIp(rVPnIp)))
}
remoteList.unlockedSetRelay(iVpnIp, iVpnIp, uVpnIp)
}
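As a usage note, a hypothetical helper in the spirit of the TestRelays flow later in this change set (the helper itself is not part of the diff) would pair InjectRelays with InjectLightHouseAddr, so the initiator knows how to reach the relay and that the peer sits behind it:
// primeRelayPath is a hypothetical sketch, written as if it lived alongside these test helpers.
func primeRelayPath(me, relay *Control, relayVpnIp, theirVpnIp net.IP, relayUdpAddr, theirUdpAddr *net.UDPAddr) {
	me.InjectLightHouseAddr(relayVpnIp, relayUdpAddr)    // how to reach the relay
	me.InjectRelays(theirVpnIp, []net.IP{relayVpnIp})    // the peer is reachable via that relay
	relay.InjectLightHouseAddr(theirVpnIp, theirUdpAddr) // the relay can reach the peer directly
}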
// GetFromTun will pull a packet off the tun side of nebula
func (c *Control) GetFromTun(block bool) []byte {
return c.f.inside.(*Tun).Get(block)
return c.f.inside.(*overlay.TestTun).Get(block)
}
// GetFromUDP will pull a udp packet off the udp side of nebula
@@ -77,7 +96,7 @@ func (c *Control) GetUDPTxChan() <-chan *udp.Packet {
}
func (c *Control) GetTunTxChan() <-chan []byte {
return c.f.inside.(*Tun).txPackets
return c.f.inside.(*overlay.TestTun).TxPackets
}
// InjectUDPPacket will inject a packet into the udp side of nebula
@@ -91,7 +110,7 @@ func (c *Control) InjectTunUDPPacket(toIp net.IP, toPort uint16, fromPort uint16
Version: 4,
TTL: 64,
Protocol: layers.IPProtocolUDP,
SrcIP: c.f.inside.CidrNet().IP,
SrcIP: c.f.inside.Cidr().IP,
DstIP: toIp,
}
@@ -114,7 +133,11 @@ func (c *Control) InjectTunUDPPacket(toIp net.IP, toPort uint16, fromPort uint16
panic(err)
}
c.f.inside.(*Tun).Send(buffer.Bytes())
c.f.inside.(*overlay.TestTun).Send(buffer.Bytes())
}
func (c *Control) GetVpnIp() iputil.VpnIp {
return c.f.myVpnIp
}
func (c *Control) GetUDPAddr() string {

View File

@@ -135,7 +135,7 @@ func getDnsServerAddr(c *config.C) string {
func startDns(l *logrus.Logger, c *config.C) {
dnsAddr = getDnsServerAddr(c)
dnsServer = &dns.Server{Addr: dnsAddr, Net: "udp"}
l.WithField("dnsListener", dnsAddr).Infof("Starting DNS responder")
l.WithField("dnsListener", dnsAddr).Info("Starting DNS responder")
err := dnsServer.ListenAndServe()
defer dnsServer.Shutdown()
if err != nil {

View File

@@ -16,10 +16,34 @@ import (
"github.com/stretchr/testify/assert"
)
func BenchmarkHotPath(b *testing.B) {
ca, _, caKey, _ := newTestCaCert(time.Now(), time.Now().Add(10*time.Minute), []*net.IPNet{}, []*net.IPNet{}, []string{})
myControl, _, _ := newSimpleServer(ca, caKey, "me", net.IP{10, 0, 0, 1}, nil)
theirControl, theirVpnIp, theirUdpAddr := newSimpleServer(ca, caKey, "them", net.IP{10, 0, 0, 2}, nil)
// Put their info in our lighthouse
myControl.InjectLightHouseAddr(theirVpnIp, theirUdpAddr)
// Start the servers
myControl.Start()
theirControl.Start()
r := router.NewR(b, myControl, theirControl)
r.CancelFlowLogs()
for n := 0; n < b.N; n++ {
myControl.InjectTunUDPPacket(theirVpnIp, 80, 80, []byte("Hi from me"))
_ = r.RouteForAllUntilTxTun(theirControl)
}
myControl.Stop()
theirControl.Stop()
}
func TestGoodHandshake(t *testing.T) {
ca, _, caKey, _ := newTestCaCert(time.Now(), time.Now().Add(10*time.Minute), []*net.IPNet{}, []*net.IPNet{}, []string{})
myControl, myVpnIp, myUdpAddr := newSimpleServer(ca, caKey, "me", net.IP{10, 0, 0, 1})
theirControl, theirVpnIp, theirUdpAddr := newSimpleServer(ca, caKey, "them", net.IP{10, 0, 0, 2})
myControl, myVpnIp, myUdpAddr := newSimpleServer(ca, caKey, "me", net.IP{10, 0, 0, 1}, nil)
theirControl, theirVpnIp, theirUdpAddr := newSimpleServer(ca, caKey, "them", net.IP{10, 0, 0, 2}, nil)
// Put their info in our lighthouse
myControl.InjectLightHouseAddr(theirVpnIp, theirUdpAddr)
@@ -57,7 +81,9 @@ func TestGoodHandshake(t *testing.T) {
assertUdpPacket(t, []byte("Hi from me"), myCachedPacket, myVpnIp, theirVpnIp, 80, 80)
t.Log("Do a bidirectional tunnel test")
assertTunnel(t, myVpnIp, theirVpnIp, myControl, theirControl, router.NewR(myControl, theirControl))
r := router.NewR(t, myControl, theirControl)
defer r.RenderFlow()
assertTunnel(t, myVpnIp, theirVpnIp, myControl, theirControl, r)
myControl.Stop()
theirControl.Stop()
@@ -70,9 +96,9 @@ func TestWrongResponderHandshake(t *testing.T) {
// The IPs here are chosen on purpose:
// The current remote handling will sort by preference, public, and then lexically.
// So we need them to have a higher address than evil (we could apply a preference though)
myControl, myVpnIp, myUdpAddr := newSimpleServer(ca, caKey, "me", net.IP{10, 0, 0, 100})
theirControl, theirVpnIp, theirUdpAddr := newSimpleServer(ca, caKey, "them", net.IP{10, 0, 0, 99})
evilControl, evilVpnIp, evilUdpAddr := newSimpleServer(ca, caKey, "evil", net.IP{10, 0, 0, 2})
myControl, myVpnIp, myUdpAddr := newSimpleServer(ca, caKey, "me", net.IP{10, 0, 0, 100}, nil)
theirControl, theirVpnIp, theirUdpAddr := newSimpleServer(ca, caKey, "them", net.IP{10, 0, 0, 99}, nil)
evilControl, evilVpnIp, evilUdpAddr := newSimpleServer(ca, caKey, "evil", net.IP{10, 0, 0, 2}, nil)
// Add their real udp addr, which should be tried after evil.
myControl.InjectLightHouseAddr(theirVpnIp, theirUdpAddr)
@@ -81,7 +107,8 @@ func TestWrongResponderHandshake(t *testing.T) {
myControl.InjectLightHouseAddr(theirVpnIp, evilUdpAddr)
// Build a router so we don't have to reason about who gets which packet
r := router.NewR(myControl, theirControl, evilControl)
r := router.NewR(t, myControl, theirControl, evilControl)
defer r.RenderFlow()
// Start the servers
myControl.Start()
@@ -130,15 +157,16 @@ func TestWrongResponderHandshake(t *testing.T) {
func Test_Case1_Stage1Race(t *testing.T) {
ca, _, caKey, _ := newTestCaCert(time.Now(), time.Now().Add(10*time.Minute), []*net.IPNet{}, []*net.IPNet{}, []string{})
myControl, myVpnIp, myUdpAddr := newSimpleServer(ca, caKey, "me ", net.IP{10, 0, 0, 1})
theirControl, theirVpnIp, theirUdpAddr := newSimpleServer(ca, caKey, "them", net.IP{10, 0, 0, 2})
myControl, myVpnIp, myUdpAddr := newSimpleServer(ca, caKey, "me ", net.IP{10, 0, 0, 1}, nil)
theirControl, theirVpnIp, theirUdpAddr := newSimpleServer(ca, caKey, "them", net.IP{10, 0, 0, 2}, nil)
// Put their info in our lighthouse and vice versa
myControl.InjectLightHouseAddr(theirVpnIp, theirUdpAddr)
theirControl.InjectLightHouseAddr(myVpnIp, myUdpAddr)
// Build a router so we don't have to reason about who gets which packet
r := router.NewR(myControl, theirControl)
r := router.NewR(t, myControl, theirControl)
defer r.RenderFlow()
// Start the servers
myControl.Start()
@@ -152,16 +180,16 @@ func Test_Case1_Stage1Race(t *testing.T) {
myHsForThem := myControl.GetFromUDP(true)
theirHsForMe := theirControl.GetFromUDP(true)
t.Log("Now inject both stage 1 handshake packets")
myControl.InjectUDPPacket(theirHsForMe)
theirControl.InjectUDPPacket(myHsForThem)
r.Log("Now inject both stage 1 handshake packets")
r.InjectUDPPacket(theirControl, myControl, theirHsForMe)
r.InjectUDPPacket(myControl, theirControl, myHsForThem)
//TODO: they should win, grab their index for me and make sure I use it in the end.
t.Log("They should not have a stage 2 (won the race) but I should send one")
theirControl.InjectUDPPacket(myControl.GetFromUDP(true))
r.Log("They should not have a stage 2 (won the race) but I should send one")
r.InjectUDPPacket(myControl, theirControl, myControl.GetFromUDP(true))
t.Log("Route for me until I send a message packet to them")
myControl.WaitForType(1, 0, theirControl)
r.Log("Route for me until I send a message packet to them")
r.RouteForAllUntilAfterMsgTypeTo(theirControl, header.Message, header.MessageNone)
t.Log("My cached packet should be received by them")
myCachedPacket := theirControl.GetFromTun(true)
@@ -182,4 +210,32 @@ func Test_Case1_Stage1Race(t *testing.T) {
//TODO: assert hostmaps
}
func TestRelays(t *testing.T) {
ca, _, caKey, _ := newTestCaCert(time.Now(), time.Now().Add(10*time.Minute), []*net.IPNet{}, []*net.IPNet{}, []string{})
myControl, myVpnIp, _ := newSimpleServer(ca, caKey, "me ", net.IP{10, 0, 0, 1}, m{"relay": m{"use_relays": true}})
relayControl, relayVpnIp, relayUdpAddr := newSimpleServer(ca, caKey, "relay ", net.IP{10, 0, 0, 128}, m{"relay": m{"am_relay": true}})
theirControl, theirVpnIp, theirUdpAddr := newSimpleServer(ca, caKey, "them ", net.IP{10, 0, 0, 2}, m{"relay": m{"use_relays": true}})
// Teach me how to get to the relay and that they can be reached via the relay
myControl.InjectLightHouseAddr(relayVpnIp, relayUdpAddr)
myControl.InjectRelays(theirVpnIp, []net.IP{relayVpnIp})
relayControl.InjectLightHouseAddr(theirVpnIp, theirUdpAddr)
// Build a router so we don't have to reason about who gets which packet
r := router.NewR(t, myControl, relayControl, theirControl)
defer r.RenderFlow()
// Start the servers
myControl.Start()
relayControl.Start()
theirControl.Start()
t.Log("Trigger a handshake from me to them via the relay")
myControl.InjectTunUDPPacket(theirVpnIp, 80, 80, []byte("Hi from me"))
p := r.RouteForAllUntilTxTun(theirControl)
assertUdpPacket(t, []byte("Hi from me"), p, myVpnIp, theirVpnIp, 80, 80)
//TODO: assert we actually used the relay even though it should be impossible for a tunnel to have occurred without it
}
//TODO: add a test with many lies

View File

@@ -7,7 +7,6 @@ import (
"crypto/rand"
"fmt"
"io"
"io/ioutil"
"net"
"os"
"testing"
@@ -15,6 +14,7 @@ import (
"github.com/google/gopacket"
"github.com/google/gopacket/layers"
"github.com/imdario/mergo"
"github.com/sirupsen/logrus"
"github.com/slackhq/nebula"
"github.com/slackhq/nebula/cert"
@@ -30,7 +30,7 @@ import (
type m map[string]interface{}
// newSimpleServer creates a nebula instance with many assumptions
func newSimpleServer(caCrt *cert.NebulaCertificate, caKey []byte, name string, udpIp net.IP) (*nebula.Control, net.IP, *net.UDPAddr) {
func newSimpleServer(caCrt *cert.NebulaCertificate, caKey []byte, name string, udpIp net.IP, overrides m) (*nebula.Control, net.IP, *net.UDPAddr) {
l := NewTestLogger()
vpnIpNet := &net.IPNet{IP: make([]byte, len(udpIp)), Mask: net.IPMask{255, 255, 255, 0}}
@@ -78,6 +78,15 @@ func newSimpleServer(caCrt *cert.NebulaCertificate, caKey []byte, name string, u
"level": l.Level.String(),
},
}
if overrides != nil {
err = mergo.Merge(&overrides, mc, mergo.WithAppendSlice)
if err != nil {
panic(err)
}
mc = overrides
}
cb, err := yaml.Marshal(mc)
if err != nil {
panic(err)
@@ -294,7 +303,8 @@ func NewTestLogger() *logrus.Logger {
v := os.Getenv("TEST_LOGS")
if v == "" {
l.SetOutput(ioutil.Discard)
l.SetOutput(io.Discard)
l.SetLevel(logrus.PanicLevel)
return l
}

View File

@@ -4,14 +4,23 @@
package router
import (
"context"
"fmt"
"net"
"os"
"path/filepath"
"reflect"
"strconv"
"strings"
"sync"
"testing"
"time"
"github.com/google/gopacket"
"github.com/google/gopacket/layers"
"github.com/slackhq/nebula"
"github.com/slackhq/nebula/header"
"github.com/slackhq/nebula/iputil"
"github.com/slackhq/nebula/udp"
)
@@ -28,38 +37,100 @@ type R struct {
// map[from address + ":" + to address] => ip:port to rewrite in the udp packet to receiver
outNat map[string]net.UDPAddr
// A map of vpn ip to the nebula control it belongs to
vpnControls map[iputil.VpnIp]*nebula.Control
flow []flowEntry
// All interactions are locked to help serialize behavior
sync.Mutex
fn string
cancelRender context.CancelFunc
t testing.TB
}
type flowEntry struct {
note string
packet *packet
}
type packet struct {
from *nebula.Control
to *nebula.Control
packet *udp.Packet
tun bool // a packet pulled off a tun device
rx bool // the packet was received by a udp device
}
func (p *packet) WasReceived() {
if p != nil {
p.rx = true
}
}
type ExitType int
const (
// Keeps routing, the function will get called again on the next packet
// KeepRouting the function will get called again on the next packet
KeepRouting ExitType = 0
// Does not route this packet and exits immediately
// ExitNow does not route this packet and exits immediately
ExitNow ExitType = 1
// Routes this packet and exits immediately afterwards
// RouteAndExit routes this packet and exits immediately afterwards
RouteAndExit ExitType = 2
)
type ExitFunc func(packet *udp.Packet, receiver *nebula.Control) ExitType
func NewR(controls ...*nebula.Control) *R {
r := &R{
controls: make(map[string]*nebula.Control),
inNat: make(map[string]*nebula.Control),
outNat: make(map[string]net.UDPAddr),
// NewR creates a new router to pass packets in a controlled fashion between the provided controllers.
// The packet flow will be recorded in a file within the mermaid directory under the same name as the test.
// Renders will occur automatically, roughly every 100ms, until a call to RenderFlow() is made
func NewR(t testing.TB, controls ...*nebula.Control) *R {
ctx, cancel := context.WithCancel(context.Background())
if err := os.MkdirAll("mermaid", 0755); err != nil {
panic(err)
}
r := &R{
controls: make(map[string]*nebula.Control),
vpnControls: make(map[iputil.VpnIp]*nebula.Control),
inNat: make(map[string]*nebula.Control),
outNat: make(map[string]net.UDPAddr),
flow: []flowEntry{},
fn: filepath.Join("mermaid", fmt.Sprintf("%s.md", t.Name())),
t: t,
cancelRender: cancel,
}
// Try to remove our render file
os.Remove(r.fn)
for _, c := range controls {
addr := c.GetUDPAddr()
if _, ok := r.controls[addr]; ok {
panic("Duplicate listen address: " + addr)
}
r.vpnControls[c.GetVpnIp()] = c
r.controls[addr] = c
}
// Spin the renderer in case we go nuts and the test never completes
go func() {
clockSource := time.NewTicker(time.Millisecond * 100)
defer clockSource.Stop()
for {
select {
case <-ctx.Done():
return
case <-clockSource.C:
r.renderFlow()
}
}
}()
return r
}
@@ -78,6 +149,136 @@ func (r *R) AddRoute(ip net.IP, port uint16, c *nebula.Control) {
r.inNat[inAddr] = c
}
// RenderFlow renders the packet flow seen up until now and stops further automatic renders from happening.
func (r *R) RenderFlow() {
r.cancelRender()
r.renderFlow()
}
// CancelFlowLogs stops flow logs from being tracked and destroys any logs already collected
func (r *R) CancelFlowLogs() {
r.cancelRender()
r.flow = nil
}
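For orientation, here is a minimal, hypothetical sketch of how a test drives this router, modeled on the e2e tests elsewhere in this change set; the package name, import path, and the way the controls, addresses, and VPN IP are obtained are assumptions rather than part of the diff.
package e2e
import (
	"net"
	"testing"
	"github.com/slackhq/nebula"
	"github.com/slackhq/nebula/e2e/router" // assumed import path for this package
)
// sketchFlowRender is a hypothetical helper: build the router with the test so flow logs land
// in mermaid/<test name>.md, route a single packet, then render the diagram on the way out.
// A benchmark would call r.CancelFlowLogs() instead of deferring RenderFlow.
func sketchFlowRender(t *testing.T, me, them *nebula.Control, theirVpnIp net.IP, theirUdpAddr *net.UDPAddr) {
	r := router.NewR(t, me, them)
	defer r.RenderFlow()

	// Seed the lighthouse cache so the handshake can find the peer, then start both sides
	me.InjectLightHouseAddr(theirVpnIp, theirUdpAddr)
	me.Start()
	them.Start()

	me.InjectTunUDPPacket(theirVpnIp, 80, 80, []byte("hello"))
	_ = r.RouteForAllUntilTxTun(them)

	me.Stop()
	them.Stop()
}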
func (r *R) renderFlow() {
if r.flow == nil {
return
}
f, err := os.OpenFile(r.fn, os.O_CREATE|os.O_TRUNC|os.O_RDWR, 0644)
if err != nil {
panic(err)
}
var participants = map[string]struct{}{}
var participantsVals []string
fmt.Fprintln(f, "```mermaid")
fmt.Fprintln(f, "sequenceDiagram")
// Assemble participants
for _, e := range r.flow {
if e.packet == nil {
continue
}
addr := e.packet.from.GetUDPAddr()
if _, ok := participants[addr]; ok {
continue
}
participants[addr] = struct{}{}
sanAddr := strings.Replace(addr, ":", "#58;", 1)
participantsVals = append(participantsVals, sanAddr)
fmt.Fprintf(
f, " participant %s as Nebula: %s<br/>UDP: %s\n",
sanAddr, e.packet.from.GetVpnIp(), sanAddr,
)
}
// Print packets
h := &header.H{}
for _, e := range r.flow {
if e.packet == nil {
fmt.Fprintf(f, " note over %s: %s\n", strings.Join(participantsVals, ", "), e.note)
continue
}
p := e.packet
if p.tun {
fmt.Fprintln(f, r.formatUdpPacket(p))
} else {
if err := h.Parse(p.packet.Data); err != nil {
panic(err)
}
line := "--x"
if p.rx {
line = "->>"
}
fmt.Fprintf(f,
" %s%s%s: %s(%s), counter: %v\n",
strings.Replace(p.from.GetUDPAddr(), ":", "#58;", 1),
line,
strings.Replace(p.to.GetUDPAddr(), ":", "#58;", 1),
h.TypeName(), h.SubTypeName(), h.MessageCounter,
)
}
}
fmt.Fprintln(f, "```")
}
// InjectFlow can be used to record packet flow if the test is handling the routing on its own.
// The packet is assumed to have been received
func (r *R) InjectFlow(from, to *nebula.Control, p *udp.Packet) {
r.Lock()
defer r.Unlock()
r.unlockedInjectFlow(from, to, p, false)
}
func (r *R) Log(arg ...any) {
if r.flow == nil {
return
}
r.Lock()
r.flow = append(r.flow, flowEntry{note: fmt.Sprint(arg...)})
r.t.Log(arg...)
r.Unlock()
}
func (r *R) Logf(format string, arg ...any) {
if r.flow == nil {
return
}
r.Lock()
r.flow = append(r.flow, flowEntry{note: fmt.Sprintf(format, arg...)})
r.t.Logf(format, arg...)
r.Unlock()
}
// unlockedInjectFlow is used by the router to record that a packet has been transmitted; the packet is returned and
// should be marked as received AFTER it has been placed on the receiver's channel.
// If flow logs have been disabled this function will return nil
func (r *R) unlockedInjectFlow(from, to *nebula.Control, p *udp.Packet, tun bool) *packet {
if r.flow == nil {
return nil
}
fp := &packet{
from: from,
to: to,
packet: p.Copy(),
tun: tun,
}
r.flow = append(r.flow, flowEntry{packet: fp})
return fp
}
// OnceFrom will route a single packet from sender then return
// If the router doesn't have the nebula controller for that address, we panic
func (r *R) OnceFrom(sender *nebula.Control) {
@@ -96,6 +297,11 @@ func (r *R) RouteUntilTxTun(sender *nebula.Control, receiver *nebula.Control) []
select {
// Maybe we already have something on the tun for us
case b := <-tunTx:
r.Lock()
np := udp.Packet{Data: make([]byte, len(b))}
copy(np.Data, b)
r.unlockedInjectFlow(receiver, receiver, &np, true)
r.Unlock()
return b
// Nope, lets push the sender along
@@ -108,13 +314,73 @@ func (r *R) RouteUntilTxTun(sender *nebula.Control, receiver *nebula.Control) []
r.Unlock()
panic("No control for udp tx")
}
fp := r.unlockedInjectFlow(sender, c, p, false)
c.InjectUDPPacket(p)
fp.WasReceived()
r.Unlock()
}
}
}
// RouteForAllUntilTxTun will route for everyone and return when a packet is seen on the receiver's tun
// If the router doesn't have the nebula controller for that address, we panic
func (r *R) RouteForAllUntilTxTun(receiver *nebula.Control) []byte {
sc := make([]reflect.SelectCase, len(r.controls)+1)
cm := make([]*nebula.Control, len(r.controls)+1)
i := 0
sc[i] = reflect.SelectCase{
Dir: reflect.SelectRecv,
Chan: reflect.ValueOf(receiver.GetTunTxChan()),
Send: reflect.Value{},
}
cm[i] = receiver
i++
for _, c := range r.controls {
sc[i] = reflect.SelectCase{
Dir: reflect.SelectRecv,
Chan: reflect.ValueOf(c.GetUDPTxChan()),
Send: reflect.Value{},
}
cm[i] = c
i++
}
for {
x, rx, _ := reflect.Select(sc)
r.Lock()
if x == 0 {
// we are the tun tx, we can exit
p := rx.Interface().([]byte)
np := udp.Packet{Data: make([]byte, len(p))}
copy(np.Data, p)
r.unlockedInjectFlow(cm[x], cm[x], &np, true)
r.Unlock()
return p
} else {
// we are a udp tx, route and continue
p := rx.Interface().(*udp.Packet)
outAddr := cm[x].GetUDPAddr()
inAddr := net.JoinHostPort(p.ToIp.String(), fmt.Sprintf("%v", p.ToPort))
c := r.getControl(outAddr, inAddr, p)
if c == nil {
r.Unlock()
panic("No control for udp tx")
}
fp := r.unlockedInjectFlow(cm[x], c, p, false)
c.InjectUDPPacket(p)
fp.WasReceived()
}
r.Unlock()
}
}
// RouteExitFunc will call the whatDo func with each udp packet from sender.
// whatDo can return:
// - exitNow: the packet will not be routed and this call will return immediately
@@ -144,12 +410,16 @@ func (r *R) RouteExitFunc(sender *nebula.Control, whatDo ExitFunc) {
return
case RouteAndExit:
fp := r.unlockedInjectFlow(sender, receiver, p, false)
receiver.InjectUDPPacket(p)
fp.WasReceived()
r.Unlock()
return
case KeepRouting:
fp := r.unlockedInjectFlow(sender, receiver, p, false)
receiver.InjectUDPPacket(p)
fp.WasReceived()
default:
panic(fmt.Sprintf("Unknown exitFunc return: %v", e))
@@ -175,6 +445,34 @@ func (r *R) RouteUntilAfterMsgType(sender *nebula.Control, msgType header.Messag
})
}
func (r *R) RouteForAllUntilAfterMsgTypeTo(receiver *nebula.Control, msgType header.MessageType, subType header.MessageSubType) {
h := &header.H{}
r.RouteForAllExitFunc(func(p *udp.Packet, r *nebula.Control) ExitType {
if r != receiver {
return KeepRouting
}
if err := h.Parse(p.Data); err != nil {
panic(err)
}
if h.Type == msgType && h.Subtype == subType {
return RouteAndExit
}
return KeepRouting
})
}
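A test can also supply its own ExitFunc directly. The sketch below is a hypothetical helper (not part of this change) in the style of RouteForAllUntilAfterMsgTypeTo above: keep routing until the first handshake packet is delivered to the given receiver, then route it and stop.
// RouteForAllUntilHandshakeTo is a hypothetical helper, written as if it lived in this package.
func (r *R) RouteForAllUntilHandshakeTo(receiver *nebula.Control) {
	h := &header.H{}
	r.RouteForAllExitFunc(func(p *udp.Packet, rc *nebula.Control) ExitType {
		if rc != receiver {
			return KeepRouting
		}
		if err := h.Parse(p.Data); err != nil {
			panic(err)
		}
		if h.Type == header.Handshake {
			return RouteAndExit // route this handshake packet, then stop
		}
		return KeepRouting
	})
}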
func (r *R) InjectUDPPacket(sender, receiver *nebula.Control, packet *udp.Packet) {
r.Lock()
defer r.Unlock()
fp := r.unlockedInjectFlow(sender, receiver, packet, false)
receiver.InjectUDPPacket(packet)
fp.WasReceived()
}
// RouteForUntilAfterToAddr will route for sender and return only after it sees and sends a packet destined for toAddr
// finish can be any of the exitType values except `keepRouting`; the default value is `routeAndExit`
// If the router doesn't have the nebula controller for that address, we panic
@@ -234,12 +532,16 @@ func (r *R) RouteForAllExitFunc(whatDo ExitFunc) {
return
case RouteAndExit:
fp := r.unlockedInjectFlow(cm[x], receiver, p, false)
receiver.InjectUDPPacket(p)
fp.WasReceived()
r.Unlock()
return
case KeepRouting:
fp := r.unlockedInjectFlow(cm[x], receiver, p, false)
receiver.InjectUDPPacket(p)
fp.WasReceived()
default:
panic(fmt.Sprintf("Unknown exitFunc return: %v", e))
@@ -321,3 +623,31 @@ func (r *R) getControl(fromAddr, toAddr string, p *udp.Packet) *nebula.Control {
return r.controls[toAddr]
}
func (r *R) formatUdpPacket(p *packet) string {
packet := gopacket.NewPacket(p.packet.Data, layers.LayerTypeIPv4, gopacket.Lazy)
v4 := packet.Layer(layers.LayerTypeIPv4).(*layers.IPv4)
if v4 == nil {
panic("not an ipv4 packet")
}
from := "unknown"
if c, ok := r.vpnControls[iputil.Ip2VpnIp(v4.SrcIP)]; ok {
from = c.GetUDPAddr()
}
udp := packet.Layer(layers.LayerTypeUDP).(*layers.UDP)
if udp == nil {
panic("not a udp packet")
}
data := packet.ApplicationLayer()
return fmt.Sprintf(
" %s-->>%s: src port: %v<br/>dest port: %v<br/>data: \"%v\"\n",
strings.Replace(from, ":", "#58;", 1),
strings.Replace(p.to.GetUDPAddr(), ":", "#58;", 1),
udp.SrcPort,
udp.DstPort,
string(data.Payload()),
)
}

View File

@@ -81,6 +81,15 @@ lighthouse:
# Example to only advertise this subnet to the lighthouse.
#"10.0.0.0/8": true
# advertise_addrs are routable addresses that will be included along with discovered addresses to report to the
# lighthouse, the format is "ip:port". `port` can be `0`, in which case the actual listening port will be used in its
# place, useful if `listen.port` is set to 0.
# This option is mainly useful when the host has static IP addresses it can be reached at that Nebula cannot
# typically discover on its own, for example with port forwarding or multiple paths to the internet.
#advertise_addrs:
#- "1.1.1.1:4242"
#- "1.2.3.4:0" # port will be replaced with the real listening port
# Port Nebula will be listening on. The default here is 4242. For a lighthouse node, the port should be defined,
# however using port 0 will dynamically assign a port and is recommended for roaming nodes.
listen:
@@ -96,14 +105,18 @@ listen:
# max, net.core.rmem_max and net.core.wmem_max
#read_buffer: 10485760
#write_buffer: 10485760
# By default, Nebula replies to packets it has no tunnel for with a "recv_error" packet. This packet helps speed up reconnection
# in the case that Nebula on either side did not shut down cleanly. This response can be abused as a way to discover if Nebula is running
# on a host though. This option lets you configure if you want to send "recv_error" packets always, never, or only to private network remotes.
# valid values: always, never, private
# This setting is reloadable.
#send_recv_error: always
# EXPERIMENTAL: This option is currently only supported on linux and may
# change in future minor releases.
#
# Routines is the number of thread pairs to run that consume from the tun and UDP queues.
# Currently, this defaults to 1 which means we have 1 tun queue reader and 1
# UDP queue reader. Setting this above one will set IFF_MULTI_QUEUE on the tun
# device and SO_REUSEPORT on the UDP socket to allow multiple queues.
# This option is only supported on Linux.
#routines: 1
punchy:
@@ -144,6 +157,20 @@ punchy:
#keys:
#- "ssh public key string"
# EXPERIMENTAL: relay support for networks that can't establish direct connections.
relay:
# Relays are a list of Nebula IPs that peers can use to relay packets to me.
# IPs in this list must have am_relay set to true in their configs, otherwise
# they will reject relay requests.
#relays:
#- 192.168.100.1
#- <other Nebula VPN IPs of hosts used as relays to access me>
# Set am_relay to true to permit other hosts to list my IP in their relays config. Default false.
am_relay: false
# Set use_relays to false to prevent this instance from attempting to establish connections through relays.
# default true
use_relays: true
# Configure the private interface. Note: addr is baked into the nebula certificate
tun:
# When tun is disabled, a lighthouse can be started without a local tun interface (and therefore without root)
@@ -235,7 +262,6 @@ firewall:
tcp_timeout: 12m
udp_timeout: 3m
default_timeout: 10m
max_connections: 100000
# The firewall is default deny. There is no way to write a deny rule.
# Rules are comprised of a protocol, port, and one or more of host, group, or CIDR

View File

@@ -119,7 +119,6 @@ firewall:
tcp_timeout: 12m
udp_timeout: 3m
default_timeout: 10m
max_connections: 100,000
inbound:
- proto: icmp

View File

@@ -65,7 +65,6 @@ firewall:
tcp_timeout: 12m
udp_timeout: 3m
default_timeout: 10m
max_connections: 100,000
inbound:
- proto: icmp

View File

@@ -14,12 +14,12 @@ import (
"github.com/slackhq/nebula/config"
"github.com/slackhq/nebula/firewall"
"github.com/slackhq/nebula/iputil"
"github.com/slackhq/nebula/util"
"github.com/slackhq/nebula/test"
"github.com/stretchr/testify/assert"
)
func TestNewFirewall(t *testing.T) {
l := util.NewTestLogger()
l := test.NewLogger()
c := &cert.NebulaCertificate{}
fw := NewFirewall(l, time.Second, time.Minute, time.Hour, c)
conntrack := fw.Conntrack
@@ -58,7 +58,7 @@ func TestNewFirewall(t *testing.T) {
}
func TestFirewall_AddRule(t *testing.T) {
l := util.NewTestLogger()
l := test.NewLogger()
ob := &bytes.Buffer{}
l.SetOutput(ob)
@@ -133,7 +133,7 @@ func TestFirewall_AddRule(t *testing.T) {
}
func TestFirewall_Drop(t *testing.T) {
l := util.NewTestLogger()
l := test.NewLogger()
ob := &bytes.Buffer{}
l.SetOutput(ob)
@@ -308,7 +308,7 @@ func BenchmarkFirewallTable_match(b *testing.B) {
}
func TestFirewall_Drop2(t *testing.T) {
l := util.NewTestLogger()
l := test.NewLogger()
ob := &bytes.Buffer{}
l.SetOutput(ob)
@@ -367,7 +367,7 @@ func TestFirewall_Drop2(t *testing.T) {
}
func TestFirewall_Drop3(t *testing.T) {
l := util.NewTestLogger()
l := test.NewLogger()
ob := &bytes.Buffer{}
l.SetOutput(ob)
@@ -453,7 +453,7 @@ func TestFirewall_Drop3(t *testing.T) {
}
func TestFirewall_DropConntrackReload(t *testing.T) {
l := util.NewTestLogger()
l := test.NewLogger()
ob := &bytes.Buffer{}
l.SetOutput(ob)
@@ -635,7 +635,7 @@ func Test_parsePort(t *testing.T) {
}
func TestNewFirewallFromConfig(t *testing.T) {
l := util.NewTestLogger()
l := test.NewLogger()
// Test a bad rule definition
c := &cert.NebulaCertificate{}
conf := config.NewC(l)
@@ -685,7 +685,7 @@ func TestNewFirewallFromConfig(t *testing.T) {
}
func TestAddFirewallRulesFromConfig(t *testing.T) {
l := util.NewTestLogger()
l := test.NewLogger()
// Test adding tcp rule
conf := config.NewC(l)
mf := &mockFirewall{}
@@ -849,7 +849,7 @@ func TestTCPRTTTracking(t *testing.T) {
}
func TestFirewall_convertRule(t *testing.T) {
l := util.NewTestLogger()
l := test.NewLogger()
ob := &bytes.Buffer{}
l.SetOutput(ob)

37
go.mod
View File

@@ -1,45 +1,48 @@
module github.com/slackhq/nebula
go 1.17
go 1.18
require (
github.com/anmitsu/go-shlex v0.0.0-20200514113438-38f4b401e2be
github.com/armon/go-radix v1.0.0
github.com/cespare/xxhash/v2 v2.1.2 // indirect
github.com/cyberdelia/go-metrics-graphite v0.0.0-20161219230853-39f87cc3b432
github.com/flynn/noise v1.0.0
github.com/gogo/protobuf v1.3.2
github.com/golang/protobuf v1.5.2
github.com/google/gopacket v1.1.19
github.com/imdario/mergo v0.3.8
github.com/kardianos/service v1.2.0
github.com/miekg/dns v1.1.43
github.com/kardianos/service v1.2.1
github.com/miekg/dns v1.1.48
github.com/nbrownus/go-metrics-prometheus v0.0.0-20210712211119-974a6260965f
github.com/prometheus/client_golang v1.11.0
github.com/prometheus/client_model v0.2.0 // indirect
github.com/prometheus/procfs v0.7.3 // indirect
github.com/prometheus/client_golang v1.12.1
github.com/rcrowley/go-metrics v0.0.0-20201227073835-cf1acfcdf475
github.com/sirupsen/logrus v1.8.1
github.com/skip2/go-qrcode v0.0.0-20200617195104-da1b6568686e
github.com/songgao/water v0.0.0-20200317203138-2b4b6d7c09d8
github.com/stretchr/testify v1.7.0
github.com/stretchr/testify v1.7.1
github.com/vishvananda/netlink v1.1.0
github.com/vishvananda/netns v0.0.0-20211101163701-50045581ed74 // indirect
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519
golang.org/x/net v0.0.0-20211101193420-4a448f8816b3
golang.org/x/sys v0.0.0-20211103235746-7861aae1554b
golang.org/x/crypto v0.0.0-20220331220935-ae2d96664a29
golang.org/x/net v0.0.0-20220403103023-749bd193bc2b
golang.org/x/sys v0.0.0-20220406155245-289d7a0edf71
golang.zx2c4.com/wintun v0.0.0-20211104114900-415007cec224
golang.zx2c4.com/wireguard/windows v0.5.1
google.golang.org/protobuf v1.27.1
golang.zx2c4.com/wireguard/windows v0.5.3
google.golang.org/protobuf v1.28.0
gopkg.in/yaml.v2 v2.4.0
)
require (
github.com/beorn7/perks v1.0.1 // indirect
github.com/cespare/xxhash/v2 v2.1.2 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/golang/protobuf v1.5.2 // indirect
github.com/matttproud/golang_protobuf_extensions v1.0.1 // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/prometheus/common v0.32.1 // indirect
github.com/prometheus/client_model v0.2.0 // indirect
github.com/prometheus/common v0.33.0 // indirect
github.com/prometheus/procfs v0.7.3 // indirect
github.com/vishvananda/netns v0.0.0-20211101163701-50045581ed74 // indirect
golang.org/x/mod v0.6.0-dev.0.20220106191415-9b9b3d81d5e3 // indirect
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211 // indirect
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c // indirect
golang.org/x/tools v0.1.10 // indirect
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 // indirect
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b // indirect
)

70
go.sum
View File

@@ -72,9 +72,11 @@ github.com/go-gl/glfw/v3.3/glfw v0.0.0-20200222043503-6f7a984d4dc4/go.mod h1:tQ2
github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-kit/kit v0.9.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-kit/log v0.1.0/go.mod h1:zbhenjAZHb184qTLMA9ZjW7ThYL0H2mk7Q6pNt4vbaY=
github.com/go-kit/log v0.2.0/go.mod h1:NwTd00d/i8cPZ3xOwwiv2PO5MOcx78fFErGNcVmBjv0=
github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk=
github.com/go-logfmt/logfmt v0.5.0/go.mod h1:wCYkCAKZfumFQihp8CzCvQ3paCTfi41vtzG1KdI/P7A=
github.com/go-logfmt/logfmt v0.5.1/go.mod h1:WYhtIu8zTZfxdn5+rREduYbwxfcBr/Vr6KEVveWlfTs=
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
@@ -143,12 +145,13 @@ github.com/jpillora/backoff v1.0.0/go.mod h1:J/6gKK9jxlEcS3zixgDgUAsiuZ7yrSoa/FX
github.com/json-iterator/go v1.1.6/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU=
github.com/json-iterator/go v1.1.10/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/json-iterator/go v1.1.11/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo=
github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024/go.mod h1:6v2b51hI/fHJwM22ozAgKL4VKDeJcHhJFhtBdhmNjmU=
github.com/jstemmer/go-junit-report v0.9.1/go.mod h1:Brl9GWCQeLvo8nXZwPNNblvFj/XSXhF0NWZEnDohbsk=
github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
github.com/julienschmidt/httprouter v1.3.0/go.mod h1:JR6WtHb+2LUe8TCKY3cZOxFyyO8IZAc4RVcycCCAKdM=
github.com/kardianos/service v1.2.0 h1:bGuZ/epo3vrt8IPC7mnKQolqFeYJb7Cs8Rk4PSOBB/g=
github.com/kardianos/service v1.2.0/go.mod h1:CIMRFEJVL+0DS1a3Nx06NaMn4Dz63Ng6O7dl0qH0zVM=
github.com/kardianos/service v1.2.1 h1:AYndMsehS+ywIS6RB9KOlcXzteWUzxgMgBymJD7+BYk=
github.com/kardianos/service v1.2.1/go.mod h1:CIMRFEJVL+0DS1a3Nx06NaMn4Dz63Ng6O7dl0qH0zVM=
github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
@@ -164,12 +167,13 @@ github.com/lxn/walk v0.0.0-20210112085537-c389da54e794/go.mod h1:E23UucZGqpuUANJ
github.com/lxn/win v0.0.0-20210218163916-a377121e959e/go.mod h1:KxxjdtRkfNoYDCUP5ryK7XJJNTnpC8atvtmTheChOtk=
github.com/matttproud/golang_protobuf_extensions v1.0.1 h1:4hp9jkHxhMHkqkrB3Ix0jegS5sx/RkqARlsWZ6pIwiU=
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
github.com/miekg/dns v1.1.43 h1:JKfpVSCB84vrAmHzyrsxB5NAr5kLoMXZArPSw7Qlgyg=
github.com/miekg/dns v1.1.43/go.mod h1:+evo5L0630/F6ca/Z9+GAqzhjGyn8/c+TBaOyfEl0V4=
github.com/miekg/dns v1.1.48 h1:Ucfr7IIVyMBz4lRE8qmGUuZ4Wt3/ZGu9hmcMT3Uu4tQ=
github.com/miekg/dns v1.1.48/go.mod h1:e3IlAVfNqAllflbibAZEWOXOQ+Ynzk/dDozDxY7XnME=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk=
github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/mwitkow/go-conntrack v0.0.0-20190716064945-2f068394615f/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/nbrownus/go-metrics-prometheus v0.0.0-20210712211119-974a6260965f h1:8dM0ilqKL0Uzl42GABzzC4Oqlc3kGRILz0vgoff7nwg=
@@ -182,8 +186,9 @@ github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZN
github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_golang v1.0.0/go.mod h1:db9x61etRT2tGnBNRi70OPL5FsnadC4Ky3P0J6CfImo=
github.com/prometheus/client_golang v1.7.1/go.mod h1:PY5Wy2awLA44sXw4AOSfFBetzPP4j5+D6mVACh+pe2M=
github.com/prometheus/client_golang v1.11.0 h1:HNkLOAEQMIDv/K+04rukrLx6ch7msSRwf3/SASFAGtQ=
github.com/prometheus/client_golang v1.11.0/go.mod h1:Z6t4BnS23TR94PD6BsDNk8yVqroYurpAkEiz0P2BEV0=
github.com/prometheus/client_golang v1.12.1 h1:ZiaPsmm9uiBeaSMRznKsCDNtPCS0T3JVDGF+06gjBzk=
github.com/prometheus/client_golang v1.12.1/go.mod h1:3Z9XVyYiZYEO+YQWt3RD2R3jrbd179Rt297l4aS6nDY=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
@@ -192,8 +197,9 @@ github.com/prometheus/client_model v0.2.0/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6T
github.com/prometheus/common v0.4.1/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/common v0.10.0/go.mod h1:Tlit/dnDKsSWFlCLTWaA1cyBgKHSMdTB80sz/V91rCo=
github.com/prometheus/common v0.26.0/go.mod h1:M7rCNAaPfAosfx8veZJCuw84e35h3Cfd9VFqTh1DIvc=
github.com/prometheus/common v0.32.1 h1:hWIdL3N2HoUx3B8j3YN9mWor0qhY/NlEKZEaXxuIRh4=
github.com/prometheus/common v0.32.1/go.mod h1:vu+V0TpY+O6vW9J44gczi3Ap/oXXR10b+M/gUGO4Hls=
github.com/prometheus/common v0.33.0 h1:rHgav/0a6+uYgGdNt3jwz8FNSesO/Hsang3O0T9A5SE=
github.com/prometheus/common v0.33.0/go.mod h1:gB3sOl7P0TvJabZpLY5uQMpUqRCPPCyRLCZYc7JZTNE=
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
github.com/prometheus/procfs v0.1.3/go.mod h1:lV6e/gmhEcM9IjHGsFOCxxuZ+z1YqCvr4OA4YeYWdaU=
@@ -217,8 +223,9 @@ github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
github.com/stretchr/testify v1.7.0 h1:nwc3DEeHmmLAfoZucVR881uASk0Mfjw8xYJ99tb5CcY=
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.1 h1:5TQK59W5E3v0r2duFAb7P95B6hEeOyEnHRa8MjYSMTY=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/vishvananda/netlink v1.1.0 h1:1iyaYNBLmP6L0220aDnYQpo1QEV4t4hJ+xEEhhJH8j0=
github.com/vishvananda/netlink v1.1.0/go.mod h1:cTgwzPIzzgDAYoQrMm0EdrjRUBkTqKYppBueQtXaqoE=
github.com/vishvananda/netns v0.0.0-20191106174202-0a2b9b5464df/go.mod h1:JP3t17pCcGlemwknint6hfoeCVQrEMVwxRLRjXpq+BU=
@@ -228,7 +235,9 @@ github.com/yuin/goldmark v1.1.25/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9de
github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.1.32/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.3.5/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k=
github.com/yuin/goldmark v1.4.0/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k=
github.com/yuin/goldmark v1.4.1/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k=
go.opencensus.io v0.21.0/go.mod h1:mSImk1erAIZhrmZN+AvHh14ztQfjbGwt4TtuofqLduU=
go.opencensus.io v0.22.0/go.mod h1:+kGneAE2xo2IficOXnaByMWTGM9T73dGwxeWcUqIpI8=
go.opencensus.io v0.22.2/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
@@ -241,8 +250,10 @@ golang.org/x/crypto v0.0.0-20190605123033-f99c8df09eb5/go.mod h1:yigFU9vqHzYiE8U
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20210322153248-0c34fe9e7dc2/go.mod h1:T9bdIzuCu7OtxOm1hfPfRQxPLYneinmdGuTeoZ9dtd4=
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519 h1:7I4JAnoQBe7ZtJcBaYHi5UtiO8tQHbUSXxL+pnGRANg=
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.0.0-20211215153901-e495a2d5b3d3/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
golang.org/x/crypto v0.0.0-20220331220935-ae2d96664a29 h1:tkVvjkPTB7pnW3jnid7kNyAMPVWllTNOf/qKDze4p9o=
golang.org/x/crypto v0.0.0-20220331220935-ae2d96664a29/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=
@@ -274,6 +285,8 @@ golang.org/x/mod v0.1.1-0.20191107180719-034126e5016b/go.mod h1:QqPTAvyqsEbceGzB
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.4.2/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.6.0-dev.0.20220106191415-9b9b3d81d5e3 h1:kQgndtyPBW/JIYERgdxfwMYh3AVStj88WQTlNDi2a+o=
golang.org/x/mod v0.6.0-dev.0.20220106191415-9b9b3d81d5e3/go.mod h1:3p9vT2HGsQu2K1YbXdKPJLVgG5VJdoTa1poYQBtP1AY=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
@@ -304,17 +317,24 @@ golang.org/x/net v0.0.0-20200707034311-ab3426394381/go.mod h1:/O7V0waA8r7cgGh81R
golang.org/x/net v0.0.0-20200822124328-c89045814202/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM=
golang.org/x/net v0.0.0-20210525063256-abc453219eb5/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20210726213435-c6fcb2dbf985/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20210805182204-aaa1db679c0d/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20211020060615-d418f374d309/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20211101193420-4a448f8816b3 h1:VrJZAjbekhoRn7n5FBujY31gboH+iB3pdLxn3gE9FjU=
golang.org/x/net v0.0.0-20211101193420-4a448f8816b3/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20211015210444-4f30a5c0130f/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20211215060638-4ddde0e984e9/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20220127200216-cd36cc0744dd/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
golang.org/x/net v0.0.0-20220225172249-27dd8689420f/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
golang.org/x/net v0.0.0-20220403103023-749bd193bc2b h1:vI32FkLJNAWtGD4BwkThwEy6XS7ZLLMHkSkYfF8M0W0=
golang.org/x/net v0.0.0-20220403103023-749bd193bc2b/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20191202225959-858c2ad4c8b6/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20200107190931-bf48bf16ab8d/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20210514164344-f6687ab2804c/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20220223155221-ee480838109b/go.mod h1:DAh4E804XQdzx2j+YRIaUnCqCV2RuMz24cGBJ5QYIrc=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
@@ -366,14 +386,19 @@ golang.org/x/sys v0.0.0-20201015000850-e3ed0017c211/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20201018230417-eeed37f84f13/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210303074136-134d130e1a04/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210603081109-ebe580a85c40/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210630005230-0f9fa26af87c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210809222454-d867a43fc93e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211025201205-69cdffdb9359/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211103235746-7861aae1554b h1:1VkfZQv42XQlA/jchYumAnv1UPo6RgF9rJFkTgZIxO4=
golang.org/x/sys v0.0.0-20211019181941-9d821ace8654/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211103235746-7861aae1554b/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211216021012-1d35b9e2eb4e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220114195835-da31bd327af9/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220406155245-289d7a0edf71 h1:PRD0hj6tTuUnCFD08vkvjkYFbQg/9lV8KIxe1y4/cvU=
golang.org/x/sys v0.0.0-20220406155245-289d7a0edf71/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211 h1:JGgROgKl9N8DuW20oFS5gxc+lE67/N3FcwmBPMe7ArY=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
@@ -383,7 +408,8 @@ golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.8-0.20211004125949-5bd84dd9b33b/go.mod h1:EFNZuWvGYxIRUEX+K8UmCFwYmZjqcrnq15ZuVldZkZ0=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/text v0.3.8-0.20211105212822-18b340fc7af2/go.mod h1:EFNZuWvGYxIRUEX+K8UmCFwYmZjqcrnq15ZuVldZkZ0=
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
@@ -429,7 +455,10 @@ golang.org/x/tools v0.0.0-20200729194436-6467de6f59a7/go.mod h1:njjCfa9FT2d7l9Bc
golang.org/x/tools v0.0.0-20200804011535-6c149bb5ef0d/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA=
golang.org/x/tools v0.0.0-20200825202427-b303f430e36d/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA=
golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.1.6-0.20210726203631-07bc1bf47fb2/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/tools v0.1.7/go.mod h1:LGqMHiF4EqQNHR1JncWGqT5BVaXmza+X+BDGol+dOxo=
golang.org/x/tools v0.1.10 h1:QjFRCZxdOhBJ/UNgnBZLbNV13DlbnK0quyivTnXJM20=
golang.org/x/tools v0.1.10/go.mod h1:Uh6Zz+xoGYZom868N8YTex3t7RhtHDBrE8Gzo9bV56E=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
@@ -437,8 +466,8 @@ golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 h1:go1bK/D/BFZV2I8cIQd1N
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.zx2c4.com/wintun v0.0.0-20211104114900-415007cec224 h1:Ug9qvr1myri/zFN6xL17LSCBGFDnphBBhzmILHsM5TY=
golang.zx2c4.com/wintun v0.0.0-20211104114900-415007cec224/go.mod h1:deeaetjYA+DHMHg+sMSMI58GrEteJUUzzw7en6TJQcI=
golang.zx2c4.com/wireguard/windows v0.5.1 h1:OnYw96PF+CsIMrqWo5QP3Q59q5hY1rFErk/yN3cS+JQ=
golang.zx2c4.com/wireguard/windows v0.5.1/go.mod h1:EApyTk/ZNrkbZjurHL1nleDYnsPpJYBO7LZEBCyDAHk=
golang.zx2c4.com/wireguard/windows v0.5.3 h1:On6j2Rpn3OEMXqBq00QEDC7bWSZrPIHKIus8eIuExIE=
golang.zx2c4.com/wireguard/windows v0.5.3/go.mod h1:9TEe8TJmtwyQebdFwAkEWOPr3prrtqm+REGFifP60hI=
google.golang.org/api v0.4.0/go.mod h1:8k5glujaEP+g9n7WNsDg8QP6cUVNI86fCNMcbazEtwE=
google.golang.org/api v0.7.0/go.mod h1:WtwebWUNSVBH/HAw79HIFXZNqEvBhG+Ra+ax0hx3E3M=
google.golang.org/api v0.8.0/go.mod h1:o4eAsZoiT+ibD93RtjEohWalFOjRDx6CVaqeizhEnKg=
@@ -514,8 +543,8 @@ google.golang.org/protobuf v1.24.0/go.mod h1:r/3tXBNzIEhYS9I1OUVjXDlt8tc493IdKGj
google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlbajtzgsN7c=
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.27.1 h1:SnqbnDw1V7RiZcXPx5MEeqPv2s79L9i7BJUlG/+RurQ=
google.golang.org/protobuf v1.27.1/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.28.0 h1:w43yiav+6bVFTBQFZX0r7ipe9JQ1QsbMgHwbBziscLw=
google.golang.org/protobuf v1.28.0/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
@@ -530,8 +559,9 @@ gopkg.in/yaml.v2 v2.2.5/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.3.0/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=
gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c h1:dUUwHk2QECo/6vqA44rthZ8ie2QXMNeKRTHCNY2nXvo=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b h1:h8qDotaEPuJATrMmW04NCwg7v22aHH28wwpauUhK9Oo=
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190106161140-3f1c8253044a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190418001031-e561f6794a2a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=

View File

@@ -5,21 +5,23 @@ import (
"github.com/slackhq/nebula/udp"
)
func HandleIncomingHandshake(f *Interface, addr *udp.Addr, packet []byte, h *header.H, hostinfo *HostInfo) {
func HandleIncomingHandshake(f *Interface, addr *udp.Addr, via interface{}, packet []byte, h *header.H, hostinfo *HostInfo) {
// First remote allow list check before we know the vpnIp
if !f.lightHouse.remoteAllowList.AllowUnknownVpnIp(addr.IP) {
f.l.WithField("udpAddr", addr).Debug("lighthouse.remote_allow_list denied incoming handshake")
return
if addr != nil {
if !f.lightHouse.GetRemoteAllowList().AllowUnknownVpnIp(addr.IP) {
f.l.WithField("udpAddr", addr).Debug("lighthouse.remote_allow_list denied incoming handshake")
return
}
}
switch h.Subtype {
case header.HandshakeIXPSK0:
switch h.MessageCounter {
case 1:
ixHandshakeStage1(f, addr, packet, h)
ixHandshakeStage1(f, addr, via, packet, h)
case 2:
newHostinfo, _ := f.handshakeManager.QueryIndex(h.RemoteIndex)
tearDown := ixHandshakeStage2(f, addr, newHostinfo, packet, h)
tearDown := ixHandshakeStage2(f, addr, via, newHostinfo, packet, h)
if tearDown && newHostinfo != nil {
f.handshakeManager.DeleteHostInfo(newHostinfo)
}

View File

@@ -5,7 +5,6 @@ import (
"time"
"github.com/flynn/noise"
"github.com/golang/protobuf/proto"
"github.com/slackhq/nebula/header"
"github.com/slackhq/nebula/iputil"
"github.com/slackhq/nebula/udp"
@@ -43,7 +42,7 @@ func ixHandshakeStage0(f *Interface, vpnIp iputil.VpnIp, hostinfo *HostInfo) {
hs := &NebulaHandshake{
Details: hsProto,
}
hsBytes, err = proto.Marshal(hs)
hsBytes, err = hs.Marshal()
if err != nil {
f.l.WithError(err).WithField("vpnIp", vpnIp).
@@ -70,7 +69,7 @@ func ixHandshakeStage0(f *Interface, vpnIp iputil.VpnIp, hostinfo *HostInfo) {
hostinfo.handshakeStart = time.Now()
}
func ixHandshakeStage1(f *Interface, addr *udp.Addr, packet []byte, h *header.H) {
func ixHandshakeStage1(f *Interface, addr *udp.Addr, via interface{}, packet []byte, h *header.H) {
ci := f.newConnectionState(f.l, false, noise.HandshakeIX, []byte{}, 0)
// Mark packet 1 as seen so it doesn't show up as missed
ci.window.Update(f.l, 1)
@@ -83,7 +82,7 @@ func ixHandshakeStage1(f *Interface, addr *udp.Addr, packet []byte, h *header.H)
}
hs := &NebulaHandshake{}
err = proto.Unmarshal(msg, hs)
err = hs.Unmarshal(msg)
/*
l.Debugln("GOT INDEX: ", hs.Details.InitiatorIndex)
*/
@@ -114,9 +113,11 @@ func ixHandshakeStage1(f *Interface, addr *udp.Addr, packet []byte, h *header.H)
return
}
if !f.lightHouse.remoteAllowList.Allow(vpnIp, addr.IP) {
f.l.WithField("vpnIp", vpnIp).WithField("udpAddr", addr).Debug("lighthouse.remote_allow_list denied incoming handshake")
return
if addr != nil {
if !f.lightHouse.GetRemoteAllowList().Allow(vpnIp, addr.IP) {
f.l.WithField("vpnIp", vpnIp).WithField("udpAddr", addr).Debug("lighthouse.remote_allow_list denied incoming handshake")
return
}
}
myIndex, err := generateIndex(f.l)
@@ -136,6 +137,11 @@ func ixHandshakeStage1(f *Interface, addr *udp.Addr, packet []byte, h *header.H)
vpnIp: vpnIp,
HandshakePacket: make(map[uint8][]byte, 0),
lastHandshakeTime: hs.Details.Time,
relayState: RelayState{
relays: map[iputil.VpnIp]struct{}{},
relayForByIp: map[iputil.VpnIp]*Relay{},
relayForByIdx: map[uint32]*Relay{},
},
}
hostinfo.Lock()
@@ -154,7 +160,7 @@ func ixHandshakeStage1(f *Interface, addr *udp.Addr, packet []byte, h *header.H)
// Update the time in case their clock is way off from ours
hs.Details.Time = uint64(time.Now().UnixNano())
hsBytes, err := proto.Marshal(hs)
hsBytes, err := hs.Marshal()
if err != nil {
f.l.WithError(err).WithField("vpnIp", hostinfo.vpnIp).WithField("udpAddr", addr).
WithField("certName", certName).
@@ -224,17 +230,31 @@ func ixHandshakeStage1(f *Interface, addr *udp.Addr, packet []byte, h *header.H)
msg = existing.HandshakePacket[2]
f.messageMetrics.Tx(header.Handshake, header.MessageSubType(msg[1]), 1)
err := f.outside.WriteTo(msg, addr)
if err != nil {
f.l.WithField("vpnIp", existing.vpnIp).WithField("udpAddr", addr).
WithField("handshake", m{"stage": 2, "style": "ix_psk0"}).WithField("cached", true).
WithError(err).Error("Failed to send handshake message")
if addr != nil {
err := f.outside.WriteTo(msg, addr)
if err != nil {
f.l.WithField("vpnIp", existing.vpnIp).WithField("udpAddr", addr).
WithField("handshake", m{"stage": 2, "style": "ix_psk0"}).WithField("cached", true).
WithError(err).Error("Failed to send handshake message")
} else {
f.l.WithField("vpnIp", existing.vpnIp).WithField("udpAddr", addr).
WithField("handshake", m{"stage": 2, "style": "ix_psk0"}).WithField("cached", true).
Info("Handshake message sent")
}
return
} else {
f.l.WithField("vpnIp", existing.vpnIp).WithField("udpAddr", addr).
via2 := via.(*ViaSender)
if via2 == nil {
f.l.Error("Handshake send failed: both addr and via are nil.")
return
}
hostinfo.relayState.InsertRelayTo(via2.relayHI.vpnIp)
f.SendVia(via2.relayHI, via2.relay, msg, make([]byte, 12), make([]byte, mtu), false)
f.l.WithField("vpnIp", existing.vpnIp).WithField("relay", via2.relayHI.vpnIp).
WithField("handshake", m{"stage": 2, "style": "ix_psk0"}).WithField("cached", true).
Info("Handshake message sent")
return
}
return
case ErrExistingHostInfo:
// This means there was an existing tunnel and this handshake was older than the one we are currently based on
f.l.WithField("vpnIp", vpnIp).WithField("udpAddr", addr).
@@ -287,17 +307,35 @@ func ixHandshakeStage1(f *Interface, addr *udp.Addr, packet []byte, h *header.H)
// Do the send
f.messageMetrics.Tx(header.Handshake, header.MessageSubType(msg[1]), 1)
err = f.outside.WriteTo(msg, addr)
if err != nil {
f.l.WithField("vpnIp", vpnIp).WithField("udpAddr", addr).
WithField("certName", certName).
WithField("fingerprint", fingerprint).
WithField("issuer", issuer).
WithField("initiatorIndex", hs.Details.InitiatorIndex).WithField("responderIndex", hs.Details.ResponderIndex).
WithField("remoteIndex", h.RemoteIndex).WithField("handshake", m{"stage": 2, "style": "ix_psk0"}).
WithError(err).Error("Failed to send handshake")
if addr != nil {
err = f.outside.WriteTo(msg, addr)
if err != nil {
f.l.WithField("vpnIp", vpnIp).WithField("udpAddr", addr).
WithField("certName", certName).
WithField("fingerprint", fingerprint).
WithField("issuer", issuer).
WithField("initiatorIndex", hs.Details.InitiatorIndex).WithField("responderIndex", hs.Details.ResponderIndex).
WithField("remoteIndex", h.RemoteIndex).WithField("handshake", m{"stage": 2, "style": "ix_psk0"}).
WithError(err).Error("Failed to send handshake")
} else {
f.l.WithField("vpnIp", vpnIp).WithField("udpAddr", addr).
WithField("certName", certName).
WithField("fingerprint", fingerprint).
WithField("issuer", issuer).
WithField("initiatorIndex", hs.Details.InitiatorIndex).WithField("responderIndex", hs.Details.ResponderIndex).
WithField("remoteIndex", h.RemoteIndex).WithField("handshake", m{"stage": 2, "style": "ix_psk0"}).
WithField("sentCachedPackets", len(hostinfo.packetStore)).
Info("Handshake message sent")
}
} else {
f.l.WithField("vpnIp", vpnIp).WithField("udpAddr", addr).
via2 := via.(*ViaSender)
if via2 == nil {
f.l.Error("Handshake send failed: both addr and via are nil.")
return
}
hostinfo.relayState.InsertRelayTo(via2.relayHI.vpnIp)
f.SendVia(via2.relayHI, via2.relay, msg, make([]byte, 12), make([]byte, mtu), false)
f.l.WithField("vpnIp", vpnIp).WithField("relay", via2.relayHI.vpnIp).
WithField("certName", certName).
WithField("fingerprint", fingerprint).
WithField("issuer", issuer).
@@ -312,7 +350,7 @@ func ixHandshakeStage1(f *Interface, addr *udp.Addr, packet []byte, h *header.H)
return
}
func ixHandshakeStage2(f *Interface, addr *udp.Addr, hostinfo *HostInfo, packet []byte, h *header.H) bool {
func ixHandshakeStage2(f *Interface, addr *udp.Addr, via interface{}, hostinfo *HostInfo, packet []byte, h *header.H) bool {
if hostinfo == nil {
// Nothing here to tear down, got a bogus stage 2 packet
return true
@@ -321,9 +359,11 @@ func ixHandshakeStage2(f *Interface, addr *udp.Addr, hostinfo *HostInfo, packet
hostinfo.Lock()
defer hostinfo.Unlock()
if !f.lightHouse.remoteAllowList.Allow(hostinfo.vpnIp, addr.IP) {
f.l.WithField("vpnIp", hostinfo.vpnIp).WithField("udpAddr", addr).Debug("lighthouse.remote_allow_list denied incoming handshake")
return false
if addr != nil {
if !f.lightHouse.GetRemoteAllowList().Allow(hostinfo.vpnIp, addr.IP) {
f.l.WithField("vpnIp", hostinfo.vpnIp).WithField("udpAddr", addr).Debug("lighthouse.remote_allow_list denied incoming handshake")
return false
}
}
ci := hostinfo.ConnectionState
@@ -364,7 +404,7 @@ func ixHandshakeStage2(f *Interface, addr *udp.Addr, hostinfo *HostInfo, packet
}
hs := &NebulaHandshake{}
err = proto.Unmarshal(msg, hs)
err = hs.Unmarshal(msg)
if err != nil || hs.Details == nil {
f.l.WithError(err).WithField("vpnIp", hostinfo.vpnIp).WithField("udpAddr", addr).
WithField("handshake", m{"stage": 2, "style": "ix_psk0"}).Error("Failed unmarshal handshake message")
@@ -451,7 +491,12 @@ func ixHandshakeStage2(f *Interface, addr *udp.Addr, hostinfo *HostInfo, packet
ci.eKey = NewNebulaCipherState(eKey)
// Make sure the current udpAddr being used is set for responding
hostinfo.SetRemote(addr)
if addr != nil {
hostinfo.SetRemote(addr)
} else {
via2 := via.(*ViaSender)
hostinfo.relayState.InsertRelayTo(via2.relayHI.vpnIp)
}
// Build up the radix for the firewall if we have subnets in the cert
hostinfo.CreateRemoteCIDR(remoteCert)


@@ -20,6 +20,7 @@ const (
DefaultHandshakeTryInterval = time.Millisecond * 100
DefaultHandshakeRetries = 10
DefaultHandshakeTriggerBuffer = 64
DefaultUseRelays = true
)
var (
@@ -27,6 +28,7 @@ var (
tryInterval: DefaultHandshakeTryInterval,
retries: DefaultHandshakeRetries,
triggerBuffer: DefaultHandshakeTriggerBuffer,
useRelays: DefaultUseRelays,
}
)
@@ -34,6 +36,7 @@ type HandshakeConfig struct {
tryInterval time.Duration
retries int
triggerBuffer int
useRelays bool
messageMetrics *MessageMetrics
}
@@ -79,7 +82,6 @@ func (c *HandshakeManager) Run(ctx context.Context, f udp.EncWriter) {
case <-ctx.Done():
return
case vpnIP := <-c.trigger:
c.l.WithField("vpnIp", vpnIP).Debug("HandshakeManager: triggered")
c.handleOutbound(vpnIP, f, true)
case now := <-clockSource.C:
c.NextOutboundHandshakeTimerTick(now, f)
@@ -145,6 +147,8 @@ func (c *HandshakeManager) handleOutbound(vpnIp iputil.VpnIp, f udp.EncWriter, l
// Get a remotes object if we don't already have one.
// This is mainly to protect us as this should never be the case
// NB ^ This comment doesn't jive. It's how the thing gets initialized.
// It's the common path. Should it update every time, in case a future LH query/queries give us more info?
if hostinfo.remotes == nil {
hostinfo.remotes = c.lightHouse.QueryCache(vpnIp)
}
@@ -181,6 +185,77 @@ func (c *HandshakeManager) handleOutbound(vpnIp iputil.VpnIp, f udp.EncWriter, l
Info("Handshake message sent")
}
if c.config.useRelays && len(hostinfo.remotes.relays) > 0 {
hostinfo.logger(c.l).WithField("relayIps", hostinfo.remotes.relays).Info("Attempt to relay through hosts")
// Send a RelayRequest to all known Relay IP's
for _, relay := range hostinfo.remotes.relays {
// Don't relay to myself, and don't relay through the host I'm trying to connect to
if *relay == vpnIp || *relay == c.lightHouse.myVpnIp {
continue
}
relayHostInfo, err := c.mainHostMap.QueryVpnIp(*relay)
if err != nil || relayHostInfo.remote == nil {
hostinfo.logger(c.l).WithError(err).WithField("relay", relay.String()).Info("Establish tunnel to relay target.")
f.Handshake(*relay)
continue
}
// Check the relay HostInfo to see if we already established a relay through it
if existingRelay, ok := relayHostInfo.relayState.QueryRelayForByIp(vpnIp); ok {
switch existingRelay.State {
case Established:
hostinfo.logger(c.l).WithField("relay", relay.String()).Info("Send handshake via relay")
f.SendVia(relayHostInfo, existingRelay, hostinfo.HandshakePacket[0], make([]byte, 12), make([]byte, mtu), false)
case Requested:
hostinfo.logger(c.l).WithField("relay", relay.String()).Info("Re-send CreateRelay request")
// Re-send the CreateRelay request, in case the previous one was lost.
m := NebulaControl{
Type: NebulaControl_CreateRelayRequest,
InitiatorRelayIndex: existingRelay.LocalIndex,
RelayFromIp: uint32(c.lightHouse.myVpnIp),
RelayToIp: uint32(vpnIp),
}
msg, err := m.Marshal()
if err != nil {
hostinfo.logger(c.l).
WithError(err).
Error("Failed to marshal Control message to create relay")
} else {
f.SendMessageToVpnIp(header.Control, 0, *relay, msg, make([]byte, 12), make([]byte, mtu))
}
default:
hostinfo.logger(c.l).
WithField("vpnIp", vpnIp).
WithField("state", existingRelay.State).
WithField("relayVpnIp", relayHostInfo.vpnIp).
Errorf("Relay unexpected state")
}
} else {
// No relays exist or requested yet.
if relayHostInfo.remote != nil {
idx, err := AddRelay(c.l, relayHostInfo, c.mainHostMap, vpnIp, nil, TerminalType, Requested)
if err != nil {
hostinfo.logger(c.l).WithField("relay", relay.String()).WithError(err).Info("Failed to add relay to hostmap")
}
m := NebulaControl{
Type: NebulaControl_CreateRelayRequest,
InitiatorRelayIndex: idx,
RelayFromIp: uint32(c.lightHouse.myVpnIp),
RelayToIp: uint32(vpnIp),
}
msg, err := m.Marshal()
if err != nil {
hostinfo.logger(c.l).
WithError(err).
Error("Failed to marshal Control message to create relay")
} else {
f.SendMessageToVpnIp(header.Control, 0, *relay, msg, make([]byte, 12), make([]byte, mtu))
}
}
}
}
}
// Increment the counter to increase our delay, linear backoff
hostinfo.HandshakeCounter++
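The relay bootstrap above follows a small per-candidate state machine: make sure a tunnel to the relay itself exists, then either reuse an Established relay, nudge a Requested one, or register a new one. Condensed as a sketch (the helper functions are illustrative, not part of the change set; it assumes the HandshakeManager fields and udp.EncWriter methods used above):

func requestRelay(c *HandshakeManager, f udp.EncWriter, hostinfo *HostInfo, relayIp, vpnIp iputil.VpnIp) {
	relayHostInfo, err := c.mainHostMap.QueryVpnIp(relayIp)
	if err != nil || relayHostInfo.remote == nil {
		// No tunnel to the relay yet; start one and try again on the next tick.
		f.Handshake(relayIp)
		return
	}
	if r, ok := relayHostInfo.relayState.QueryRelayForByIp(vpnIp); ok {
		if r.State == Established {
			// Forward our cached stage-0 handshake through the relay.
			f.SendVia(relayHostInfo, r, hostinfo.HandshakePacket[0], make([]byte, 12), make([]byte, mtu), false)
		} else {
			// Still Requested: re-send the CreateRelayRequest in case the first one was lost.
			sendCreateRelayRequest(c, f, relayIp, vpnIp, r.LocalIndex)
		}
		return
	}
	idx, err := AddRelay(c.l, relayHostInfo, c.mainHostMap, vpnIp, nil, TerminalType, Requested)
	if err != nil {
		return
	}
	sendCreateRelayRequest(c, f, relayIp, vpnIp, idx)
}

func sendCreateRelayRequest(c *HandshakeManager, f udp.EncWriter, relayIp, vpnIp iputil.VpnIp, idx uint32) {
	m := NebulaControl{
		Type:                NebulaControl_CreateRelayRequest,
		InitiatorRelayIndex: idx,
		RelayFromIp:         uint32(c.lightHouse.myVpnIp),
		RelayToIp:           uint32(vpnIp),
	}
	msg, err := m.Marshal()
	if err != nil {
		return
	}
	f.SendMessageToVpnIp(header.Control, 0, relayIp, msg, make([]byte, 12), make([]byte, mtu))
}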
@@ -284,6 +359,9 @@ func (c *HandshakeManager) CheckAndComplete(hostinfo *HostInfo, handshakePacket
delete(c.mainHostMap.Hosts, existingHostInfo.vpnIp)
delete(c.mainHostMap.Indexes, existingHostInfo.localIndexId)
delete(c.mainHostMap.RemoteIndexes, existingHostInfo.remoteIndexId)
for _, relayIdx := range existingHostInfo.relayState.CopyRelayForIdxs() {
delete(c.mainHostMap.Relays, relayIdx)
}
}
c.mainHostMap.addHostInfo(hostinfo, f)
@@ -305,6 +383,9 @@ func (c *HandshakeManager) Complete(hostinfo *HostInfo, f *Interface) {
delete(c.mainHostMap.Hosts, existingHostInfo.vpnIp)
delete(c.mainHostMap.Indexes, existingHostInfo.localIndexId)
delete(c.mainHostMap.RemoteIndexes, existingHostInfo.remoteIndexId)
for _, relayIdx := range existingHostInfo.relayState.CopyRelayForIdxs() {
delete(c.mainHostMap.Relays, relayIdx)
}
}
existingRemoteIndex, found := c.mainHostMap.RemoteIndexes[hostinfo.remoteIndexId]


@@ -7,13 +7,13 @@ import (
"github.com/slackhq/nebula/header"
"github.com/slackhq/nebula/iputil"
"github.com/slackhq/nebula/test"
"github.com/slackhq/nebula/udp"
"github.com/slackhq/nebula/util"
"github.com/stretchr/testify/assert"
)
func Test_NewHandshakeManagerVpnIp(t *testing.T) {
l := util.NewTestLogger()
l := test.NewLogger()
_, tuncidr, _ := net.ParseCIDR("172.1.1.1/24")
_, vpncidr, _ := net.ParseCIDR("172.1.1.1/24")
_, localrange, _ := net.ParseCIDR("10.1.1.1/24")
@@ -21,8 +21,13 @@ func Test_NewHandshakeManagerVpnIp(t *testing.T) {
preferredRanges := []*net.IPNet{localrange}
mw := &mockEncWriter{}
mainHM := NewHostMap(l, "test", vpncidr, preferredRanges)
lh := &LightHouse{
atomicStaticList: make(map[iputil.VpnIp]struct{}),
atomicLighthouses: make(map[iputil.VpnIp]struct{}),
addrMap: make(map[iputil.VpnIp]*RemoteList),
}
blah := NewHandshakeManager(l, tuncidr, preferredRanges, mainHM, &LightHouse{}, &udp.Conn{}, defaultHandshakeConfig)
blah := NewHandshakeManager(l, tuncidr, preferredRanges, mainHM, lh, &udp.Conn{}, defaultHandshakeConfig)
now := time.Now()
blah.NextOutboundHandshakeTimerTick(now, mw)
@@ -66,7 +71,7 @@ func Test_NewHandshakeManagerVpnIp(t *testing.T) {
}
func Test_NewHandshakeManagerTrigger(t *testing.T) {
l := util.NewTestLogger()
l := test.NewLogger()
_, tuncidr, _ := net.ParseCIDR("172.1.1.1/24")
_, vpncidr, _ := net.ParseCIDR("172.1.1.1/24")
_, localrange, _ := net.ParseCIDR("10.1.1.1/24")
@@ -74,7 +79,12 @@ func Test_NewHandshakeManagerTrigger(t *testing.T) {
preferredRanges := []*net.IPNet{localrange}
mw := &mockEncWriter{}
mainHM := NewHostMap(l, "test", vpncidr, preferredRanges)
lh := &LightHouse{addrMap: make(map[iputil.VpnIp]*RemoteList), l: l}
lh := &LightHouse{
addrMap: make(map[iputil.VpnIp]*RemoteList),
l: l,
atomicStaticList: make(map[iputil.VpnIp]struct{}),
atomicLighthouses: make(map[iputil.VpnIp]struct{}),
}
blah := NewHandshakeManager(l, tuncidr, preferredRanges, mainHM, lh, &udp.Conn{}, defaultHandshakeConfig)
@@ -122,3 +132,9 @@ type mockEncWriter struct {
func (mw *mockEncWriter) SendMessageToVpnIp(t header.MessageType, st header.MessageSubType, vpnIp iputil.VpnIp, p, nb, out []byte) {
return
}
func (mw *mockEncWriter) SendVia(via interface{}, relay interface{}, ad, nb, out []byte, nocopy bool) {
return
}
func (mw *mockEncWriter) Handshake(vpnIP iputil.VpnIp) {}


@@ -36,6 +36,7 @@ const (
LightHouse MessageType = 3
Test MessageType = 4
CloseTunnel MessageType = 5
Control MessageType = 6
)
var typeMap = map[MessageType]string{
@@ -45,8 +46,14 @@ var typeMap = map[MessageType]string{
LightHouse: "lightHouse",
Test: "test",
CloseTunnel: "closeTunnel",
Control: "control",
}
const (
MessageNone MessageSubType = 0
MessageRelay MessageSubType = 1
)
const (
TestRequest MessageSubType = 0
TestReply MessageSubType = 1
@@ -67,7 +74,10 @@ var subTypeTestMap = map[MessageSubType]string{
var subTypeNoneMap = map[MessageSubType]string{0: "none"}
var subTypeMap = map[MessageType]*map[MessageSubType]string{
Message: &subTypeNoneMap,
Message: {
MessageNone: "none",
MessageRelay: "relay",
},
RecvError: &subTypeNoneMap,
LightHouse: &subTypeNoneMap,
Test: &subTypeTestMap,
@@ -75,6 +85,7 @@ var subTypeMap = map[MessageType]*map[MessageSubType]string{
Handshake: {
HandshakeIXPSK0: "ix_psk0",
},
Control: &subTypeNoneMap,
}
type H struct {


@@ -82,10 +82,14 @@ func TestTypeMap(t *testing.T) {
LightHouse: "lightHouse",
Test: "test",
CloseTunnel: "closeTunnel",
Control: "control",
}, typeMap)
assert.Equal(t, map[MessageType]*map[MessageSubType]string{
Message: &subTypeNoneMap,
Message: {
MessageNone: "none",
MessageRelay: "relay",
},
RecvError: &subTypeNoneMap,
LightHouse: &subTypeNoneMap,
Test: &subTypeTestMap,
@@ -93,6 +97,7 @@ func TestTypeMap(t *testing.T) {
Handshake: {
HandshakeIXPSK0: "ix_psk0",
},
Control: &subTypeNoneMap,
}, subTypeMap)
}


@@ -27,19 +27,127 @@ const MaxRemotes = 10
// This helps prevent flapping due to packets already in flight
const RoamingSuppressSeconds = 2
const (
Requested = iota
Established
)
const (
Unknowntype = iota
ForwardingType
TerminalType
)
type Relay struct {
Type int
State int
LocalIndex uint32
RemoteIndex uint32
PeerIp iputil.VpnIp
}
type HostMap struct {
sync.RWMutex //Because we concurrently read and write to our maps
name string
Indexes map[uint32]*HostInfo
Relays map[uint32]*HostInfo // Maps a Relay IDX to a Relay HostInfo object
RemoteIndexes map[uint32]*HostInfo
Hosts map[iputil.VpnIp]*HostInfo
preferredRanges []*net.IPNet
vpnCIDR *net.IPNet
unsafeRoutes *cidr.Tree4
metricsEnabled bool
l *logrus.Logger
}
type RelayState struct {
sync.RWMutex
relays map[iputil.VpnIp]struct{} // Set of VpnIp's of Hosts to use as relays to access this peer
relayForByIp map[iputil.VpnIp]*Relay // Maps VpnIps of peers for which this HostInfo is a relay to some Relay info
relayForByIdx map[uint32]*Relay // Maps a local index to some Relay info
}
func (rs *RelayState) DeleteRelay(ip iputil.VpnIp) {
rs.Lock()
defer rs.Unlock()
delete(rs.relays, ip)
}
func (rs *RelayState) GetRelayForByIp(ip iputil.VpnIp) (*Relay, bool) {
rs.RLock()
defer rs.RUnlock()
r, ok := rs.relayForByIp[ip]
return r, ok
}
func (rs *RelayState) InsertRelayTo(ip iputil.VpnIp) {
rs.Lock()
defer rs.Unlock()
rs.relays[ip] = struct{}{}
}
func (rs *RelayState) CopyRelayIps() []iputil.VpnIp {
rs.RLock()
defer rs.RUnlock()
ret := make([]iputil.VpnIp, 0, len(rs.relays))
for ip := range rs.relays {
ret = append(ret, ip)
}
return ret
}
func (rs *RelayState) CopyRelayForIps() []iputil.VpnIp {
rs.RLock()
defer rs.RUnlock()
currentRelays := make([]iputil.VpnIp, 0, len(rs.relayForByIp))
for relayIp := range rs.relayForByIp {
currentRelays = append(currentRelays, relayIp)
}
return currentRelays
}
func (rs *RelayState) CopyRelayForIdxs() []uint32 {
rs.RLock()
defer rs.RUnlock()
ret := make([]uint32, 0, len(rs.relayForByIdx))
for i := range rs.relayForByIdx {
ret = append(ret, i)
}
return ret
}
func (rs *RelayState) RemoveRelay(localIdx uint32) (iputil.VpnIp, bool) {
rs.Lock()
defer rs.Unlock()
relay, ok := rs.relayForByIdx[localIdx]
if !ok {
return iputil.VpnIp(0), false
}
delete(rs.relayForByIdx, localIdx)
delete(rs.relayForByIp, relay.PeerIp)
return relay.PeerIp, true
}
func (rs *RelayState) QueryRelayForByIp(vpnIp iputil.VpnIp) (*Relay, bool) {
rs.RLock()
defer rs.RUnlock()
r, ok := rs.relayForByIp[vpnIp]
return r, ok
}
func (rs *RelayState) QueryRelayForByIdx(idx uint32) (*Relay, bool) {
rs.RLock()
defer rs.RUnlock()
r, ok := rs.relayForByIdx[idx]
return r, ok
}
func (rs *RelayState) InsertRelay(ip iputil.VpnIp, idx uint32, r *Relay) {
rs.Lock()
defer rs.Unlock()
rs.relayForByIp[ip] = r
rs.relayForByIdx[idx] = r
}
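Taken together, RelayState gives each HostInfo two views of the same bookkeeping: the set of relays that can reach this peer, and the Relay entries this host keeps on behalf of others, addressable both by peer VpnIp and by local index. A hypothetical usage sequence of the accessors above (illustrative only, package-internal):

func relayStateExample(hi *HostInfo, peer iputil.VpnIp, localIdx uint32) {
	r := &Relay{
		Type:       ForwardingType,
		State:      Requested,
		LocalIndex: localIdx,
		PeerIp:     peer,
	}
	// Register the entry under both keys.
	hi.relayState.InsertRelay(peer, localIdx, r)

	// The same *Relay is reachable by peer IP or by local index.
	if byIp, ok := hi.relayState.QueryRelayForByIp(peer); ok {
		_ = byIp
	}
	if byIdx, ok := hi.relayState.QueryRelayForByIdx(localIdx); ok {
		_ = byIdx
	}

	// RemoveRelay deletes both index entries and reports which peer was affected.
	if peerIp, ok := hi.relayState.RemoveRelay(localIdx); ok {
		_ = peerIp
	}
}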
type HostInfo struct {
sync.RWMutex
@@ -58,6 +166,7 @@ type HostInfo struct {
vpnIp iputil.VpnIp
recvError int
remoteCidr *cidr.Tree4
relayState RelayState
// lastRebindCount is the other side of Interface.rebindCount, if these values don't match then we need to ask LH
// for a punch from the remote end of this tunnel. The goal being to prime their conntrack for our traffic just like
@@ -73,6 +182,12 @@ type HostInfo struct {
lastRoamRemote *udp.Addr
}
type ViaSender struct {
relayHI *HostInfo // relayHI is the host info object of the relay
remoteIdx uint32 // remoteIdx is the index included in the header of the received packet
relay *Relay // relay contains the rest of the relay information, including the PeerIP of the host trying to communicate with us.
}
type cachedPacket struct {
messageType header.MessageType
messageSubType header.MessageSubType
@@ -91,14 +206,15 @@ func NewHostMap(l *logrus.Logger, name string, vpnCIDR *net.IPNet, preferredRang
h := map[iputil.VpnIp]*HostInfo{}
i := map[uint32]*HostInfo{}
r := map[uint32]*HostInfo{}
relays := map[uint32]*HostInfo{}
m := HostMap{
name: name,
Indexes: i,
Relays: relays,
RemoteIndexes: r,
Hosts: h,
preferredRanges: preferredRanges,
vpnCIDR: vpnCIDR,
unsafeRoutes: cidr.NewTree4(),
l: l,
}
return &m
@@ -110,11 +226,40 @@ func (hm *HostMap) EmitStats(name string) {
hostLen := len(hm.Hosts)
indexLen := len(hm.Indexes)
remoteIndexLen := len(hm.RemoteIndexes)
relaysLen := len(hm.Relays)
hm.RUnlock()
metrics.GetOrRegisterGauge("hostmap."+name+".hosts", nil).Update(int64(hostLen))
metrics.GetOrRegisterGauge("hostmap."+name+".indexes", nil).Update(int64(indexLen))
metrics.GetOrRegisterGauge("hostmap."+name+".remoteIndexes", nil).Update(int64(remoteIndexLen))
metrics.GetOrRegisterGauge("hostmap."+name+".relayIndexes", nil).Update(int64(relaysLen))
}
func (hm *HostMap) RemoveRelay(localIdx uint32) {
hm.Lock()
hiRelay, ok := hm.Relays[localIdx]
if !ok {
hm.Unlock()
return
}
delete(hm.Relays, localIdx)
hm.Unlock()
ip, ok := hiRelay.relayState.RemoveRelay(localIdx)
if !ok {
return
}
hiPeer, err := hm.QueryVpnIp(ip)
if err != nil {
return
}
var otherPeerIdx uint32
hiPeer.relayState.DeleteRelay(hiRelay.vpnIp)
relay, ok := hiPeer.relayState.GetRelayForByIp(hiRelay.vpnIp)
if ok {
otherPeerIdx = relay.LocalIndex
}
// I am a relaying host. I need to remove the other relay, too.
hm.RemoveRelay(otherPeerIdx)
}
func (hm *HostMap) GetIndexByVpnIp(vpnIp iputil.VpnIp) (uint32, error) {
@@ -142,6 +287,11 @@ func (hm *HostMap) AddVpnIp(vpnIp iputil.VpnIp, init func(hostinfo *HostInfo)) (
promoteCounter: 0,
vpnIp: vpnIp,
HandshakePacket: make(map[uint8][]byte, 0),
relayState: RelayState{
relays: map[iputil.VpnIp]struct{}{},
relayForByIp: map[iputil.VpnIp]*Relay{},
relayForByIdx: map[uint32]*Relay{},
},
}
if init != nil {
init(h)
@@ -247,9 +397,37 @@ func (hm *HostMap) DeleteReverseIndex(index uint32) {
}
func (hm *HostMap) DeleteHostInfo(hostinfo *HostInfo) {
// Delete the host itself, ensuring it's not modified anymore
hm.Lock()
hm.unlockedDeleteHostInfo(hostinfo)
hm.Unlock()
// And tear down all the relays going through this host
for _, localIdx := range hostinfo.relayState.CopyRelayForIdxs() {
hm.RemoveRelay(localIdx)
}
// And tear down the relays this deleted hostInfo was using to be reached
teardownRelayIdx := []uint32{}
for _, relayIp := range hostinfo.relayState.CopyRelayIps() {
relayHostInfo, err := hm.QueryVpnIp(relayIp)
if err != nil {
hm.l.WithError(err).WithField("relay", relayIp).Info("Missing relay host in hostmap")
} else {
if r, ok := relayHostInfo.relayState.QueryRelayForByIp(hostinfo.vpnIp); ok {
teardownRelayIdx = append(teardownRelayIdx, r.LocalIndex)
}
}
}
for _, localIdx := range teardownRelayIdx {
hm.RemoveRelay(localIdx)
}
}
func (hm *HostMap) DeleteRelayIdx(localIdx uint32) {
hm.Lock()
defer hm.Unlock()
hm.unlockedDeleteHostInfo(hostinfo)
delete(hm.RemoteIndexes, localIdx)
}
func (hm *HostMap) unlockedDeleteHostInfo(hostinfo *HostInfo) {
@@ -284,7 +462,7 @@ func (hm *HostMap) unlockedDeleteHostInfo(hostinfo *HostInfo) {
}
func (hm *HostMap) QueryIndex(index uint32) (*HostInfo, error) {
//TODO: we probably just want ot return bool instead of error, or at least a static error
//TODO: we probably just want to return bool instead of error, or at least a static error
hm.RLock()
if h, ok := hm.Indexes[index]; ok {
hm.RUnlock()
@@ -294,6 +472,17 @@ func (hm *HostMap) QueryIndex(index uint32) (*HostInfo, error) {
return nil, errors.New("unable to find index")
}
}
func (hm *HostMap) QueryRelayIndex(index uint32) (*HostInfo, error) {
//TODO: we probably just want to return bool instead of error, or at least a static error
hm.RLock()
if h, ok := hm.Relays[index]; ok {
hm.RUnlock()
return h, nil
} else {
hm.RUnlock()
return nil, errors.New("unable to find index")
}
}
func (hm *HostMap) QueryReverseIndex(index uint32) (*HostInfo, error) {
hm.RLock()
@@ -332,15 +521,6 @@ func (hm *HostMap) queryVpnIp(vpnIp iputil.VpnIp, promoteIfce *Interface) (*Host
return nil, errors.New("unable to find host")
}
func (hm *HostMap) queryUnsafeRoute(ip iputil.VpnIp) iputil.VpnIp {
r := hm.unsafeRoutes.MostSpecificContains(ip)
if r != nil {
return r.(iputil.VpnIp)
} else {
return 0
}
}
// We already have the hm Lock when this is called, so make sure to not call
// any other methods that might try to grab it again
func (hm *HostMap) addHostInfo(hostinfo *HostInfo, f *Interface) {
@@ -408,17 +588,6 @@ func (hm *HostMap) Punchy(ctx context.Context, conn *udp.Conn) {
}
}
func (hm *HostMap) addUnsafeRoutes(routes *[]route) {
for _, r := range *routes {
hm.l.WithField("route", r.route).WithField("via", r.via).Warn("Adding UNSAFE Route")
hm.unsafeRoutes.AddCIDR(r.route, iputil.Ip2VpnIp(*r.via))
}
}
func (i *HostInfo) BindConnectionState(cs *ConnectionState) {
i.ConnectionState = cs
}
// TryPromoteBest handles re-querying lighthouses and probing for better paths
// NOTE: It is an error to call this if you are a lighthouse since they should not roam clients!
func (i *HostInfo) TryPromoteBest(preferredRanges []*net.IPNet, ifce *Interface) {
@@ -426,24 +595,27 @@ func (i *HostInfo) TryPromoteBest(preferredRanges []*net.IPNet, ifce *Interface)
if c%PromoteEvery == 0 {
// The lock here is currently protecting i.remote access
i.RLock()
defer i.RUnlock()
remote := i.remote
i.RUnlock()
// return early if we are already on a preferred remote
rIP := i.remote.IP
for _, l := range preferredRanges {
if l.Contains(rIP) {
return
if remote != nil {
rIP := remote.IP
for _, l := range preferredRanges {
if l.Contains(rIP) {
return
}
}
}
i.remotes.ForEach(preferredRanges, func(addr *udp.Addr, preferred bool) {
if addr == nil || !preferred {
if remote != nil && (addr == nil || !preferred) {
return
}
// Try to send a test packet to that host, this should
// cause it to detect a roaming event and switch remotes
ifce.send(header.Test, header.TestRequest, i.ConnectionState, i, addr, []byte(""), make([]byte, 12, 12), make([]byte, mtu))
ifce.sendTo(header.Test, header.TestRequest, i.ConnectionState, i, addr, []byte(""), make([]byte, 12, 12), make([]byte, mtu))
})
}
@@ -526,6 +698,10 @@ func (i *HostInfo) SetRemote(remote *udp.Addr) {
// SetRemoteIfPreferred returns true if the remote was changed. The lastRoam
// time on the HostInfo will also be updated.
func (i *HostInfo) SetRemoteIfPreferred(hm *HostMap, newRemote *udp.Addr) bool {
if newRemote == nil {
// relays have nil udp Addrs
return false
}
currentRemote := i.remote
if currentRemote == nil {
i.SetRemote(newRemote)
@@ -559,10 +735,6 @@ func (i *HostInfo) SetRemoteIfPreferred(hm *HostMap, newRemote *udp.Addr) bool {
return false
}
func (i *HostInfo) ClearConnectionState() {
i.ConnectionState = nil
}
func (i *HostInfo) RecvErrorExceeded() bool {
if i.recvError < 3 {
i.recvError += 1

inside.go

@@ -14,7 +14,9 @@ import (
func (f *Interface) consumeInsidePacket(packet []byte, fwPacket *firewall.Packet, nb, out []byte, q int, localCache firewall.ConntrackCache) {
err := newPacket(packet, false, fwPacket)
if err != nil {
f.l.WithField("packet", packet).Debugf("Error while validating outbound packet: %s", err)
if f.l.Level >= logrus.DebugLevel {
f.l.WithField("packet", packet).Debugf("Error while validating outbound packet: %s", err)
}
return
}
@@ -23,8 +25,18 @@ func (f *Interface) consumeInsidePacket(packet []byte, fwPacket *firewall.Packet
return
}
// Ignore packets from self to self
if fwPacket.RemoteIP == f.myVpnIp {
// Immediately forward packets from self to self.
// This should only happen on Darwin-based hosts, which routes packets from
// the Nebula IP to the Nebula IP through the Nebula TUN device.
if immediatelyForwardToSelf {
_, err := f.readers[q].Write(packet)
if err != nil {
f.l.WithError(err).Error("Failed to forward to tun")
}
}
// Otherwise, drop. On linux, we should never see these packets - Linux
// routes packets from the nebula IP to the nebula IP through the loopback device.
return
}
@@ -58,7 +70,7 @@ func (f *Interface) consumeInsidePacket(packet []byte, fwPacket *firewall.Packet
dropReason := f.firewall.Drop(packet, *fwPacket, false, hostinfo, f.caPool, localCache)
if dropReason == nil {
f.sendNoMetrics(header.Message, 0, ci, hostinfo, hostinfo.remote, packet, nb, out, q)
f.sendNoMetrics(header.Message, 0, ci, hostinfo, nil, packet, nb, out, q)
} else if f.l.Level >= logrus.DebugLevel {
hostinfo.logger(f.l).
@@ -68,11 +80,14 @@ func (f *Interface) consumeInsidePacket(packet []byte, fwPacket *firewall.Packet
}
}
func (f *Interface) Handshake(vpnIp iputil.VpnIp) {
f.getOrHandshake(vpnIp)
}
// getOrHandshake returns nil if the vpnIp is not routable
func (f *Interface) getOrHandshake(vpnIp iputil.VpnIp) *HostInfo {
//TODO: we can find contains without converting back to bytes
if f.hostMap.vpnCIDR.Contains(vpnIp.ToIP()) == false {
vpnIp = f.hostMap.queryUnsafeRoute(vpnIp)
if !ipMaskContains(f.lightHouse.myVpnIp, f.lightHouse.myVpnZeros, vpnIp) {
vpnIp = f.inside.RouteFor(vpnIp)
if vpnIp == 0 {
return nil
}
@@ -110,7 +125,7 @@ func (f *Interface) getOrHandshake(vpnIp iputil.VpnIp) *HostInfo {
// If this is a static host, we don't need to wait for the HostQueryReply
// We can trigger the handshake right now
if _, ok := f.lightHouse.staticList[vpnIp]; ok {
if _, ok := f.lightHouse.GetStaticHostList()[vpnIp]; ok {
select {
case f.handshakeManager.trigger <- vpnIp:
default:
@@ -146,7 +161,7 @@ func (f *Interface) sendMessageNow(t header.MessageType, st header.MessageSubTyp
return
}
f.sendNoMetrics(header.Message, st, hostInfo.ConnectionState, hostInfo, hostInfo.remote, p, nb, out, 0)
f.sendNoMetrics(header.Message, st, hostInfo.ConnectionState, hostInfo, nil, p, nb, out, 0)
}
// SendMessageToVpnIp handles real ip:port lookup and sends to the current best known address for vpnIp
@@ -177,21 +192,93 @@ func (f *Interface) SendMessageToVpnIp(t header.MessageType, st header.MessageSu
}
func (f *Interface) sendMessageToVpnIp(t header.MessageType, st header.MessageSubType, hostInfo *HostInfo, p, nb, out []byte) {
f.send(t, st, hostInfo.ConnectionState, hostInfo, hostInfo.remote, p, nb, out)
f.send(t, st, hostInfo.ConnectionState, hostInfo, p, nb, out)
}
func (f *Interface) send(t header.MessageType, st header.MessageSubType, ci *ConnectionState, hostinfo *HostInfo, remote *udp.Addr, p, nb, out []byte) {
func (f *Interface) send(t header.MessageType, st header.MessageSubType, ci *ConnectionState, hostinfo *HostInfo, p, nb, out []byte) {
f.messageMetrics.Tx(t, st, 1)
f.sendNoMetrics(t, st, ci, hostinfo, nil, p, nb, out, 0)
}
func (f *Interface) sendTo(t header.MessageType, st header.MessageSubType, ci *ConnectionState, hostinfo *HostInfo, remote *udp.Addr, p, nb, out []byte) {
f.messageMetrics.Tx(t, st, 1)
f.sendNoMetrics(t, st, ci, hostinfo, remote, p, nb, out, 0)
}
// SendVia sends a payload through a Relay tunnel. No authentication or encryption is done
// to the payload for the ultimate target host, making this a useful method for sending
// handshake messages to peers through relay tunnels.
// via is the HostInfo through which the message is relayed.
// ad is the plaintext data to authenticate, but not encrypt
// nb is a buffer used to store the nonce value, re-used for performance reasons.
// out is a buffer used to store the result of the Encrypt operation
// nocopy indicates that ad is already stored at the end of out and does not need to be copied in.
func (f *Interface) SendVia(viaIfc interface{},
relayIfc interface{},
ad,
nb,
out []byte,
nocopy bool,
) {
via := viaIfc.(*HostInfo)
relay := relayIfc.(*Relay)
c := atomic.AddUint64(&via.ConnectionState.atomicMessageCounter, 1)
out = header.Encode(out, header.Version, header.Message, header.MessageRelay, relay.RemoteIndex, c)
f.connectionManager.Out(via.vpnIp)
// Authenticate the header and payload, but do not encrypt for this message type.
// The payload consists of the inner, unencrypted Nebula header, as well as the end-to-end encrypted payload.
if len(out)+len(ad)+via.ConnectionState.eKey.Overhead() > cap(out) {
via.logger(f.l).
WithField("outCap", cap(out)).
WithField("payloadLen", len(ad)).
WithField("headerLen", len(out)).
WithField("cipherOverhead", via.ConnectionState.eKey.Overhead()).
Error("SendVia out buffer not large enough for relay")
return
}
// The header bytes are written to the 'out' slice; Grow the slice to hold the header and associated data payload.
offset := len(out)
out = out[:offset+len(ad)]
// In one call path, the associated data _is_ already stored in out. In other call paths, the associated data must
// be copied into 'out'.
if !nocopy {
copy(out[offset:], ad)
}
var err error
out, err = via.ConnectionState.eKey.EncryptDanger(out, out, nil, c, nb)
if err != nil {
via.logger(f.l).WithError(err).Info("Failed to EncryptDanger in sendVia")
return
}
err = f.writers[0].WriteTo(out, via.remote)
if err != nil {
via.logger(f.l).WithError(err).Info("Failed to WriteTo in sendVia")
}
}
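In other words, a relayed frame carries two nested headers: the outer one names the relay's RemoteIndex and is only authenticated under the relay tunnel's key, while its payload is the inner Nebula header plus the end-to-end encrypted data that the relay forwards untouched. A rough sketch of the layout EncryptDanger produces here (illustrative, not byte-accurate):

// [ outer header: Message / MessageRelay, relay.RemoteIndex, counter  ]  authenticated as associated data, not encrypted
// [ inner Nebula header + end-to-end encrypted payload for the target ]  opaque to the relay, forwarded as-is
// [ AEAD tag appended by EncryptDanger over the two sections above    ]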
func (f *Interface) sendNoMetrics(t header.MessageType, st header.MessageSubType, ci *ConnectionState, hostinfo *HostInfo, remote *udp.Addr, p, nb, out []byte, q int) {
if ci.eKey == nil {
//TODO: log warning
return
}
useRelay := remote == nil && hostinfo.remote == nil
fullOut := out
if useRelay {
if len(out) < header.Len {
// out always has a capacity of mtu, but not always a length greater than the header.Len.
// Grow it to make sure the next operation works.
out = out[:header.Len]
}
// Save a header's worth of data at the front of the 'out' buffer.
out = out[header.Len:]
}
var err error
//TODO: enable if we do more than 1 tun queue
//ci.writeLock.Lock()
c := atomic.AddUint64(&ci.atomicMessageCounter, 1)
@@ -212,6 +299,7 @@ func (f *Interface) sendNoMetrics(t header.MessageType, st header.MessageSubType
}
}
var err error
out, err = ci.eKey.EncryptDanger(out, out, p, c, nb)
//TODO: see above note on lock
//ci.writeLock.Unlock()
@@ -223,10 +311,37 @@ func (f *Interface) sendNoMetrics(t header.MessageType, st header.MessageSubType
return
}
err = f.writers[q].WriteTo(out, remote)
if err != nil {
hostinfo.logger(f.l).WithError(err).
WithField("udpAddr", remote).Error("Failed to write outgoing packet")
if remote != nil {
err = f.writers[q].WriteTo(out, remote)
if err != nil {
hostinfo.logger(f.l).WithError(err).
WithField("udpAddr", remote).Error("Failed to write outgoing packet")
}
} else if hostinfo.remote != nil {
err = f.writers[q].WriteTo(out, hostinfo.remote)
if err != nil {
hostinfo.logger(f.l).WithError(err).
WithField("udpAddr", remote).Error("Failed to write outgoing packet")
}
} else {
// Try to send via a relay
for _, relayIP := range hostinfo.relayState.CopyRelayIps() {
relayHostInfo, err := f.hostMap.QueryVpnIp(relayIP)
if err != nil {
hostinfo.logger(f.l).WithField("relayIp", relayIP).WithError(err).Info("sendNoMetrics failed to find HostInfo")
continue
}
relay, ok := relayHostInfo.relayState.QueryRelayForByIp(hostinfo.vpnIp)
if !ok {
hostinfo.logger(f.l).
WithField("relayIp", relayHostInfo.vpnIp).
WithField("relayTarget", hostinfo.vpnIp).
Info("sendNoMetrics relay missing object for target")
continue
}
f.SendVia(relayHostInfo, relay, out, nb, fullOut[:header.Len+len(out)], true)
break
}
}
return
}

inside_darwin.go (new file)

@@ -0,0 +1,3 @@
package nebula
const immediatelyForwardToSelf bool = true

inside_generic.go (new file)

@@ -0,0 +1,6 @@
//go:build !darwin
// +build !darwin
package nebula
const immediatelyForwardToSelf bool = false


@@ -3,6 +3,7 @@ package nebula
import (
"context"
"errors"
"fmt"
"io"
"net"
"os"
@@ -16,24 +17,16 @@ import (
"github.com/slackhq/nebula/config"
"github.com/slackhq/nebula/firewall"
"github.com/slackhq/nebula/iputil"
"github.com/slackhq/nebula/overlay"
"github.com/slackhq/nebula/udp"
)
const mtu = 9001
type Inside interface {
io.ReadWriteCloser
Activate() error
CidrNet() *net.IPNet
DeviceName() string
WriteRaw([]byte) error
NewMultiQueueReader() (io.ReadWriteCloser, error)
}
type InterfaceConfig struct {
HostMap *HostMap
Outside *udp.Conn
Inside Inside
Inside overlay.Device
certState *CertState
Cipher string
Firewall *Firewall
@@ -49,6 +42,7 @@ type InterfaceConfig struct {
version string
caPool *cert.NebulaCAPool
disconnectInvalid bool
relayManager *relayManager
ConntrackCacheTimeout time.Duration
l *logrus.Logger
@@ -57,7 +51,7 @@ type InterfaceConfig struct {
type Interface struct {
hostMap *HostMap
outside *udp.Conn
inside Inside
inside overlay.Device
certState *CertState
cipher string
firewall *Firewall
@@ -74,6 +68,9 @@ type Interface struct {
caPool *cert.NebulaCAPool
disconnectInvalid bool
closed int32
relayManager *relayManager
sendRecvErrorConfig sendRecvErrorConfig
// rebindCount is used to decide if an active tunnel should trigger a punch notification through a lighthouse
rebindCount int8
@@ -91,6 +88,40 @@ type Interface struct {
l *logrus.Logger
}
type sendRecvErrorConfig uint8
const (
sendRecvErrorAlways sendRecvErrorConfig = iota
sendRecvErrorNever
sendRecvErrorPrivate
)
func (s sendRecvErrorConfig) ShouldSendRecvError(ip net.IP) bool {
switch s {
case sendRecvErrorPrivate:
return ip.IsPrivate()
case sendRecvErrorAlways:
return true
case sendRecvErrorNever:
return false
default:
panic(fmt.Errorf("invalid sendRecvErrorConfig value: %d", s))
}
}
func (s sendRecvErrorConfig) String() string {
switch s {
case sendRecvErrorAlways:
return "always"
case sendRecvErrorNever:
return "never"
case sendRecvErrorPrivate:
return "private"
default:
return fmt.Sprintf("invalid(%d)", s)
}
}
func NewInterface(ctx context.Context, c *InterfaceConfig) (*Interface, error) {
if c.Outside == nil {
return nil, errors.New("no outside connection")
@@ -127,6 +158,7 @@ func NewInterface(ctx context.Context, c *InterfaceConfig) (*Interface, error) {
caPool: c.caPool,
disconnectInvalid: c.disconnectInvalid,
myVpnIp: myVpnIp,
relayManager: c.relayManager,
conntrackCacheTimeout: c.ConntrackCacheTimeout,
@@ -156,7 +188,7 @@ func (f *Interface) activate() {
f.l.WithError(err).Error("Failed to get udp listen address")
}
f.l.WithField("interface", f.inside.DeviceName()).WithField("network", f.inside.CidrNet().String()).
f.l.WithField("interface", f.inside.Name()).WithField("network", f.inside.Cidr().String()).
WithField("build", f.version).WithField("udpAddr", addr).
Info("Nebula interface is active")
@@ -238,6 +270,7 @@ func (f *Interface) RegisterConfigChangeCallbacks(c *config.C) {
c.RegisterReloadCallback(f.reloadCA)
c.RegisterReloadCallback(f.reloadCertKey)
c.RegisterReloadCallback(f.reloadFirewall)
c.RegisterReloadCallback(f.reloadSendRecvError)
for _, udpConn := range f.writers {
c.RegisterReloadCallback(udpConn.ReloadConfig)
}
@@ -315,6 +348,30 @@ func (f *Interface) reloadFirewall(c *config.C) {
Info("New firewall has been installed")
}
func (f *Interface) reloadSendRecvError(c *config.C) {
if c.InitialLoad() || c.HasChanged("listen.send_recv_error") {
stringValue := c.GetString("listen.send_recv_error", "always")
switch stringValue {
case "always":
f.sendRecvErrorConfig = sendRecvErrorAlways
case "never":
f.sendRecvErrorConfig = sendRecvErrorNever
case "private":
f.sendRecvErrorConfig = sendRecvErrorPrivate
default:
if c.GetBool("listen.send_recv_error", true) {
f.sendRecvErrorConfig = sendRecvErrorAlways
} else {
f.sendRecvErrorConfig = sendRecvErrorNever
}
}
f.l.WithField("sendRecvError", f.sendRecvErrorConfig.String()).
Info("Loaded send_recv_error config")
}
}
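reloadSendRecvError accepts the three string values shown above and, through the GetBool fallback, also a bare boolean for backward compatibility (true behaves like "always", false like "never"); like the other registered callbacks it takes effect on config reload. A config sketch with illustrative values:

listen:
  # one of: always (default), never, private; a bare true/false is also accepted
  send_recv_error: private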
func (f *Interface) emitStats(ctx context.Context, i time.Duration) {
ticker := time.NewTicker(i)
defer ticker.Stop()


@@ -4,6 +4,7 @@ import (
"encoding/binary"
"fmt"
"net"
"net/netip"
)
type VpnIp uint32
@@ -39,6 +40,12 @@ func (ip VpnIp) ToIP() net.IP {
return nip
}
func (ip VpnIp) ToNetIpAddr() netip.Addr {
var nip [4]byte
binary.BigEndian.PutUint32(nip[:], uint32(ip))
return netip.AddrFrom4(nip)
}
func Ip2VpnIp(ip []byte) VpnIp {
if len(ip) == 16 {
return VpnIp(binary.BigEndian.Uint32(ip[12:16]))
@@ -46,6 +53,26 @@ func Ip2VpnIp(ip []byte) VpnIp {
return VpnIp(binary.BigEndian.Uint32(ip))
}
func ToNetIpAddr(ip net.IP) (netip.Addr, error) {
addr, ok := netip.AddrFromSlice(ip)
if !ok {
return netip.Addr{}, fmt.Errorf("invalid net.IP: %v", ip)
}
return addr, nil
}
func ToNetIpPrefix(ipNet net.IPNet) (netip.Prefix, error) {
addr, err := ToNetIpAddr(ipNet.IP)
if err != nil {
return netip.Prefix{}, err
}
ones, bits := ipNet.Mask.Size()
if ones == 0 && bits == 0 {
return netip.Prefix{}, fmt.Errorf("invalid net.IP: %v", ipNet)
}
return netip.PrefixFrom(addr, ones), nil
}
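A small usage sketch of the new conversion helpers (illustrative function, not from the change set):

func exampleNetipConversion() (netip.Prefix, netip.Addr, error) {
	_, ipNet, err := net.ParseCIDR("10.1.1.0/24")
	if err != nil {
		return netip.Prefix{}, netip.Addr{}, err
	}
	// net.IPNet -> netip.Prefix, rejecting an invalid mask
	prefix, err := ToNetIpPrefix(*ipNet)
	if err != nil {
		return netip.Prefix{}, netip.Addr{}, err
	}
	// VpnIp -> netip.Addr for the network address
	addr := Ip2VpnIp(ipNet.IP).ToNetIpAddr()
	return prefix, addr, nil
}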
// ubtoa encodes the string form of the integer v to dst[start:] and
// returns the number of bytes written to dst. The caller must ensure
// that dst has sufficient length.


@@ -7,14 +7,17 @@ import (
"fmt"
"net"
"sync"
"sync/atomic"
"time"
"unsafe"
"github.com/golang/protobuf/proto"
"github.com/rcrowley/go-metrics"
"github.com/sirupsen/logrus"
"github.com/slackhq/nebula/config"
"github.com/slackhq/nebula/header"
"github.com/slackhq/nebula/iputil"
"github.com/slackhq/nebula/udp"
"github.com/slackhq/nebula/util"
)
//TODO: if a lighthouse doesn't have an answer, clients AGGRESSIVELY REQUERY.. why? handshake manager and/or getOrHandshake?
@@ -22,13 +25,20 @@ import (
var ErrHostNotKnown = errors.New("host not known")
type netIpAndPort struct {
ip net.IP
port uint16
}
type LightHouse struct {
//TODO: We need a timer wheel to kick out vpnIps that haven't reported in a long time
sync.RWMutex //Because we concurrently read and write to our maps
amLighthouse bool
myVpnIp iputil.VpnIp
myVpnZeros iputil.VpnIp
myVpnNet *net.IPNet
punchConn *udp.Conn
punchy *Punchy
// Local cache of answers from light houses
// map of vpn Ip to answers
@@ -39,80 +49,315 @@ type LightHouse struct {
// respond with.
// - When we are not a lighthouse, this filters which addresses we accept
// from lighthouses.
remoteAllowList *RemoteAllowList
atomicRemoteAllowList *RemoteAllowList
// filters local addresses that we advertise to lighthouses
localAllowList *LocalAllowList
atomicLocalAllowList *LocalAllowList
// used to trigger the HandshakeManager when we receive HostQueryReply
handshakeTrigger chan<- iputil.VpnIp
// staticList exists to avoid having a bool in each addrMap entry
// atomicStaticList exists to avoid having a bool in each addrMap entry
// since static should be rare
staticList map[iputil.VpnIp]struct{}
lighthouses map[iputil.VpnIp]struct{}
interval int
nebulaPort uint32 // 32 bits because protobuf does not have a uint16
punchBack bool
punchDelay time.Duration
atomicStaticList map[iputil.VpnIp]struct{}
atomicLighthouses map[iputil.VpnIp]struct{}
atomicInterval int64
updateCancel context.CancelFunc
updateParentCtx context.Context
updateUdp udp.EncWriter
nebulaPort uint32 // 32 bits because protobuf does not have a uint16
atomicAdvertiseAddrs []netIpAndPort
// IP's of relays that can be used by peers to access me
atomicRelaysForMe []iputil.VpnIp
metrics *MessageMetrics
metricHolepunchTx metrics.Counter
l *logrus.Logger
}
func NewLightHouse(l *logrus.Logger, amLighthouse bool, myVpnIpNet *net.IPNet, ips []iputil.VpnIp, interval int, nebulaPort uint32, pc *udp.Conn, punchBack bool, punchDelay time.Duration, metricsEnabled bool) *LightHouse {
ones, _ := myVpnIpNet.Mask.Size()
h := LightHouse{
amLighthouse: amLighthouse,
myVpnIp: iputil.Ip2VpnIp(myVpnIpNet.IP),
myVpnZeros: iputil.VpnIp(32 - ones),
addrMap: make(map[iputil.VpnIp]*RemoteList),
nebulaPort: nebulaPort,
lighthouses: make(map[iputil.VpnIp]struct{}),
staticList: make(map[iputil.VpnIp]struct{}),
interval: interval,
punchConn: pc,
punchBack: punchBack,
punchDelay: punchDelay,
l: l,
// NewLightHouseFromConfig will build a Lighthouse struct from the values provided in the config object
// addrMap should be nil unless this is during a config reload
func NewLightHouseFromConfig(l *logrus.Logger, c *config.C, myVpnNet *net.IPNet, pc *udp.Conn, p *Punchy) (*LightHouse, error) {
amLighthouse := c.GetBool("lighthouse.am_lighthouse", false)
nebulaPort := uint32(c.GetInt("listen.port", 0))
if amLighthouse && nebulaPort == 0 {
return nil, util.NewContextualError("lighthouse.am_lighthouse enabled on node but no port number is set in config", nil, nil)
}
if metricsEnabled {
h.metrics = newLighthouseMetrics()
// If port is dynamic, discover it
if nebulaPort == 0 && pc != nil {
uPort, err := pc.LocalAddr()
if err != nil {
return nil, util.NewContextualError("Failed to get listening port", nil, err)
}
nebulaPort = uint32(uPort.Port)
}
ones, _ := myVpnNet.Mask.Size()
h := LightHouse{
amLighthouse: amLighthouse,
myVpnIp: iputil.Ip2VpnIp(myVpnNet.IP),
myVpnZeros: iputil.VpnIp(32 - ones),
myVpnNet: myVpnNet,
addrMap: make(map[iputil.VpnIp]*RemoteList),
nebulaPort: nebulaPort,
atomicLighthouses: make(map[iputil.VpnIp]struct{}),
atomicStaticList: make(map[iputil.VpnIp]struct{}),
punchConn: pc,
punchy: p,
l: l,
}
if c.GetBool("stats.lighthouse_metrics", false) {
h.metrics = newLighthouseMetrics()
h.metricHolepunchTx = metrics.GetOrRegisterCounter("messages.tx.holepunch", nil)
} else {
h.metricHolepunchTx = metrics.NilCounter{}
}
for _, ip := range ips {
h.lighthouses[ip] = struct{}{}
err := h.reload(c, true)
if err != nil {
return nil, err
}
return &h
c.RegisterReloadCallback(func(c *config.C) {
err := h.reload(c, false)
switch v := err.(type) {
case util.ContextualError:
v.Log(l)
case error:
l.WithError(err).Error("failed to reload lighthouse")
}
})
return &h, nil
}
func (lh *LightHouse) SetRemoteAllowList(allowList *RemoteAllowList) {
lh.Lock()
defer lh.Unlock()
lh.remoteAllowList = allowList
func (lh *LightHouse) GetStaticHostList() map[iputil.VpnIp]struct{} {
return *(*map[iputil.VpnIp]struct{})(atomic.LoadPointer((*unsafe.Pointer)(unsafe.Pointer(&lh.atomicStaticList))))
}
func (lh *LightHouse) SetLocalAllowList(allowList *LocalAllowList) {
lh.Lock()
defer lh.Unlock()
lh.localAllowList = allowList
func (lh *LightHouse) GetLighthouses() map[iputil.VpnIp]struct{} {
return *(*map[iputil.VpnIp]struct{})(atomic.LoadPointer((*unsafe.Pointer)(unsafe.Pointer(&lh.atomicLighthouses))))
}
func (lh *LightHouse) ValidateLHStaticEntries() error {
for lhIP, _ := range lh.lighthouses {
if _, ok := lh.staticList[lhIP]; !ok {
return fmt.Errorf("Lighthouse %s does not have a static_host_map entry", lhIP)
func (lh *LightHouse) GetRemoteAllowList() *RemoteAllowList {
return (*RemoteAllowList)(atomic.LoadPointer((*unsafe.Pointer)(unsafe.Pointer(&lh.atomicRemoteAllowList))))
}
func (lh *LightHouse) GetLocalAllowList() *LocalAllowList {
return (*LocalAllowList)(atomic.LoadPointer((*unsafe.Pointer)(unsafe.Pointer(&lh.atomicLocalAllowList))))
}
func (lh *LightHouse) GetAdvertiseAddrs() []netIpAndPort {
return *(*[]netIpAndPort)(atomic.LoadPointer((*unsafe.Pointer)(unsafe.Pointer(&lh.atomicAdvertiseAddrs))))
}
func (lh *LightHouse) GetRelaysForMe() []iputil.VpnIp {
return *(*[]iputil.VpnIp)(atomic.LoadPointer((*unsafe.Pointer)(unsafe.Pointer(&lh.atomicRelaysForMe))))
}
func (lh *LightHouse) GetUpdateInterval() int64 {
return atomic.LoadInt64(&lh.atomicInterval)
}
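The accessors above all follow the same copy-on-write pattern: reload builds a fresh map or slice, publishes it with atomic.StorePointer, and hot paths snapshot it with atomic.LoadPointer instead of taking the struct lock. Isolated as a sketch (hypothetical holder type; like the code above it relies on the Go map header being a single pointer):

type holder struct {
	atomicSet map[iputil.VpnIp]struct{}
}

// get returns the current snapshot without locking.
func (h *holder) get() map[iputil.VpnIp]struct{} {
	return *(*map[iputil.VpnIp]struct{})(atomic.LoadPointer((*unsafe.Pointer)(unsafe.Pointer(&h.atomicSet))))
}

// replace swaps in a completely new map; existing readers keep their old snapshot.
func (h *holder) replace(next map[iputil.VpnIp]struct{}) {
	atomic.StorePointer((*unsafe.Pointer)(unsafe.Pointer(&h.atomicSet)), unsafe.Pointer(&next))
}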
func (lh *LightHouse) reload(c *config.C, initial bool) error {
if initial || c.HasChanged("lighthouse.advertise_addrs") {
rawAdvAddrs := c.GetStringSlice("lighthouse.advertise_addrs", []string{})
advAddrs := make([]netIpAndPort, 0)
for i, rawAddr := range rawAdvAddrs {
fIp, fPort, err := udp.ParseIPAndPort(rawAddr)
if err != nil {
return util.NewContextualError("Unable to parse lighthouse.advertise_addrs entry", m{"addr": rawAddr, "entry": i + 1}, err)
}
if fPort == 0 {
fPort = uint16(lh.nebulaPort)
}
if ip4 := fIp.To4(); ip4 != nil && lh.myVpnNet.Contains(fIp) {
lh.l.WithField("addr", rawAddr).WithField("entry", i+1).
Warn("Ignoring lighthouse.advertise_addrs report because it is within the nebula network range")
continue
}
advAddrs = append(advAddrs, netIpAndPort{ip: fIp, port: fPort})
}
atomic.StorePointer((*unsafe.Pointer)(unsafe.Pointer(&lh.atomicAdvertiseAddrs)), unsafe.Pointer(&advAddrs))
if !initial {
lh.l.Info("lighthouse.advertise_addrs has changed")
}
}
if initial || c.HasChanged("lighthouse.interval") {
atomic.StoreInt64(&lh.atomicInterval, int64(c.GetInt("lighthouse.interval", 10)))
if !initial {
lh.l.Infof("lighthouse.interval changed to %v", lh.atomicInterval)
if lh.updateCancel != nil {
// May not always have a running routine
lh.updateCancel()
}
lh.LhUpdateWorker(lh.updateParentCtx, lh.updateUdp)
}
}
if initial || c.HasChanged("lighthouse.remote_allow_list") || c.HasChanged("lighthouse.remote_allow_ranges") {
ral, err := NewRemoteAllowListFromConfig(c, "lighthouse.remote_allow_list", "lighthouse.remote_allow_ranges")
if err != nil {
return util.NewContextualError("Invalid lighthouse.remote_allow_list", nil, err)
}
atomic.StorePointer((*unsafe.Pointer)(unsafe.Pointer(&lh.atomicRemoteAllowList)), unsafe.Pointer(ral))
if !initial {
//TODO: a diff will be annoyingly difficult
lh.l.Info("lighthouse.remote_allow_list and/or lighthouse.remote_allow_ranges has changed")
}
}
if initial || c.HasChanged("lighthouse.local_allow_list") {
lal, err := NewLocalAllowListFromConfig(c, "lighthouse.local_allow_list")
if err != nil {
return util.NewContextualError("Invalid lighthouse.local_allow_list", nil, err)
}
atomic.StorePointer((*unsafe.Pointer)(unsafe.Pointer(&lh.atomicLocalAllowList)), unsafe.Pointer(lal))
if !initial {
//TODO: a diff will be annoyingly difficult
lh.l.Info("lighthouse.local_allow_list has changed")
}
}
//NOTE: many things will get much simpler when we combine static_host_map and lighthouse.hosts in config
if initial || c.HasChanged("static_host_map") {
staticList := make(map[iputil.VpnIp]struct{})
err := lh.loadStaticMap(c, lh.myVpnNet, staticList)
if err != nil {
return err
}
atomic.StorePointer((*unsafe.Pointer)(unsafe.Pointer(&lh.atomicStaticList)), unsafe.Pointer(&staticList))
if !initial {
//TODO: we should remove any remote list entries for static hosts that were removed/modified?
lh.l.Info("static_host_map has changed")
}
}
if initial || c.HasChanged("lighthouse.hosts") {
lhMap := make(map[iputil.VpnIp]struct{})
err := lh.parseLighthouses(c, lh.myVpnNet, lhMap)
if err != nil {
return err
}
atomic.StorePointer((*unsafe.Pointer)(unsafe.Pointer(&lh.atomicLighthouses)), unsafe.Pointer(&lhMap))
if !initial {
//NOTE: we are not tearing down existing lighthouse connections because they might be used for non lighthouse traffic
lh.l.Info("lighthouse.hosts has changed")
}
}
if initial || c.HasChanged("relay.relays") {
switch c.GetBool("relay.am_relay", false) {
case true:
// Relays aren't allowed to specify other relays
if len(c.GetStringSlice("relay.relays", nil)) > 0 {
lh.l.Info("Ignoring relays from config because am_relay is true")
}
relaysForMe := []iputil.VpnIp{}
atomic.StorePointer((*unsafe.Pointer)(unsafe.Pointer(&lh.atomicRelaysForMe)), unsafe.Pointer(&relaysForMe))
case false:
relaysForMe := []iputil.VpnIp{}
for _, v := range c.GetStringSlice("relay.relays", nil) {
lh.l.WithField("RelayIP", v).Info("Read relay from config")
configRIP := net.ParseIP(v)
if configRIP != nil {
relaysForMe = append(relaysForMe, iputil.Ip2VpnIp(configRIP))
}
}
atomic.StorePointer((*unsafe.Pointer)(unsafe.Pointer(&lh.atomicRelaysForMe)), unsafe.Pointer(&relaysForMe))
}
}
return nil
}
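The reload hook above watches these keys and republishes the parsed values through the atomic setters, restarting the update worker when lighthouse.interval changes and ignoring relay.relays on hosts that are themselves relays. A config sketch with illustrative values:

lighthouse:
  interval: 10              # seconds between HostUpdateNotification reports
  advertise_addrs:          # extra addresses to report; port 0 means "use listen.port"
    - "203.0.113.10:4242"

relay:
  am_relay: false           # when true, relay.relays is ignored
  relays:                   # nebula IPs of hosts willing to relay traffic to me
    - "192.168.100.1"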
func (lh *LightHouse) parseLighthouses(c *config.C, tunCidr *net.IPNet, lhMap map[iputil.VpnIp]struct{}) error {
lhs := c.GetStringSlice("lighthouse.hosts", []string{})
if lh.amLighthouse && len(lhs) != 0 {
lh.l.Warn("lighthouse.am_lighthouse enabled on node but upstream lighthouses exist in config")
}
for i, host := range lhs {
ip := net.ParseIP(host)
if ip == nil {
return util.NewContextualError("Unable to parse lighthouse host entry", m{"host": host, "entry": i + 1}, nil)
}
if !tunCidr.Contains(ip) {
return util.NewContextualError("lighthouse host is not in our subnet, invalid", m{"vpnIp": ip, "network": tunCidr.String()}, nil)
}
lhMap[iputil.Ip2VpnIp(ip)] = struct{}{}
}
if !lh.amLighthouse && len(lhMap) == 0 {
lh.l.Warn("No lighthouse.hosts configured, this host will only be able to initiate tunnels with static_host_map entries")
}
staticList := lh.GetStaticHostList()
for lhIP, _ := range lhMap {
if _, ok := staticList[lhIP]; !ok {
return fmt.Errorf("lighthouse %s does not have a static_host_map entry", lhIP)
}
}
return nil
}
func (lh *LightHouse) loadStaticMap(c *config.C, tunCidr *net.IPNet, staticList map[iputil.VpnIp]struct{}) error {
shm := c.GetMap("static_host_map", map[interface{}]interface{}{})
i := 0
for k, v := range shm {
rip := net.ParseIP(fmt.Sprintf("%v", k))
if rip == nil {
return util.NewContextualError("Unable to parse static_host_map entry", m{"host": k, "entry": i + 1}, nil)
}
if !tunCidr.Contains(rip) {
return util.NewContextualError("static_host_map key is not in our subnet, invalid", m{"vpnIp": rip, "network": tunCidr.String(), "entry": i + 1}, nil)
}
vpnIp := iputil.Ip2VpnIp(rip)
vals, ok := v.([]interface{})
if ok {
for _, v := range vals {
ip, port, err := udp.ParseIPAndPort(fmt.Sprintf("%v", v))
if err != nil {
return util.NewContextualError("Static host address could not be parsed", m{"vpnIp": vpnIp, "entry": i + 1}, err)
}
lh.addStaticRemote(vpnIp, udp.NewAddr(ip, port), staticList)
}
} else {
ip, port, err := udp.ParseIPAndPort(fmt.Sprintf("%v", v))
if err != nil {
return util.NewContextualError("Static host address could not be parsed", m{"vpnIp": vpnIp, "entry": i + 1}, err)
}
lh.addStaticRemote(vpnIp, udp.NewAddr(ip, port), staticList)
}
i++
}
return nil
}
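loadStaticMap accepts either a single "ip:port" string or a list of them for each vpn IP key, and parseLighthouses requires every lighthouse.hosts entry to also be present in static_host_map. A sketch of the expected shape (addresses illustrative, assuming a 192.168.100.0/24 nebula network):

static_host_map:
  "192.168.100.1": ["203.0.113.5:4242"]   # list form
  "192.168.100.2": "198.51.100.7:4242"    # a single entry is also accepted

lighthouse:
  hosts:
    - "192.168.100.1"                     # must also appear in static_host_map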
@@ -140,16 +385,17 @@ func (lh *LightHouse) QueryServer(ip iputil.VpnIp, f udp.EncWriter) {
}
// Send a query to the lighthouses and hope for the best next time
query, err := proto.Marshal(NewLhQueryByInt(ip))
query, err := NewLhQueryByInt(ip).Marshal()
if err != nil {
lh.l.WithError(err).WithField("vpnIp", ip).Error("Failed to marshal lighthouse query payload")
return
}
lh.metricTx(NebulaMeta_HostQuery, int64(len(lh.lighthouses)))
lighthouses := lh.GetLighthouses()
lh.metricTx(NebulaMeta_HostQuery, int64(len(lighthouses)))
nb := make([]byte, 12, 12)
out := make([]byte, mtu)
for n := range lh.lighthouses {
for n := range lighthouses {
f.SendMessageToVpnIp(header.LightHouse, 0, n, query, nb, out)
}
}
@@ -197,7 +443,7 @@ func (lh *LightHouse) queryAndPrepMessage(vpnIp iputil.VpnIp, f func(*cache) (in
func (lh *LightHouse) DeleteVpnIp(vpnIp iputil.VpnIp) {
// First we check the static mapping
// and do nothing if it is there
if _, ok := lh.staticList[vpnIp]; ok {
if _, ok := lh.GetStaticHostList()[vpnIp]; ok {
return
}
lh.Lock()
@@ -214,7 +460,8 @@ func (lh *LightHouse) DeleteVpnIp(vpnIp iputil.VpnIp) {
// AddStaticRemote adds a static host entry for vpnIp as ourselves as the owner
// We are the owner because we don't want a lighthouse server to advertise for static hosts it was configured with
// And we don't want a lighthouse query reply to interfere with our learned cache if we are a client
func (lh *LightHouse) AddStaticRemote(vpnIp iputil.VpnIp, toAddr *udp.Addr) {
//NOTE: this function should not interact with any hot path objects, like lh.staticList, the caller should handle it
func (lh *LightHouse) addStaticRemote(vpnIp iputil.VpnIp, toAddr *udp.Addr, staticList map[iputil.VpnIp]struct{}) {
lh.Lock()
am := lh.unlockedGetRemoteList(vpnIp)
am.Lock()
@@ -236,8 +483,8 @@ func (lh *LightHouse) AddStaticRemote(vpnIp iputil.VpnIp, toAddr *udp.Addr) {
am.unlockedPrependV6(lh.myVpnIp, to)
}
// Mark it as static
lh.staticList[vpnIp] = struct{}{}
// Mark it as static in the caller provided map
staticList[vpnIp] = struct{}{}
}
// unlockedGetRemoteList assumes you have the lh lock
@@ -252,7 +499,7 @@ func (lh *LightHouse) unlockedGetRemoteList(vpnIp iputil.VpnIp) *RemoteList {
// unlockedShouldAddV4 checks if to is allowed by our allow list
func (lh *LightHouse) unlockedShouldAddV4(vpnIp iputil.VpnIp, to *Ip4AndPort) bool {
allow := lh.remoteAllowList.AllowIpV4(vpnIp, iputil.VpnIp(to.Ip))
allow := lh.GetRemoteAllowList().AllowIpV4(vpnIp, iputil.VpnIp(to.Ip))
if lh.l.Level >= logrus.TraceLevel {
lh.l.WithField("remoteIp", vpnIp).WithField("allow", allow).Trace("remoteAllowList.Allow")
}
@@ -266,7 +513,7 @@ func (lh *LightHouse) unlockedShouldAddV4(vpnIp iputil.VpnIp, to *Ip4AndPort) bo
// unlockedShouldAddV6 checks if to is allowed by our allow list
func (lh *LightHouse) unlockedShouldAddV6(vpnIp iputil.VpnIp, to *Ip6AndPort) bool {
allow := lh.remoteAllowList.AllowIpV6(vpnIp, to.Hi, to.Lo)
allow := lh.GetRemoteAllowList().AllowIpV6(vpnIp, to.Hi, to.Lo)
if lh.l.Level >= logrus.TraceLevel {
lh.l.WithField("remoteIp", lhIp6ToIp(to)).WithField("allow", allow).Trace("remoteAllowList.Allow")
}
@@ -287,7 +534,7 @@ func lhIp6ToIp(v *Ip6AndPort) net.IP {
}
func (lh *LightHouse) IsLighthouseIP(vpnIp iputil.VpnIp) bool {
if _, ok := lh.lighthouses[vpnIp]; ok {
if _, ok := lh.GetLighthouses()[vpnIp]; ok {
return true
}
return false
@@ -329,18 +576,24 @@ func NewUDPAddrFromLH6(ipp *Ip6AndPort) *udp.Addr {
}
func (lh *LightHouse) LhUpdateWorker(ctx context.Context, f udp.EncWriter) {
if lh.amLighthouse || lh.interval == 0 {
lh.updateParentCtx = ctx
lh.updateUdp = f
interval := lh.GetUpdateInterval()
if lh.amLighthouse || interval == 0 {
return
}
clockSource := time.NewTicker(time.Second * time.Duration(lh.interval))
clockSource := time.NewTicker(time.Second * time.Duration(interval))
updateCtx, cancel := context.WithCancel(ctx)
lh.updateCancel = cancel
defer clockSource.Stop()
for {
lh.SendUpdate(f)
select {
case <-ctx.Done():
case <-updateCtx.Done():
return
case <-clockSource.C:
continue
@@ -352,7 +605,16 @@ func (lh *LightHouse) SendUpdate(f udp.EncWriter) {
var v4 []*Ip4AndPort
var v6 []*Ip6AndPort
for _, e := range *localIps(lh.l, lh.localAllowList) {
for _, e := range lh.GetAdvertiseAddrs() {
if ip := e.ip.To4(); ip != nil {
v4 = append(v4, NewIp4AndPort(e.ip, uint32(e.port)))
} else {
v6 = append(v6, NewIp6AndPort(e.ip, uint32(e.port)))
}
}
lal := lh.GetLocalAllowList()
for _, e := range *localIps(lh.l, lal) {
if ip4 := e.To4(); ip4 != nil && ipMaskContains(lh.myVpnIp, lh.myVpnZeros, iputil.Ip2VpnIp(ip4)) {
continue
}
@@ -364,26 +626,34 @@ func (lh *LightHouse) SendUpdate(f udp.EncWriter) {
v6 = append(v6, NewIp6AndPort(e, lh.nebulaPort))
}
}
var relays []uint32
for _, r := range lh.GetRelaysForMe() {
relays = append(relays, (uint32)(r))
}
m := &NebulaMeta{
Type: NebulaMeta_HostUpdateNotification,
Details: &NebulaMetaDetails{
VpnIp: uint32(lh.myVpnIp),
Ip4AndPorts: v4,
Ip6AndPorts: v6,
RelayVpnIp: relays,
},
}
lh.metricTx(NebulaMeta_HostUpdateNotification, int64(len(lh.lighthouses)))
lighthouses := lh.GetLighthouses()
lh.metricTx(NebulaMeta_HostUpdateNotification, int64(len(lighthouses)))
nb := make([]byte, 12, 12)
out := make([]byte, mtu)
mm, err := proto.Marshal(m)
mm, err := m.Marshal()
if err != nil {
lh.l.WithError(err).Error("Error while marshaling for lighthouse update")
return
}
for vpnIp := range lh.lighthouses {
for vpnIp := range lighthouses {
f.SendMessageToVpnIp(header.LightHouse, 0, vpnIp, mm, nb, out)
}
}
@@ -430,6 +700,7 @@ func (lhh *LightHouseHandler) resetMeta() *NebulaMeta {
// Keep the array memory around
details.Ip4AndPorts = details.Ip4AndPorts[:0]
details.Ip6AndPorts = details.Ip6AndPorts[:0]
details.RelayVpnIp = details.RelayVpnIp[:0]
lhh.meta.Details = details
return lhh.meta
@@ -546,6 +817,10 @@ func (lhh *LightHouseHandler) coalesceAnswers(c *cache, n *NebulaMeta) {
n.Details.Ip6AndPorts = append(n.Details.Ip6AndPorts, c.v6.reported...)
}
}
if c.relay != nil {
n.Details.RelayVpnIp = append(n.Details.RelayVpnIp, c.relay.relay...)
}
}
func (lhh *LightHouseHandler) handleHostQueryReply(n *NebulaMeta, vpnIp iputil.VpnIp) {
@@ -561,6 +836,7 @@ func (lhh *LightHouseHandler) handleHostQueryReply(n *NebulaMeta, vpnIp iputil.V
certVpnIp := iputil.VpnIp(n.Details.VpnIp)
am.unlockedSetV4(vpnIp, certVpnIp, n.Details.Ip4AndPorts, lhh.lh.unlockedShouldAddV4)
am.unlockedSetV6(vpnIp, certVpnIp, n.Details.Ip6AndPorts, lhh.lh.unlockedShouldAddV6)
am.unlockedSetRelay(vpnIp, certVpnIp, n.Details.RelayVpnIp)
am.Unlock()
// Non-blocking attempt to trigger, skip if it would block
@@ -594,6 +870,7 @@ func (lhh *LightHouseHandler) handleHostUpdateNotification(n *NebulaMeta, vpnIp
certVpnIp := iputil.VpnIp(n.Details.VpnIp)
am.unlockedSetV4(vpnIp, certVpnIp, n.Details.Ip4AndPorts, lhh.lh.unlockedShouldAddV4)
am.unlockedSetV6(vpnIp, certVpnIp, n.Details.Ip6AndPorts, lhh.lh.unlockedShouldAddV6)
am.unlockedSetRelay(vpnIp, certVpnIp, n.Details.RelayVpnIp)
am.Unlock()
}
@@ -609,7 +886,7 @@ func (lhh *LightHouseHandler) handleHostPunchNotification(n *NebulaMeta, vpnIp i
}
go func() {
time.Sleep(lhh.lh.punchDelay)
time.Sleep(lhh.lh.punchy.GetDelay())
lhh.lh.metricHolepunchTx.Inc(1)
lhh.lh.punchConn.WriteTo(empty, vpnPeer)
}()
@@ -631,7 +908,7 @@ func (lhh *LightHouseHandler) handleHostPunchNotification(n *NebulaMeta, vpnIp i
// This sends a nebula test packet to the host trying to contact us. In the case
// of a double nat or other difficult scenario, this may help establish
// a tunnel.
if lhh.lh.punchBack {
if lhh.lh.punchy.GetRespond() {
queryVpnIp := iputil.VpnIp(n.Details.VpnIp)
go func() {
time.Sleep(time.Second * 5)


@@ -5,11 +5,11 @@ import (
"net"
"testing"
"github.com/golang/protobuf/proto"
"github.com/slackhq/nebula/config"
"github.com/slackhq/nebula/header"
"github.com/slackhq/nebula/iputil"
"github.com/slackhq/nebula/test"
"github.com/slackhq/nebula/udp"
"github.com/slackhq/nebula/util"
"github.com/stretchr/testify/assert"
)
@@ -19,7 +19,7 @@ func TestOldIPv4Only(t *testing.T) {
// This test ensures our new ipv6 enabled LH protobuf IpAndPorts works with the old style to enable backwards compatibility
b := []byte{8, 129, 130, 132, 80, 16, 10}
var m Ip4AndPort
err := proto.Unmarshal(b, &m)
err := m.Unmarshal(b)
assert.NoError(t, err)
assert.Equal(t, "10.1.1.1", iputil.VpnIp(m.GetIp()).String())
}
@@ -35,45 +35,44 @@ func TestNewLhQuery(t *testing.T) {
assert.IsType(t, &NebulaMeta{}, a)
// It should also Marshal fine
b, err := proto.Marshal(a)
b, err := a.Marshal()
assert.Nil(t, err)
// and then Unmarshal fine
n := &NebulaMeta{}
err = proto.Unmarshal(b, n)
err = n.Unmarshal(b)
assert.Nil(t, err)
}
func Test_lhStaticMapping(t *testing.T) {
l := util.NewTestLogger()
l := test.NewLogger()
_, myVpnNet, _ := net.ParseCIDR("10.128.0.1/16")
lh1 := "10.128.0.2"
lh1IP := net.ParseIP(lh1)
udpServer, _ := udp.NewListener(l, "0.0.0.0", 0, true, 2)
meh := NewLightHouse(l, true, &net.IPNet{IP: net.IP{0, 0, 0, 1}, Mask: net.IPMask{255, 255, 255, 255}}, []iputil.VpnIp{iputil.Ip2VpnIp(lh1IP)}, 10, 10003, udpServer, false, 1, false)
meh.AddStaticRemote(iputil.Ip2VpnIp(lh1IP), udp.NewAddr(lh1IP, uint16(4242)))
err := meh.ValidateLHStaticEntries()
c := config.NewC(l)
c.Settings["lighthouse"] = map[interface{}]interface{}{"hosts": []interface{}{lh1}}
c.Settings["static_host_map"] = map[interface{}]interface{}{lh1: []interface{}{"1.1.1.1:4242"}}
_, err := NewLightHouseFromConfig(l, c, myVpnNet, nil, nil)
assert.Nil(t, err)
lh2 := "10.128.0.3"
lh2IP := net.ParseIP(lh2)
meh = NewLightHouse(l, true, &net.IPNet{IP: net.IP{0, 0, 0, 1}, Mask: net.IPMask{255, 255, 255, 255}}, []iputil.VpnIp{iputil.Ip2VpnIp(lh1IP), iputil.Ip2VpnIp(lh2IP)}, 10, 10003, udpServer, false, 1, false)
meh.AddStaticRemote(iputil.Ip2VpnIp(lh1IP), udp.NewAddr(lh1IP, uint16(4242)))
err = meh.ValidateLHStaticEntries()
assert.EqualError(t, err, "Lighthouse 10.128.0.3 does not have a static_host_map entry")
c = config.NewC(l)
c.Settings["lighthouse"] = map[interface{}]interface{}{"hosts": []interface{}{lh1, lh2}}
c.Settings["static_host_map"] = map[interface{}]interface{}{lh1: []interface{}{"100.1.1.1:4242"}}
_, err = NewLightHouseFromConfig(l, c, myVpnNet, nil, nil)
assert.EqualError(t, err, "lighthouse 10.128.0.3 does not have a static_host_map entry")
}
func BenchmarkLighthouseHandleRequest(b *testing.B) {
l := util.NewTestLogger()
lh1 := "10.128.0.2"
lh1IP := net.ParseIP(lh1)
l := test.NewLogger()
_, myVpnNet, _ := net.ParseCIDR("10.128.0.1/0")
udpServer, _ := udp.NewListener(l, "0.0.0.0", 0, true, 2)
lh := NewLightHouse(l, true, &net.IPNet{IP: net.IP{0, 0, 0, 1}, Mask: net.IPMask{0, 0, 0, 0}}, []iputil.VpnIp{iputil.Ip2VpnIp(lh1IP)}, 10, 10003, udpServer, false, 1, false)
c := config.NewC(l)
lh, err := NewLightHouseFromConfig(l, c, myVpnNet, nil, nil)
if !assert.NoError(b, err) {
b.Fatal()
}
hAddr := udp.NewAddrFromString("4.5.6.7:12345")
hAddr2 := udp.NewAddrFromString("4.5.6.7:12346")
@@ -112,7 +111,7 @@ func BenchmarkLighthouseHandleRequest(b *testing.B) {
Ip4AndPorts: nil,
},
}
p, err := proto.Marshal(req)
p, err := req.Marshal()
assert.NoError(b, err)
for n := 0; n < b.N; n++ {
lhh.HandleRequest(rAddr, 2, p, mw)
@@ -127,7 +126,7 @@ func BenchmarkLighthouseHandleRequest(b *testing.B) {
Ip4AndPorts: nil,
},
}
p, err := proto.Marshal(req)
p, err := req.Marshal()
assert.NoError(b, err)
for n := 0; n < b.N; n++ {
@@ -137,7 +136,7 @@ func BenchmarkLighthouseHandleRequest(b *testing.B) {
}
func TestLighthouse_Memory(t *testing.T) {
l := util.NewTestLogger()
l := test.NewLogger()
myUdpAddr0 := &udp.Addr{IP: net.ParseIP("10.0.0.2"), Port: 4242}
myUdpAddr1 := &udp.Addr{IP: net.ParseIP("192.168.0.2"), Port: 4242}
@@ -160,8 +159,11 @@ func TestLighthouse_Memory(t *testing.T) {
theirUdpAddr4 := &udp.Addr{IP: net.ParseIP("24.15.0.3"), Port: 4242}
theirVpnIp := iputil.Ip2VpnIp(net.ParseIP("10.128.0.3"))
udpServer, _ := udp.NewListener(l, "0.0.0.0", 0, true, 2)
lh := NewLightHouse(l, true, &net.IPNet{IP: net.IP{10, 128, 0, 1}, Mask: net.IPMask{255, 255, 255, 0}}, []iputil.VpnIp{}, 10, 10003, udpServer, false, 1, false)
c := config.NewC(l)
c.Settings["lighthouse"] = map[interface{}]interface{}{"am_lighthouse": true}
c.Settings["listen"] = map[interface{}]interface{}{"port": 4242}
lh, err := NewLightHouseFromConfig(l, c, &net.IPNet{IP: net.IP{10, 128, 0, 1}, Mask: net.IPMask{255, 255, 255, 0}}, nil, nil)
assert.NoError(t, err)
lhh := lh.NewRequestHandler()
// Test that my first update responds with just that
@@ -179,9 +181,16 @@ func TestLighthouse_Memory(t *testing.T) {
r = newLHHostRequest(myUdpAddr0, myVpnIp, myVpnIp, lhh)
assertIp4InArray(t, r.msg.Details.Ip4AndPorts, myUdpAddr1, myUdpAddr4)
// Update a different host
// Update a different host and ask about it
newLHHostUpdate(theirUdpAddr0, theirVpnIp, []*udp.Addr{theirUdpAddr1, theirUdpAddr2, theirUdpAddr3, theirUdpAddr4}, lhh)
r = newLHHostRequest(theirUdpAddr0, theirVpnIp, theirVpnIp, lhh)
assertIp4InArray(t, r.msg.Details.Ip4AndPorts, theirUdpAddr1, theirUdpAddr2, theirUdpAddr3, theirUdpAddr4)
// Have both hosts ask about the other
r = newLHHostRequest(theirUdpAddr0, theirVpnIp, myVpnIp, lhh)
assertIp4InArray(t, r.msg.Details.Ip4AndPorts, myUdpAddr1, myUdpAddr4)
r = newLHHostRequest(myUdpAddr0, myVpnIp, theirVpnIp, lhh)
assertIp4InArray(t, r.msg.Details.Ip4AndPorts, theirUdpAddr1, theirUdpAddr2, theirUdpAddr3, theirUdpAddr4)
// Make sure we didn't get changed
@@ -224,6 +233,18 @@ func TestLighthouse_Memory(t *testing.T) {
assertIp4InArray(t, r.msg.Details.Ip4AndPorts, good)
}
func TestLighthouse_reload(t *testing.T) {
l := test.NewLogger()
c := config.NewC(l)
c.Settings["lighthouse"] = map[interface{}]interface{}{"am_lighthouse": true}
c.Settings["listen"] = map[interface{}]interface{}{"port": 4242}
lh, err := NewLightHouseFromConfig(l, c, &net.IPNet{IP: net.IP{10, 128, 0, 1}, Mask: net.IPMask{255, 255, 255, 0}}, nil, nil)
assert.NoError(t, err)
c.Settings["static_host_map"] = map[interface{}]interface{}{"10.128.0.2": []interface{}{"1.1.1.1:4242"}}
lh.reload(c, false)
}
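As a hedged aside, outside of tests the reload above is normally driven by a config change callback rather than called directly; a minimal sketch of that wiring, assuming the callback is not already registered by the constructor:
c.RegisterReloadCallback(func(c *config.C) {
	// re-apply reloadable lighthouse settings (e.g. static_host_map) on config reload
	lh.reload(c, false)
})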
func newLHHostRequest(fromAddr *udp.Addr, myVpnIp, queryVpnIp iputil.VpnIp, lhh *LightHouseHandler) testLhReply {
req := &NebulaMeta{
Type: NebulaMeta_HostQuery,
@@ -237,7 +258,10 @@ func newLHHostRequest(fromAddr *udp.Addr, myVpnIp, queryVpnIp iputil.VpnIp, lhh
panic(err)
}
w := &testEncWriter{}
filter := NebulaMeta_HostQueryReply
w := &testEncWriter{
metaFilter: &filter,
}
lhh.HandleRequest(fromAddr, myVpnIp, b, w)
return w.lastReply
}
@@ -266,7 +290,7 @@ func newLHHostUpdate(fromAddr *udp.Addr, vpnIp iputil.VpnIp, addrs []*udp.Addr,
//TODO: this is a RemoteList test
//func Test_lhRemoteAllowList(t *testing.T) {
// l := NewTestLogger()
// l := NewLogger()
// c := NewConfig(l)
// c.Settings["remoteallowlist"] = map[interface{}]interface{}{
// "10.20.0.0/12": false,
@@ -344,18 +368,27 @@ type testLhReply struct {
}
type testEncWriter struct {
lastReply testLhReply
lastReply testLhReply
metaFilter *NebulaMeta_MessageType
}
func (tw *testEncWriter) SendVia(via interface{}, relay interface{}, ad, nb, out []byte, nocopy bool) {
}
func (tw *testEncWriter) Handshake(vpnIp iputil.VpnIp) {
}
func (tw *testEncWriter) SendMessageToVpnIp(t header.MessageType, st header.MessageSubType, vpnIp iputil.VpnIp, p, _, _ []byte) {
tw.lastReply = testLhReply{
nebType: t,
nebSubType: st,
vpnIp: vpnIp,
msg: &NebulaMeta{},
msg := &NebulaMeta{}
err := msg.Unmarshal(p)
if tw.metaFilter == nil || msg.Type == *tw.metaFilter {
tw.lastReply = testLhReply{
nebType: t,
nebSubType: st,
vpnIp: vpnIp,
msg: msg,
}
}
err := proto.Unmarshal(p, tw.lastReply.msg)
if err != nil {
panic(err)
}
@@ -363,7 +396,10 @@ func (tw *testEncWriter) SendMessageToVpnIp(t header.MessageType, st header.Mess
// assertIp4InArray asserts every address in want is at the same position in have and that the lengths match
func assertIp4InArray(t *testing.T, have []*Ip4AndPort, want ...*udp.Addr) {
assert.Len(t, have, len(want))
if !assert.Len(t, have, len(want)) {
return
}
for k, w := range want {
if !(have[k].Ip == uint32(iputil.Ip2VpnIp(w.IP)) && have[k].Port == uint32(w.Port)) {
assert.Fail(t, fmt.Sprintf("Response did not contain: %v:%v at %v; %v", w.IP, w.Port, k, translateV4toUdpAddr(have)))
@@ -373,7 +409,10 @@ func assertIp4InArray(t *testing.T, have []*Ip4AndPort, want ...*udp.Addr) {
// assertUdpAddrInArray asserts every address in want is at the same position in have and that the lengths match
func assertUdpAddrInArray(t *testing.T, have []*udp.Addr, want ...*udp.Addr) {
assert.Len(t, have, len(want))
if !assert.Len(t, have, len(want)) {
return
}
for k, w := range want {
if !(have[k].IP.Equal(w.IP) && have[k].Port == w.Port) {
assert.Fail(t, fmt.Sprintf("Response did not contain: %v at %v; %v", w, k, have))


@@ -1,7 +1,6 @@
package nebula
import (
"errors"
"fmt"
"strings"
"time"
@@ -10,38 +9,6 @@ import (
"github.com/slackhq/nebula/config"
)
type ContextualError struct {
RealError error
Fields map[string]interface{}
Context string
}
func NewContextualError(msg string, fields map[string]interface{}, realError error) ContextualError {
return ContextualError{Context: msg, Fields: fields, RealError: realError}
}
func (ce ContextualError) Error() string {
if ce.RealError == nil {
return ce.Context
}
return ce.RealError.Error()
}
func (ce ContextualError) Unwrap() error {
if ce.RealError == nil {
return errors.New(ce.Context)
}
return ce.RealError
}
func (ce *ContextualError) Log(lr *logrus.Logger) {
if ce.RealError != nil {
lr.WithFields(ce.Fields).WithError(ce.RealError).Error(ce.Context)
} else {
lr.WithFields(ce.Fields).Error(ce.Context)
}
}
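For context, a hedged sketch of how these wrapped errors are typically consumed; it assumes the type keeps the same methods after its move to the util package, and doThing is a hypothetical stand-in:
if err := doThing(); err != nil {
	ce := util.NewContextualError("Failed to do thing", map[string]interface{}{"queue": 1}, err)
	ce.Log(logger) // logs Fields plus Context through logrus, as shown above
}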
func configLogger(l *logrus.Logger, c *config.C) error {
// set up our logging level
logLevel, err := logrus.ParseLevel(strings.ToLower(c.GetString("logging.level", "info")))

main.go

@@ -3,15 +3,17 @@ package nebula
import (
"context"
"encoding/binary"
"errors"
"fmt"
"net"
"time"
"github.com/sirupsen/logrus"
"github.com/slackhq/nebula/config"
"github.com/slackhq/nebula/iputil"
"github.com/slackhq/nebula/overlay"
"github.com/slackhq/nebula/sshd"
"github.com/slackhq/nebula/udp"
"github.com/slackhq/nebula/util"
"gopkg.in/yaml.v2"
)
@@ -44,7 +46,7 @@ func Main(c *config.C, configTest bool, buildVersion string, logger *logrus.Logg
err := configLogger(l, c)
if err != nil {
return nil, NewContextualError("Failed to configure the logger", nil, err)
return nil, util.NewContextualError("Failed to configure the logger", nil, err)
}
c.RegisterReloadCallback(func(c *config.C) {
@@ -57,33 +59,25 @@ func Main(c *config.C, configTest bool, buildVersion string, logger *logrus.Logg
caPool, err := loadCAFromConfig(l, c)
if err != nil {
//The errors coming out of loadCA are already nicely formatted
return nil, NewContextualError("Failed to load ca from config", nil, err)
return nil, util.NewContextualError("Failed to load ca from config", nil, err)
}
l.WithField("fingerprints", caPool.GetFingerprints()).Debug("Trusted CA fingerprints")
cs, err := NewCertStateFromConfig(c)
if err != nil {
//The errors coming out of NewCertStateFromConfig are already nicely formatted
return nil, NewContextualError("Failed to load certificate from config", nil, err)
return nil, util.NewContextualError("Failed to load certificate from config", nil, err)
}
l.WithField("cert", cs.certificate).Debug("Client nebula certificate")
fw, err := NewFirewallFromConfig(l, cs.certificate, c)
if err != nil {
return nil, NewContextualError("Error while loading firewall rules", nil, err)
return nil, util.NewContextualError("Error while loading firewall rules", nil, err)
}
l.WithField("firewallHash", fw.GetRuleHash()).Info("Firewall started")
// TODO: make sure mask is 4 bytes
tunCidr := cs.certificate.Details.Ips[0]
routes, err := parseRoutes(c, tunCidr)
if err != nil {
return nil, NewContextualError("Could not parse tun.routes", nil, err)
}
unsafeRoutes, err := parseUnsafeRoutes(c, tunCidr)
if err != nil {
return nil, NewContextualError("Could not parse tun.unsafe_routes", nil, err)
}
ssh, err := sshd.NewSSHServer(l.WithField("subsystem", "sshd"))
wireSSHReload(l, ssh, c)
@@ -91,7 +85,7 @@ func Main(c *config.C, configTest bool, buildVersion string, logger *logrus.Logg
if c.GetBool("sshd.enabled", false) {
sshStart, err = configSSH(l, ssh, c)
if err != nil {
return nil, NewContextualError("Error while configuring the sshd", nil, err)
return nil, util.NewContextualError("Error while configuring the sshd", nil, err)
}
}
@@ -136,46 +130,21 @@ func Main(c *config.C, configTest bool, buildVersion string, logger *logrus.Logg
l.WithField("duration", conntrackCacheTimeout).Info("Using routine-local conntrack cache")
}
var tun Inside
var tun overlay.Device
if !configTest {
c.CatchHUP(ctx)
switch {
case c.GetBool("tun.disabled", false):
tun = newDisabledTun(tunCidr, c.GetInt("tun.tx_queue", 500), c.GetBool("stats.message_metrics", false), l)
case tunFd != nil:
tun, err = newTunFromFd(
l,
*tunFd,
tunCidr,
c.GetInt("tun.mtu", DEFAULT_MTU),
routes,
unsafeRoutes,
c.GetInt("tun.tx_queue", 500),
)
default:
tun, err = newTun(
l,
c.GetString("tun.dev", ""),
tunCidr,
c.GetInt("tun.mtu", DEFAULT_MTU),
routes,
unsafeRoutes,
c.GetInt("tun.tx_queue", 500),
routines > 1,
)
}
tun, err = overlay.NewDeviceFromConfig(c, l, tunCidr, tunFd, routines)
if err != nil {
return nil, NewContextualError("Failed to get a tun/tap device", nil, err)
return nil, util.NewContextualError("Failed to get a tun/tap device", nil, err)
}
}
defer func() {
if reterr != nil {
tun.Close()
}
}()
defer func() {
if reterr != nil {
tun.Close()
}
}()
}
// set up our UDP listener
udpConns := make([]*udp.Conn, routines)
@@ -185,19 +154,10 @@ func Main(c *config.C, configTest bool, buildVersion string, logger *logrus.Logg
for i := 0; i < routines; i++ {
udpServer, err := udp.NewListener(l, c.GetString("listen.host", "0.0.0.0"), port, routines > 1, c.GetInt("listen.batch", 64))
if err != nil {
return nil, NewContextualError("Failed to open udp listener", m{"queue": i}, err)
return nil, util.NewContextualError("Failed to open udp listener", m{"queue": i}, err)
}
udpServer.ReloadConfig(c)
udpConns[i] = udpServer
// If port is dynamic, discover it
if port == 0 {
uPort, err := udpServer.LocalAddr()
if err != nil {
return nil, NewContextualError("Failed to get listening port", nil, err)
}
port = int(uPort.Port)
}
}
}
@@ -209,7 +169,7 @@ func Main(c *config.C, configTest bool, buildVersion string, logger *logrus.Logg
for _, rawPreferredRange := range rawPreferredRanges {
_, preferredRange, err := net.ParseCIDR(rawPreferredRange)
if err != nil {
return nil, NewContextualError("Failed to parse preferred ranges", nil, err)
return nil, util.NewContextualError("Failed to parse preferred ranges", nil, err)
}
preferredRanges = append(preferredRanges, preferredRange)
}
@@ -222,7 +182,7 @@ func Main(c *config.C, configTest bool, buildVersion string, logger *logrus.Logg
if rawLocalRange != "" {
_, localRange, err := net.ParseCIDR(rawLocalRange)
if err != nil {
return nil, NewContextualError("Failed to parse local_range", nil, err)
return nil, util.NewContextualError("Failed to parse local_range", nil, err)
}
// Check if the entry for local_range was already specified in
@@ -240,8 +200,6 @@ func Main(c *config.C, configTest bool, buildVersion string, logger *logrus.Logg
}
hostMap := NewHostMap(l, "main", tunCidr, preferredRanges)
hostMap.addUnsafeRoutes(&unsafeRoutes)
hostMap.metricsEnabled = c.GetBool("stats.message_metrics", false)
l.WithField("network", hostMap.vpnCIDR).WithField("preferredRanges", hostMap.preferredRanges).Info("Main HostMap created")
@@ -251,91 +209,18 @@ func Main(c *config.C, configTest bool, buildVersion string, logger *logrus.Logg
go hostMap.Promoter(config.GetInt("promoter.interval"))
*/
punchy := NewPunchyFromConfig(c)
if punchy.Punch && !configTest {
punchy := NewPunchyFromConfig(l, c)
if punchy.GetPunch() && !configTest {
l.Info("UDP hole punching enabled")
go hostMap.Punchy(ctx, udpConns[0])
}
amLighthouse := c.GetBool("lighthouse.am_lighthouse", false)
// fatal if am_lighthouse is enabled but we are using an ephemeral port
if amLighthouse && (c.GetInt("listen.port", 0) == 0) {
return nil, NewContextualError("lighthouse.am_lighthouse enabled on node but no port number is set in config", nil, nil)
}
// warn if am_lighthouse is enabled but upstream lighthouses exists
rawLighthouseHosts := c.GetStringSlice("lighthouse.hosts", []string{})
if amLighthouse && len(rawLighthouseHosts) != 0 {
l.Warn("lighthouse.am_lighthouse enabled on node but upstream lighthouses exist in config")
}
lighthouseHosts := make([]iputil.VpnIp, len(rawLighthouseHosts))
for i, host := range rawLighthouseHosts {
ip := net.ParseIP(host)
if ip == nil {
return nil, NewContextualError("Unable to parse lighthouse host entry", m{"host": host, "entry": i + 1}, nil)
}
if !tunCidr.Contains(ip) {
return nil, NewContextualError("lighthouse host is not in our subnet, invalid", m{"vpnIp": ip, "network": tunCidr.String()}, nil)
}
lighthouseHosts[i] = iputil.Ip2VpnIp(ip)
}
lightHouse := NewLightHouse(
l,
amLighthouse,
tunCidr,
lighthouseHosts,
//TODO: change to a duration
c.GetInt("lighthouse.interval", 10),
uint32(port),
udpConns[0],
punchy.Respond,
punchy.Delay,
c.GetBool("stats.lighthouse_metrics", false),
)
remoteAllowList, err := NewRemoteAllowListFromConfig(c, "lighthouse.remote_allow_list", "lighthouse.remote_allow_ranges")
if err != nil {
return nil, NewContextualError("Invalid lighthouse.remote_allow_list", nil, err)
}
lightHouse.SetRemoteAllowList(remoteAllowList)
localAllowList, err := NewLocalAllowListFromConfig(c, "lighthouse.local_allow_list")
if err != nil {
return nil, NewContextualError("Invalid lighthouse.local_allow_list", nil, err)
}
lightHouse.SetLocalAllowList(localAllowList)
//TODO: Move all of this inside functions in lighthouse.go
for k, v := range c.GetMap("static_host_map", map[interface{}]interface{}{}) {
ip := net.ParseIP(fmt.Sprintf("%v", k))
vpnIp := iputil.Ip2VpnIp(ip)
if !tunCidr.Contains(ip) {
return nil, NewContextualError("static_host_map key is not in our subnet, invalid", m{"vpnIp": vpnIp, "network": tunCidr.String()}, nil)
}
vals, ok := v.([]interface{})
if ok {
for _, v := range vals {
ip, port, err := udp.ParseIPAndPort(fmt.Sprintf("%v", v))
if err != nil {
return nil, NewContextualError("Static host address could not be parsed", m{"vpnIp": vpnIp}, err)
}
lightHouse.AddStaticRemote(vpnIp, udp.NewAddr(ip, port))
}
} else {
ip, port, err := udp.ParseIPAndPort(fmt.Sprintf("%v", v))
if err != nil {
return nil, NewContextualError("Static host address could not be parsed", m{"vpnIp": vpnIp}, err)
}
lightHouse.AddStaticRemote(vpnIp, udp.NewAddr(ip, port))
}
}
err = lightHouse.ValidateLHStaticEntries()
if err != nil {
l.WithError(err).Error("Lighthouse unreachable")
lightHouse, err := NewLightHouseFromConfig(l, c, tunCidr, udpConns[0], punchy)
switch {
case errors.As(err, &util.ContextualError{}):
return nil, err
case err != nil:
return nil, util.NewContextualError("Failed to initialize lighthouse handler", nil, err)
}
var messageMetrics *MessageMetrics
@@ -345,10 +230,13 @@ func Main(c *config.C, configTest bool, buildVersion string, logger *logrus.Logg
messageMetrics = newMessageMetricsOnlyRecvError()
}
useRelays := c.GetBool("relay.use_relays", DefaultUseRelays) && !c.GetBool("relay.am_relay", false)
handshakeConfig := HandshakeConfig{
tryInterval: c.GetDuration("handshakes.try_interval", DefaultHandshakeTryInterval),
retries: c.GetInt("handshakes.retries", DefaultHandshakeRetries),
triggerBuffer: c.GetInt("handshakes.trigger_buffer", DefaultHandshakeTriggerBuffer),
useRelays: useRelays,
messageMetrics: messageMetrics,
}
@@ -390,6 +278,7 @@ func Main(c *config.C, configTest bool, buildVersion string, logger *logrus.Logg
version: buildVersion,
caPool: caPool,
disconnectInvalid: c.GetBool("pki.disconnect_invalid", false),
relayManager: NewRelayManager(ctx, l, hostMap, c),
ConntrackCacheTimeout: conntrackCacheTimeout,
l: l,
@@ -417,6 +306,8 @@ func Main(c *config.C, configTest bool, buildVersion string, logger *logrus.Logg
ifce.RegisterConfigChangeCallbacks(c)
ifce.reloadSendRecvError(c)
go handshakeManager.Run(ctx, ifce)
go lightHouse.LhUpdateWorker(ctx, ifce)
}
@@ -426,7 +317,7 @@ func Main(c *config.C, configTest bool, buildVersion string, logger *logrus.Logg
statsStart, err := startStats(l, c, buildVersion, configTest)
if err != nil {
return nil, NewContextualError("Failed to start stats emitter", nil, err)
return nil, util.NewContextualError("Failed to start stats emitter", nil, err)
}
if configTest {
@@ -436,11 +327,11 @@ func Main(c *config.C, configTest bool, buildVersion string, logger *logrus.Logg
//TODO: check if we _should_ be emitting stats
go ifce.emitStats(ctx, c.GetDuration("stats.interval", time.Second*10))
attachCommands(l, ssh, hostMap, handshakeManager.pendingHostMap, lightHouse, ifce)
attachCommands(l, c, ssh, hostMap, handshakeManager.pendingHostMap, lightHouse, ifce)
// Start DNS server last to allow using the nebula IP as lighthouse.dns.host
var dnsStart func()
if amLighthouse && serveDns {
if lightHouse.amLighthouse && serveDns {
l.Debugln("Starting dns server")
dnsStart = dnsMain(l, hostMap, c)
}


@@ -3,7 +3,7 @@ package nebula
/*
import (
proto "github.com/golang/protobuf/proto"
proto "google.golang.org/protobuf/proto"
)
func HandleMetaProto(p []byte) {


@@ -96,6 +96,34 @@ func (NebulaPing_MessageType) EnumDescriptor() ([]byte, []int) {
return fileDescriptor_2d65afa7693df5ef, []int{4, 0}
}
type NebulaControl_MessageType int32
const (
NebulaControl_None NebulaControl_MessageType = 0
NebulaControl_CreateRelayRequest NebulaControl_MessageType = 1
NebulaControl_CreateRelayResponse NebulaControl_MessageType = 2
)
var NebulaControl_MessageType_name = map[int32]string{
0: "None",
1: "CreateRelayRequest",
2: "CreateRelayResponse",
}
var NebulaControl_MessageType_value = map[string]int32{
"None": 0,
"CreateRelayRequest": 1,
"CreateRelayResponse": 2,
}
func (x NebulaControl_MessageType) String() string {
return proto.EnumName(NebulaControl_MessageType_name, int32(x))
}
func (NebulaControl_MessageType) EnumDescriptor() ([]byte, []int) {
return fileDescriptor_2d65afa7693df5ef, []int{7, 0}
}
type NebulaMeta struct {
Type NebulaMeta_MessageType `protobuf:"varint,1,opt,name=Type,proto3,enum=nebula.NebulaMeta_MessageType" json:"Type,omitempty"`
Details *NebulaMetaDetails `protobuf:"bytes,2,opt,name=Details,proto3" json:"Details,omitempty"`
@@ -152,6 +180,7 @@ type NebulaMetaDetails struct {
VpnIp uint32 `protobuf:"varint,1,opt,name=VpnIp,proto3" json:"VpnIp,omitempty"`
Ip4AndPorts []*Ip4AndPort `protobuf:"bytes,2,rep,name=Ip4AndPorts,proto3" json:"Ip4AndPorts,omitempty"`
Ip6AndPorts []*Ip6AndPort `protobuf:"bytes,4,rep,name=Ip6AndPorts,proto3" json:"Ip6AndPorts,omitempty"`
RelayVpnIp []uint32 `protobuf:"varint,5,rep,packed,name=RelayVpnIp,proto3" json:"RelayVpnIp,omitempty"`
Counter uint32 `protobuf:"varint,3,opt,name=counter,proto3" json:"counter,omitempty"`
}
@@ -209,6 +238,13 @@ func (m *NebulaMetaDetails) GetIp6AndPorts() []*Ip6AndPort {
return nil
}
func (m *NebulaMetaDetails) GetRelayVpnIp() []uint32 {
if m != nil {
return m.RelayVpnIp
}
return nil
}
func (m *NebulaMetaDetails) GetCounter() uint32 {
if m != nil {
return m.Counter
@@ -508,9 +544,86 @@ func (m *NebulaHandshakeDetails) GetTime() uint64 {
return 0
}
type NebulaControl struct {
Type NebulaControl_MessageType `protobuf:"varint,1,opt,name=Type,proto3,enum=nebula.NebulaControl_MessageType" json:"Type,omitempty"`
InitiatorRelayIndex uint32 `protobuf:"varint,2,opt,name=InitiatorRelayIndex,proto3" json:"InitiatorRelayIndex,omitempty"`
ResponderRelayIndex uint32 `protobuf:"varint,3,opt,name=ResponderRelayIndex,proto3" json:"ResponderRelayIndex,omitempty"`
RelayToIp uint32 `protobuf:"varint,4,opt,name=RelayToIp,proto3" json:"RelayToIp,omitempty"`
RelayFromIp uint32 `protobuf:"varint,5,opt,name=RelayFromIp,proto3" json:"RelayFromIp,omitempty"`
}
func (m *NebulaControl) Reset() { *m = NebulaControl{} }
func (m *NebulaControl) String() string { return proto.CompactTextString(m) }
func (*NebulaControl) ProtoMessage() {}
func (*NebulaControl) Descriptor() ([]byte, []int) {
return fileDescriptor_2d65afa7693df5ef, []int{7}
}
func (m *NebulaControl) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
}
func (m *NebulaControl) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
if deterministic {
return xxx_messageInfo_NebulaControl.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalToSizedBuffer(b)
if err != nil {
return nil, err
}
return b[:n], nil
}
}
func (m *NebulaControl) XXX_Merge(src proto.Message) {
xxx_messageInfo_NebulaControl.Merge(m, src)
}
func (m *NebulaControl) XXX_Size() int {
return m.Size()
}
func (m *NebulaControl) XXX_DiscardUnknown() {
xxx_messageInfo_NebulaControl.DiscardUnknown(m)
}
var xxx_messageInfo_NebulaControl proto.InternalMessageInfo
func (m *NebulaControl) GetType() NebulaControl_MessageType {
if m != nil {
return m.Type
}
return NebulaControl_None
}
func (m *NebulaControl) GetInitiatorRelayIndex() uint32 {
if m != nil {
return m.InitiatorRelayIndex
}
return 0
}
func (m *NebulaControl) GetResponderRelayIndex() uint32 {
if m != nil {
return m.ResponderRelayIndex
}
return 0
}
func (m *NebulaControl) GetRelayToIp() uint32 {
if m != nil {
return m.RelayToIp
}
return 0
}
func (m *NebulaControl) GetRelayFromIp() uint32 {
if m != nil {
return m.RelayFromIp
}
return 0
}
func init() {
proto.RegisterEnum("nebula.NebulaMeta_MessageType", NebulaMeta_MessageType_name, NebulaMeta_MessageType_value)
proto.RegisterEnum("nebula.NebulaPing_MessageType", NebulaPing_MessageType_name, NebulaPing_MessageType_value)
proto.RegisterEnum("nebula.NebulaControl_MessageType", NebulaControl_MessageType_name, NebulaControl_MessageType_value)
proto.RegisterType((*NebulaMeta)(nil), "nebula.NebulaMeta")
proto.RegisterType((*NebulaMetaDetails)(nil), "nebula.NebulaMetaDetails")
proto.RegisterType((*Ip4AndPort)(nil), "nebula.Ip4AndPort")
@@ -518,48 +631,57 @@ func init() {
proto.RegisterType((*NebulaPing)(nil), "nebula.NebulaPing")
proto.RegisterType((*NebulaHandshake)(nil), "nebula.NebulaHandshake")
proto.RegisterType((*NebulaHandshakeDetails)(nil), "nebula.NebulaHandshakeDetails")
proto.RegisterType((*NebulaControl)(nil), "nebula.NebulaControl")
}
func init() { proto.RegisterFile("nebula.proto", fileDescriptor_2d65afa7693df5ef) }
var fileDescriptor_2d65afa7693df5ef = []byte{
// 570 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x7c, 0x54, 0x41, 0x6f, 0xda, 0x4c,
0x10, 0x65, 0x8d, 0x21, 0xc9, 0x90, 0x10, 0x7f, 0xfb, 0xb5, 0x08, 0x7a, 0xb0, 0x22, 0x1f, 0x2a,
0x4e, 0xa4, 0x82, 0x08, 0xf5, 0xd8, 0x96, 0x1e, 0x40, 0x0a, 0x88, 0x5a, 0x69, 0x2b, 0xf5, 0x52,
0x2d, 0xf6, 0x16, 0xaf, 0x80, 0x5d, 0xd7, 0x5e, 0xaa, 0xf0, 0x2f, 0xfa, 0x33, 0x7a, 0xeb, 0xdf,
0xe8, 0xa1, 0x87, 0x1c, 0x7a, 0xe8, 0xb1, 0x82, 0x3f, 0x52, 0xed, 0xda, 0xd8, 0x04, 0xa2, 0xde,
0xe6, 0xcd, 0xbc, 0x37, 0x3b, 0x3c, 0x3f, 0x01, 0xa7, 0x9c, 0x4e, 0x96, 0x73, 0xd2, 0x0a, 0x23,
0x21, 0x05, 0x2e, 0x27, 0xc8, 0xf9, 0x69, 0x00, 0x8c, 0x74, 0x39, 0xa4, 0x92, 0xe0, 0x36, 0x98,
0x37, 0xab, 0x90, 0xd6, 0xd1, 0x05, 0x6a, 0x56, 0xdb, 0x76, 0x2b, 0xd5, 0xe4, 0x8c, 0xd6, 0x90,
0xc6, 0x31, 0x99, 0x52, 0xc5, 0x72, 0x35, 0x17, 0x77, 0xe0, 0xe8, 0x35, 0x95, 0x84, 0xcd, 0xe3,
0xba, 0x71, 0x81, 0x9a, 0x95, 0x76, 0xe3, 0x50, 0x96, 0x12, 0xdc, 0x2d, 0xd3, 0xf9, 0x85, 0xa0,
0xb2, 0xb3, 0x0a, 0x1f, 0x83, 0x39, 0x12, 0x9c, 0x5a, 0x05, 0x7c, 0x06, 0x27, 0x7d, 0x11, 0xcb,
0x37, 0x4b, 0x1a, 0xad, 0x2c, 0x84, 0x31, 0x54, 0x33, 0xe8, 0xd2, 0x70, 0xbe, 0xb2, 0x0c, 0xfc,
0x04, 0x6a, 0xaa, 0xf7, 0x36, 0xf4, 0x89, 0xa4, 0x23, 0x21, 0xd9, 0x27, 0xe6, 0x11, 0xc9, 0x04,
0xb7, 0x8a, 0xb8, 0x01, 0x8f, 0xd5, 0x6c, 0x28, 0xbe, 0x50, 0xff, 0xde, 0xc8, 0xdc, 0x8e, 0xc6,
0x4b, 0xee, 0x05, 0xf7, 0x46, 0x25, 0x5c, 0x05, 0x50, 0xa3, 0xf7, 0x81, 0x20, 0x0b, 0x66, 0x95,
0xf1, 0xff, 0x70, 0x9e, 0xe3, 0xe4, 0xd9, 0x23, 0x75, 0xd9, 0x98, 0xc8, 0xa0, 0x17, 0x50, 0x6f,
0x66, 0x1d, 0xab, 0xcb, 0x32, 0x98, 0x50, 0x4e, 0x9c, 0xef, 0x08, 0xfe, 0x3b, 0xf8, 0xd5, 0xf8,
0x11, 0x94, 0xde, 0x85, 0x7c, 0x10, 0x6a, 0x5b, 0xcf, 0xdc, 0x04, 0xe0, 0x2b, 0xa8, 0x0c, 0xc2,
0xab, 0x97, 0xdc, 0x1f, 0x8b, 0x48, 0x2a, 0xef, 0x8a, 0xcd, 0x4a, 0x1b, 0x6f, 0xbd, 0xcb, 0x47,
0xee, 0x2e, 0x2d, 0x51, 0x75, 0x33, 0x95, 0xb9, 0xaf, 0xea, 0xee, 0xa8, 0x32, 0x1a, 0xae, 0xc3,
0x91, 0x27, 0x96, 0x5c, 0xd2, 0xa8, 0x5e, 0xd4, 0x37, 0x6c, 0xa1, 0xf3, 0x0c, 0x20, 0x5f, 0x8f,
0xab, 0x60, 0x64, 0x67, 0x1a, 0x83, 0x10, 0x63, 0x30, 0x55, 0x5f, 0x7f, 0xd8, 0x33, 0x57, 0xd7,
0xce, 0x0b, 0xa5, 0xe8, 0xee, 0x28, 0xfa, 0x4c, 0x2b, 0x4c, 0xd7, 0xe8, 0x33, 0x85, 0xaf, 0x85,
0xe6, 0x9b, 0xae, 0x71, 0x2d, 0xb2, 0x0d, 0xc5, 0x9d, 0x0d, 0xb7, 0xdb, 0xcc, 0x8d, 0x19, 0x9f,
0xfe, 0x3b, 0x73, 0x8a, 0xf1, 0x40, 0xe6, 0x30, 0x98, 0x37, 0x6c, 0x41, 0xd3, 0x77, 0x74, 0xed,
0x38, 0x07, 0x89, 0x52, 0x62, 0xab, 0x80, 0x4f, 0xa0, 0x94, 0x7c, 0x1f, 0xe4, 0x7c, 0x84, 0xf3,
0x64, 0x6f, 0x9f, 0x70, 0x3f, 0x0e, 0xc8, 0x8c, 0xe2, 0xe7, 0x79, 0x7c, 0x91, 0x8e, 0xef, 0xde,
0x05, 0x19, 0x73, 0x3f, 0xc3, 0xea, 0x88, 0xfe, 0x82, 0x78, 0xfa, 0x88, 0x53, 0x57, 0xd7, 0xce,
0x37, 0x04, 0xb5, 0x87, 0x75, 0x8a, 0xde, 0xa3, 0x91, 0xd4, 0xaf, 0x9c, 0xba, 0xba, 0xc6, 0x4f,
0xa1, 0x3a, 0xe0, 0x4c, 0x32, 0x22, 0x45, 0x34, 0xe0, 0x3e, 0xbd, 0x4d, 0x9d, 0xde, 0xeb, 0x2a,
0x9e, 0x4b, 0xe3, 0x50, 0x70, 0x9f, 0xa6, 0xbc, 0xc4, 0xcf, 0xbd, 0x2e, 0xae, 0x41, 0xb9, 0x27,
0xc4, 0x8c, 0xd1, 0xba, 0xa9, 0x9d, 0x49, 0x51, 0xe6, 0x57, 0x29, 0xf7, 0xeb, 0x55, 0xe7, 0xc7,
0xda, 0x46, 0x77, 0x6b, 0x1b, 0xfd, 0x59, 0xdb, 0xe8, 0xeb, 0xc6, 0x2e, 0xdc, 0x6d, 0xec, 0xc2,
0xef, 0x8d, 0x5d, 0xf8, 0xd0, 0x98, 0x32, 0x19, 0x2c, 0x27, 0x2d, 0x4f, 0x2c, 0x2e, 0xe3, 0x39,
0xf1, 0x66, 0xc1, 0xe7, 0xcb, 0xc4, 0x93, 0x49, 0x59, 0xff, 0x7d, 0x74, 0xfe, 0x06, 0x00, 0x00,
0xff, 0xff, 0x20, 0x00, 0x2b, 0x46, 0x4e, 0x04, 0x00, 0x00,
// 696 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x7c, 0x54, 0xcd, 0x6e, 0xd3, 0x4a,
0x14, 0x8e, 0x1d, 0xe7, 0xef, 0xa4, 0x49, 0x7d, 0x4f, 0xef, 0xcd, 0x4d, 0xaf, 0xae, 0xac, 0xe0,
0x05, 0xca, 0x2a, 0xad, 0xd2, 0x52, 0xb1, 0x04, 0x82, 0x50, 0x52, 0xb5, 0x55, 0x18, 0x15, 0x90,
0xd8, 0xa0, 0x69, 0x32, 0xd4, 0x56, 0x12, 0x8f, 0x6b, 0x4f, 0x50, 0xf3, 0x16, 0x3c, 0x4c, 0x1f,
0x82, 0x05, 0x12, 0x5d, 0xb0, 0x60, 0x89, 0xda, 0x17, 0x41, 0x33, 0x76, 0x6c, 0x27, 0x0d, 0xec,
0xce, 0xcf, 0xf7, 0xcd, 0x7c, 0xe7, 0x9b, 0x63, 0xc3, 0x96, 0xc7, 0x2e, 0xe6, 0x53, 0xda, 0xf1,
0x03, 0x2e, 0x38, 0x16, 0xa3, 0xcc, 0xfe, 0xaa, 0x03, 0x9c, 0xa9, 0xf0, 0x94, 0x09, 0x8a, 0x5d,
0x30, 0xce, 0x17, 0x3e, 0x6b, 0x6a, 0x2d, 0xad, 0x5d, 0xef, 0x5a, 0x9d, 0x98, 0x93, 0x22, 0x3a,
0xa7, 0x2c, 0x0c, 0xe9, 0x25, 0x93, 0x28, 0xa2, 0xb0, 0x78, 0x00, 0xa5, 0x97, 0x4c, 0x50, 0x77,
0x1a, 0x36, 0xf5, 0x96, 0xd6, 0xae, 0x76, 0x77, 0x1f, 0xd2, 0x62, 0x00, 0x59, 0x22, 0xed, 0xef,
0x1a, 0x54, 0x33, 0x47, 0x61, 0x19, 0x8c, 0x33, 0xee, 0x31, 0x33, 0x87, 0x35, 0xa8, 0xf4, 0x79,
0x28, 0x5e, 0xcf, 0x59, 0xb0, 0x30, 0x35, 0x44, 0xa8, 0x27, 0x29, 0x61, 0xfe, 0x74, 0x61, 0xea,
0xf8, 0x1f, 0x34, 0x64, 0xed, 0x8d, 0x3f, 0xa6, 0x82, 0x9d, 0x71, 0xe1, 0x7e, 0x74, 0x47, 0x54,
0xb8, 0xdc, 0x33, 0xf3, 0xb8, 0x0b, 0xff, 0xc8, 0xde, 0x29, 0xff, 0xc4, 0xc6, 0x2b, 0x2d, 0x63,
0xd9, 0x1a, 0xce, 0xbd, 0x91, 0xb3, 0xd2, 0x2a, 0x60, 0x1d, 0x40, 0xb6, 0xde, 0x39, 0x9c, 0xce,
0x5c, 0xb3, 0x88, 0x3b, 0xb0, 0x9d, 0xe6, 0xd1, 0xb5, 0x25, 0xa9, 0x6c, 0x48, 0x85, 0xd3, 0x73,
0xd8, 0x68, 0x62, 0x96, 0xa5, 0xb2, 0x24, 0x8d, 0x20, 0x15, 0xfb, 0x9b, 0x06, 0x7f, 0x3d, 0x98,
0x1a, 0xff, 0x86, 0xc2, 0x5b, 0xdf, 0x1b, 0xf8, 0xca, 0xd6, 0x1a, 0x89, 0x12, 0x3c, 0x84, 0xea,
0xc0, 0x3f, 0x7c, 0xee, 0x8d, 0x87, 0x3c, 0x10, 0xd2, 0xbb, 0x7c, 0xbb, 0xda, 0xc5, 0xa5, 0x77,
0x69, 0x8b, 0x64, 0x61, 0x11, 0xeb, 0x28, 0x61, 0x19, 0xeb, 0xac, 0xa3, 0x0c, 0x2b, 0x81, 0xa1,
0x05, 0x40, 0xd8, 0x94, 0x2e, 0x22, 0x19, 0x85, 0x56, 0xbe, 0x5d, 0x23, 0x99, 0x0a, 0x36, 0xa1,
0x34, 0xe2, 0x73, 0x4f, 0xb0, 0xa0, 0x99, 0x57, 0x1a, 0x97, 0xa9, 0xbd, 0x0f, 0x90, 0x5e, 0x8f,
0x75, 0xd0, 0x93, 0x31, 0xf4, 0x81, 0x8f, 0x08, 0x86, 0xac, 0xab, 0x87, 0xaf, 0x11, 0x15, 0xdb,
0xcf, 0x24, 0xe3, 0x28, 0xc3, 0xe8, 0xbb, 0x8a, 0x61, 0x10, 0xbd, 0xef, 0xca, 0xfc, 0x84, 0x2b,
0xbc, 0x41, 0xf4, 0x13, 0x9e, 0x9c, 0x90, 0xcf, 0x9c, 0x70, 0xbd, 0xdc, 0xc9, 0xa1, 0xeb, 0x5d,
0xfe, 0x79, 0x27, 0x25, 0x62, 0xc3, 0x4e, 0x22, 0x18, 0xe7, 0xee, 0x8c, 0xc5, 0xf7, 0xa8, 0xd8,
0xb6, 0x1f, 0x6c, 0x9c, 0x24, 0x9b, 0x39, 0xac, 0x40, 0x21, 0x7a, 0x3f, 0xcd, 0xfe, 0x00, 0xdb,
0xd1, 0xb9, 0x7d, 0xea, 0x8d, 0x43, 0x87, 0x4e, 0x18, 0x3e, 0x4d, 0xd7, 0x5b, 0x53, 0xeb, 0xbd,
0xa6, 0x20, 0x41, 0xae, 0xef, 0xb8, 0x14, 0xd1, 0x9f, 0xd1, 0x91, 0x12, 0xb1, 0x45, 0x54, 0x6c,
0xdf, 0x68, 0xd0, 0xd8, 0xcc, 0x93, 0xf0, 0x1e, 0x0b, 0x84, 0xba, 0x65, 0x8b, 0xa8, 0x18, 0x1f,
0x43, 0x7d, 0xe0, 0xb9, 0xc2, 0xa5, 0x82, 0x07, 0x03, 0x6f, 0xcc, 0xae, 0x63, 0xa7, 0xd7, 0xaa,
0x12, 0x47, 0x58, 0xe8, 0x73, 0x6f, 0xcc, 0x62, 0x5c, 0xe4, 0xe7, 0x5a, 0x15, 0x1b, 0x50, 0xec,
0x71, 0x3e, 0x71, 0x59, 0xd3, 0x50, 0xce, 0xc4, 0x59, 0xe2, 0x57, 0x21, 0xf5, 0xeb, 0xd8, 0x28,
0x17, 0xcd, 0xd2, 0xb1, 0x51, 0x2e, 0x99, 0x65, 0xfb, 0x46, 0x87, 0x5a, 0x24, 0xbb, 0xc7, 0x3d,
0x11, 0xf0, 0x29, 0x3e, 0x59, 0x79, 0x95, 0x47, 0xab, 0x9e, 0xc4, 0xa0, 0x0d, 0x0f, 0xb3, 0x0f,
0x3b, 0x89, 0x74, 0xb5, 0x7f, 0xd9, 0xa9, 0x36, 0xb5, 0x24, 0x23, 0x19, 0x22, 0xc3, 0x88, 0xe6,
0xdb, 0xd4, 0xc2, 0xff, 0xa1, 0xa2, 0xb2, 0x73, 0x3e, 0xf0, 0xd5, 0x9c, 0x35, 0x92, 0x16, 0xb0,
0x05, 0x55, 0x95, 0xbc, 0x0a, 0xf8, 0x4c, 0x7d, 0x0b, 0xb2, 0x9f, 0x2d, 0xd9, 0xfd, 0xdf, 0xfd,
0x9a, 0x1a, 0x80, 0xbd, 0x80, 0x51, 0xc1, 0x14, 0x9a, 0xb0, 0xab, 0x39, 0x0b, 0x85, 0xa9, 0xe1,
0xbf, 0xb0, 0xb3, 0x52, 0x97, 0x92, 0x42, 0x66, 0xea, 0x2f, 0x0e, 0xbe, 0xdc, 0x59, 0xda, 0xed,
0x9d, 0xa5, 0xfd, 0xbc, 0xb3, 0xb4, 0xcf, 0xf7, 0x56, 0xee, 0xf6, 0xde, 0xca, 0xfd, 0xb8, 0xb7,
0x72, 0xef, 0x77, 0x2f, 0x5d, 0xe1, 0xcc, 0x2f, 0x3a, 0x23, 0x3e, 0xdb, 0x0b, 0xa7, 0x74, 0x34,
0x71, 0xae, 0xf6, 0x22, 0x0b, 0x2f, 0x8a, 0xea, 0x0f, 0x7d, 0xf0, 0x2b, 0x00, 0x00, 0xff, 0xff,
0xcd, 0xd7, 0xbe, 0xd5, 0xb1, 0x05, 0x00, 0x00,
}
func (m *NebulaMeta) Marshal() (dAtA []byte, err error) {
@@ -622,6 +744,24 @@ func (m *NebulaMetaDetails) MarshalToSizedBuffer(dAtA []byte) (int, error) {
_ = i
var l int
_ = l
if len(m.RelayVpnIp) > 0 {
dAtA3 := make([]byte, len(m.RelayVpnIp)*10)
var j2 int
for _, num := range m.RelayVpnIp {
for num >= 1<<7 {
dAtA3[j2] = uint8(uint64(num)&0x7f | 0x80)
num >>= 7
j2++
}
dAtA3[j2] = uint8(num)
j2++
}
i -= j2
copy(dAtA[i:], dAtA3[:j2])
i = encodeVarintNebula(dAtA, i, uint64(j2))
i--
dAtA[i] = 0x2a
}
if len(m.Ip6AndPorts) > 0 {
for iNdEx := len(m.Ip6AndPorts) - 1; iNdEx >= 0; iNdEx-- {
{
@@ -859,6 +999,54 @@ func (m *NebulaHandshakeDetails) MarshalToSizedBuffer(dAtA []byte) (int, error)
return len(dAtA) - i, nil
}
func (m *NebulaControl) Marshal() (dAtA []byte, err error) {
size := m.Size()
dAtA = make([]byte, size)
n, err := m.MarshalToSizedBuffer(dAtA[:size])
if err != nil {
return nil, err
}
return dAtA[:n], nil
}
func (m *NebulaControl) MarshalTo(dAtA []byte) (int, error) {
size := m.Size()
return m.MarshalToSizedBuffer(dAtA[:size])
}
func (m *NebulaControl) MarshalToSizedBuffer(dAtA []byte) (int, error) {
i := len(dAtA)
_ = i
var l int
_ = l
if m.RelayFromIp != 0 {
i = encodeVarintNebula(dAtA, i, uint64(m.RelayFromIp))
i--
dAtA[i] = 0x28
}
if m.RelayToIp != 0 {
i = encodeVarintNebula(dAtA, i, uint64(m.RelayToIp))
i--
dAtA[i] = 0x20
}
if m.ResponderRelayIndex != 0 {
i = encodeVarintNebula(dAtA, i, uint64(m.ResponderRelayIndex))
i--
dAtA[i] = 0x18
}
if m.InitiatorRelayIndex != 0 {
i = encodeVarintNebula(dAtA, i, uint64(m.InitiatorRelayIndex))
i--
dAtA[i] = 0x10
}
if m.Type != 0 {
i = encodeVarintNebula(dAtA, i, uint64(m.Type))
i--
dAtA[i] = 0x8
}
return len(dAtA) - i, nil
}
func encodeVarintNebula(dAtA []byte, offset int, v uint64) int {
offset -= sovNebula(v)
base := offset
@@ -910,6 +1098,13 @@ func (m *NebulaMetaDetails) Size() (n int) {
n += 1 + l + sovNebula(uint64(l))
}
}
if len(m.RelayVpnIp) > 0 {
l = 0
for _, e := range m.RelayVpnIp {
l += sovNebula(uint64(e))
}
n += 1 + sovNebula(uint64(l)) + l
}
return n
}
@@ -1003,6 +1198,30 @@ func (m *NebulaHandshakeDetails) Size() (n int) {
return n
}
func (m *NebulaControl) Size() (n int) {
if m == nil {
return 0
}
var l int
_ = l
if m.Type != 0 {
n += 1 + sovNebula(uint64(m.Type))
}
if m.InitiatorRelayIndex != 0 {
n += 1 + sovNebula(uint64(m.InitiatorRelayIndex))
}
if m.ResponderRelayIndex != 0 {
n += 1 + sovNebula(uint64(m.ResponderRelayIndex))
}
if m.RelayToIp != 0 {
n += 1 + sovNebula(uint64(m.RelayToIp))
}
if m.RelayFromIp != 0 {
n += 1 + sovNebula(uint64(m.RelayFromIp))
}
return n
}
func sovNebula(x uint64) (n int) {
return (math_bits.Len64(x|1) + 6) / 7
}
@@ -1249,6 +1468,82 @@ func (m *NebulaMetaDetails) Unmarshal(dAtA []byte) error {
return err
}
iNdEx = postIndex
case 5:
if wireType == 0 {
var v uint32
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowNebula
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
v |= uint32(b&0x7F) << shift
if b < 0x80 {
break
}
}
m.RelayVpnIp = append(m.RelayVpnIp, v)
} else if wireType == 2 {
var packedLen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowNebula
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
packedLen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
if packedLen < 0 {
return ErrInvalidLengthNebula
}
postIndex := iNdEx + packedLen
if postIndex < 0 {
return ErrInvalidLengthNebula
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
var elementCount int
var count int
for _, integer := range dAtA[iNdEx:postIndex] {
if integer < 128 {
count++
}
}
elementCount = count
if elementCount != 0 && len(m.RelayVpnIp) == 0 {
m.RelayVpnIp = make([]uint32, 0, elementCount)
}
for iNdEx < postIndex {
var v uint32
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowNebula
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
v |= uint32(b&0x7F) << shift
if b < 0x80 {
break
}
}
m.RelayVpnIp = append(m.RelayVpnIp, v)
}
} else {
return fmt.Errorf("proto: wrong wireType = %d for field RelayVpnIp", wireType)
}
default:
iNdEx = preIndex
skippy, err := skipNebula(dAtA[iNdEx:])
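Aside: a hedged, standalone sketch of the base-128 varint wire format that the packed RelayVpnIp encode and decode above rely on; this helper is illustrative only and not part of the generated code:
// appendVarint encodes v with protobuf's base-128 varint scheme: seven
// payload bits per byte, least-significant group first, and the high bit
// set on every byte except the last.
func appendVarint(dst []byte, v uint64) []byte {
	for v >= 0x80 {
		dst = append(dst, byte(v)|0x80)
		v >>= 7
	}
	return append(dst, byte(v))
}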
@@ -1833,6 +2128,151 @@ func (m *NebulaHandshakeDetails) Unmarshal(dAtA []byte) error {
}
return nil
}
func (m *NebulaControl) Unmarshal(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
preIndex := iNdEx
var wire uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowNebula
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
wire |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
return fmt.Errorf("proto: NebulaControl: wiretype end group for non-group")
}
if fieldNum <= 0 {
return fmt.Errorf("proto: NebulaControl: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
case 1:
if wireType != 0 {
return fmt.Errorf("proto: wrong wireType = %d for field Type", wireType)
}
m.Type = 0
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowNebula
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
m.Type |= NebulaControl_MessageType(b&0x7F) << shift
if b < 0x80 {
break
}
}
case 2:
if wireType != 0 {
return fmt.Errorf("proto: wrong wireType = %d for field InitiatorRelayIndex", wireType)
}
m.InitiatorRelayIndex = 0
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowNebula
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
m.InitiatorRelayIndex |= uint32(b&0x7F) << shift
if b < 0x80 {
break
}
}
case 3:
if wireType != 0 {
return fmt.Errorf("proto: wrong wireType = %d for field ResponderRelayIndex", wireType)
}
m.ResponderRelayIndex = 0
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowNebula
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
m.ResponderRelayIndex |= uint32(b&0x7F) << shift
if b < 0x80 {
break
}
}
case 4:
if wireType != 0 {
return fmt.Errorf("proto: wrong wireType = %d for field RelayToIp", wireType)
}
m.RelayToIp = 0
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowNebula
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
m.RelayToIp |= uint32(b&0x7F) << shift
if b < 0x80 {
break
}
}
case 5:
if wireType != 0 {
return fmt.Errorf("proto: wrong wireType = %d for field RelayFromIp", wireType)
}
m.RelayFromIp = 0
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowNebula
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
m.RelayFromIp |= uint32(b&0x7F) << shift
if b < 0x80 {
break
}
}
default:
iNdEx = preIndex
skippy, err := skipNebula(dAtA[iNdEx:])
if err != nil {
return err
}
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthNebula
}
if (iNdEx + skippy) > l {
return io.ErrUnexpectedEOF
}
iNdEx += skippy
}
}
if iNdEx > l {
return io.ErrUnexpectedEOF
}
return nil
}
func skipNebula(dAtA []byte) (n int, err error) {
l := len(dAtA)
iNdEx := 0


@@ -15,7 +15,6 @@ message NebulaMeta {
HostWhoamiReply = 7;
PathCheck = 8;
PathCheckReply = 9;
}
MessageType Type = 1;
@@ -26,6 +25,7 @@ message NebulaMetaDetails {
uint32 VpnIp = 1;
repeated Ip4AndPort Ip4AndPorts = 2;
repeated Ip6AndPort Ip6AndPorts = 4;
repeated uint32 RelayVpnIp = 5;
uint32 counter = 3;
}
@@ -61,5 +61,20 @@ message NebulaHandshakeDetails {
uint32 ResponderIndex = 3;
uint64 Cookie = 4;
uint64 Time = 5;
// reserved for WIP multiport
reserved 6, 7;
}
message NebulaControl {
enum MessageType {
None = 0;
CreateRelayRequest = 1;
CreateRelayResponse = 2;
}
MessageType Type = 1;
uint32 InitiatorRelayIndex = 2;
uint32 ResponderRelayIndex = 3;
uint32 RelayToIp = 4;
uint32 RelayFromIp = 5;
}
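As a hedged illustration only, building and marshaling one of these control messages with the generated Go types added elsewhere in this change; the field values are invented:
msg := &NebulaControl{
	Type:                NebulaControl_CreateRelayRequest,
	InitiatorRelayIndex: 12345,
	RelayFromIp:         uint32(iputil.Ip2VpnIp(net.ParseIP("10.128.0.2"))),
	RelayToIp:           uint32(iputil.Ip2VpnIp(net.ParseIP("10.128.0.3"))),
}
b, err := msg.Marshal() // gogo fast-path marshal, mirroring the generated code above
if err != nil {
	panic(err)
}
_ = b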


@@ -25,6 +25,14 @@ func NewNebulaCipherState(s *noise.CipherState) *NebulaCipherState {
}
// EncryptDanger encrypts and authenticates a given payload.
//
// - out is a destination slice to hold the output of the EncryptDanger operation.
// - ad is additional data, which will be authenticated and appended to out, but not encrypted.
// - plaintext is encrypted, authenticated and appended to out.
// - n is a nonce value which must never be re-used with this key.
// - nb is a buffer used for temporary storage in the implementation of this call, which should
// be re-used by callers to minimize garbage collection.
func (s *NebulaCipherState) EncryptDanger(out, ad, plaintext []byte, n uint64, nb []byte) ([]byte, error) {
if s != nil {
// TODO: Is this okay now that we have made messageCounter atomic?
@@ -58,3 +66,10 @@ func (s *NebulaCipherState) DecryptDanger(out, ad, ciphertext []byte, n uint64,
return []byte{}, nil
}
}
func (s *NebulaCipherState) Overhead() int {
if s != nil {
return s.c.(cipher.AEAD).Overhead()
}
return 0
}
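A hedged usage sketch of the cipher state API above (not part of this change set), assuming mtu is the package-level constant used elsewhere:
func encryptExample(cs *NebulaCipherState, ad, plaintext []byte, counter uint64) ([]byte, error) {
	nb := make([]byte, 12)      // scratch buffer, reused by callers to limit garbage
	out := make([]byte, 0, mtu) // destination; ad is appended unencrypted first
	ct, err := cs.EncryptDanger(out, ad, plaintext, counter, nb)
	if err != nil {
		return nil, err
	}
	// ct is ad || encrypted payload || AEAD tag; Overhead() reports the tag size,
	// which the relay path in outside.go uses to split payload from tag.
	return ct, nil
}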


@@ -7,7 +7,6 @@ import (
"time"
"github.com/flynn/noise"
"github.com/golang/protobuf/proto"
"github.com/sirupsen/logrus"
"github.com/slackhq/nebula/cert"
"github.com/slackhq/nebula/firewall"
@@ -15,13 +14,14 @@ import (
"github.com/slackhq/nebula/iputil"
"github.com/slackhq/nebula/udp"
"golang.org/x/net/ipv4"
"google.golang.org/protobuf/proto"
)
const (
minFwPacketLen = 4
)
func (f *Interface) readOutsidePackets(addr *udp.Addr, out []byte, packet []byte, h *header.H, fwPacket *firewall.Packet, lhf udp.LightHouseHandlerFunc, nb []byte, q int, localCache firewall.ConntrackCache) {
func (f *Interface) readOutsidePackets(addr *udp.Addr, via interface{}, out []byte, packet []byte, h *header.H, fwPacket *firewall.Packet, lhf udp.LightHouseHandlerFunc, nb []byte, q int, localCache firewall.ConntrackCache) {
err := h.Parse(packet)
if err != nil {
// TODO: best if we return this and let caller log
@@ -34,24 +34,107 @@ func (f *Interface) readOutsidePackets(addr *udp.Addr, out []byte, packet []byte
}
//l.Error("in packet ", header, packet[HeaderLen:])
if addr != nil {
if ip4 := addr.IP.To4(); ip4 != nil {
if ipMaskContains(f.lightHouse.myVpnIp, f.lightHouse.myVpnZeros, iputil.VpnIp(binary.BigEndian.Uint32(ip4))) {
if f.l.Level >= logrus.DebugLevel {
f.l.WithField("udpAddr", addr).Debug("Refusing to process double encrypted packet")
}
return
}
}
}
var hostinfo *HostInfo
// verify if we've seen this index before, otherwise respond to the handshake initiation
hostinfo, err := f.hostMap.QueryIndex(h.RemoteIndex)
if h.Type == header.Message && h.Subtype == header.MessageRelay {
hostinfo, _ = f.hostMap.QueryRelayIndex(h.RemoteIndex)
} else {
hostinfo, _ = f.hostMap.QueryIndex(h.RemoteIndex)
}
var ci *ConnectionState
if err == nil {
if hostinfo != nil {
ci = hostinfo.ConnectionState
}
switch h.Type {
case header.Message:
// TODO handleEncrypted sends directly to addr on error. Handle this in the tunneling case.
if !f.handleEncrypted(ci, addr, h) {
return
}
f.decryptToTun(hostinfo, h.MessageCounter, out, packet, fwPacket, nb, q, localCache)
switch h.Subtype {
case header.MessageNone:
f.decryptToTun(hostinfo, h.MessageCounter, out, packet, fwPacket, nb, q, localCache)
case header.MessageRelay:
// The entire body is sent as AD, not encrypted.
// The packet consists of a 16-byte parsed Nebula header, Associated Data-protected payload, and a trailing 16-byte AEAD signature value.
// The packet is guaranteed to be at least 16 bytes at this point, because it got past the h.Parse() call above. If it's
// otherwise malformed (meaning there is no trailing 16-byte AEAD value), then this will result in at worst a 0-length slice
// which will gracefully fail in the DecryptDanger call.
signedPayload := packet[:len(packet)-hostinfo.ConnectionState.dKey.Overhead()]
signatureValue := packet[len(packet)-hostinfo.ConnectionState.dKey.Overhead():]
out, err = hostinfo.ConnectionState.dKey.DecryptDanger(out, signedPayload, signatureValue, h.MessageCounter, nb)
if err != nil {
return
}
// Successfully validated the thing. Get rid of the Relay header.
signedPayload = signedPayload[header.Len:]
// Pull the Roaming parts up here, and return in all call paths.
f.handleHostRoaming(hostinfo, addr)
f.connectionManager.In(hostinfo.vpnIp)
// Fallthrough to the bottom to record incoming traffic
relay, ok := hostinfo.relayState.QueryRelayForByIdx(h.RemoteIndex)
if !ok {
// The only way this happens is if hostmap has an index to the correct HostInfo, but the HostInfo is missing
// its internal mapping. This shouldn't happen!
hostinfo.logger(f.l).WithField("hostinfo", hostinfo.vpnIp).WithField("remoteIndex", h.RemoteIndex).Errorf("HostInfo missing remote index")
// Delete my local index from the hostmap
f.hostMap.DeleteRelayIdx(h.RemoteIndex)
// When the peer doesn't receive any return traffic, its connection_manager will eventually clean up
// the broken relay when it cleans up the associated HostInfo object.
return
}
switch relay.Type {
case TerminalType:
// If I am the target of this relay, process the unwrapped packet
// From this recursive point, all these variables are 'burned'. We shouldn't rely on them again.
f.readOutsidePackets(nil, &ViaSender{relayHI: hostinfo, remoteIdx: relay.RemoteIndex, relay: relay}, out[:0], signedPayload, h, fwPacket, lhf, nb, q, localCache)
return
case ForwardingType:
// Find the target HostInfo relay object
targetHI, err := f.hostMap.QueryVpnIp(relay.PeerIp)
if err != nil {
hostinfo.logger(f.l).WithField("peerIp", relay.PeerIp).WithError(err).Info("Failed to find target host info by ip")
return
}
// find the target Relay info object
targetRelay, ok := targetHI.relayState.QueryRelayForByIp(hostinfo.vpnIp)
if !ok {
hostinfo.logger(f.l).WithField("peerIp", relay.PeerIp).Info("Failed to find relay in hostinfo")
return
}
// If that relay is Established, forward the payload through it
if targetRelay.State == Established {
switch targetRelay.Type {
case ForwardingType:
// Forward this packet through the relay tunnel
// Find the target HostInfo
f.SendVia(targetHI, targetRelay, signedPayload, nb, out, false)
return
case TerminalType:
hostinfo.logger(f.l).Error("Unexpected Relay Type of Terminal")
}
} else {
hostinfo.logger(f.l).WithField("targetRelayState", targetRelay.State).Info("Unexpected target relay state")
return
}
}
}
case header.LightHouse:
f.messageMetrics.Rx(h.Type, h.Subtype, 1)
@@ -95,7 +178,7 @@ func (f *Interface) readOutsidePackets(addr *udp.Addr, out []byte, packet []byte
// This testRequest might be from TryPromoteBest, so we should roam
// to the new IP address before responding
f.handleHostRoaming(hostinfo, addr)
f.send(header.Test, header.TestReply, ci, hostinfo, hostinfo.remote, d, nb, out)
f.send(header.Test, header.TestReply, ci, hostinfo, d, nb, out)
}
// Fallthrough to the bottom to record incoming traffic
@@ -105,7 +188,7 @@ func (f *Interface) readOutsidePackets(addr *udp.Addr, out []byte, packet []byte
case header.Handshake:
f.messageMetrics.Rx(h.Type, h.Subtype, 1)
HandleIncomingHandshake(f, addr, packet, h, hostinfo)
HandleIncomingHandshake(f, addr, via, packet, h, hostinfo)
return
case header.RecvError:
@@ -122,9 +205,30 @@ func (f *Interface) readOutsidePackets(addr *udp.Addr, out []byte, packet []byte
hostinfo.logger(f.l).WithField("udpAddr", addr).
Info("Close tunnel received, tearing down.")
f.closeTunnel(hostinfo, false)
f.closeTunnel(hostinfo)
return
case header.Control:
if !f.handleEncrypted(ci, addr, h) {
return
}
d, err := f.decrypt(hostinfo, h.MessageCounter, out, packet, h, nb)
if err != nil {
hostinfo.logger(f.l).WithError(err).WithField("udpAddr", addr).
WithField("packet", packet).
Error("Failed to decrypt Control packet")
return
}
m := &NebulaControl{}
err = m.Unmarshal(d)
if err != nil {
hostinfo.logger(f.l).WithError(err).Error("Failed to unmarshal control message")
break
}
f.relayManager.HandleControlMsg(hostinfo, m, f)
default:
f.messageMetrics.Rx(h.Type, h.Subtype, 1)
hostinfo.logger(f.l).Debugf("Unexpected packet received from %s", addr)
@@ -137,27 +241,23 @@ func (f *Interface) readOutsidePackets(addr *udp.Addr, out []byte, packet []byte
}
// closeTunnel closes a tunnel locally; it does not send a closeTunnel packet to the remote
func (f *Interface) closeTunnel(hostInfo *HostInfo, hasHostMapLock bool) {
func (f *Interface) closeTunnel(hostInfo *HostInfo) {
//TODO: this would be better as a single function in ConnectionManager that handled locks appropriately
f.connectionManager.ClearIP(hostInfo.vpnIp)
f.connectionManager.ClearPendingDeletion(hostInfo.vpnIp)
f.lightHouse.DeleteVpnIp(hostInfo.vpnIp)
if hasHostMapLock {
f.hostMap.unlockedDeleteHostInfo(hostInfo)
} else {
f.hostMap.DeleteHostInfo(hostInfo)
}
f.hostMap.DeleteHostInfo(hostInfo)
}
// sendCloseTunnel is a helper function to send a proper close tunnel packet to a remote
func (f *Interface) sendCloseTunnel(h *HostInfo) {
f.send(header.CloseTunnel, 0, h.ConnectionState, h, h.remote, []byte{}, make([]byte, 12, 12), make([]byte, mtu))
f.send(header.CloseTunnel, 0, h.ConnectionState, h, []byte{}, make([]byte, 12, 12), make([]byte, mtu))
}
func (f *Interface) handleHostRoaming(hostinfo *HostInfo, addr *udp.Addr) {
if !hostinfo.remote.Equals(addr) {
if !f.lightHouse.remoteAllowList.Allow(hostinfo.vpnIp, addr.IP) {
if addr != nil && !hostinfo.remote.Equals(addr) {
if !f.lightHouse.GetRemoteAllowList().Allow(hostinfo.vpnIp, addr.IP) {
hostinfo.logger(f.l).WithField("newAddr", addr).Debug("lighthouse.remote_allow_list denied roaming")
return
}
@@ -172,8 +272,7 @@ func (f *Interface) handleHostRoaming(hostinfo *HostInfo, addr *udp.Addr) {
hostinfo.logger(f.l).WithField("udpAddr", hostinfo.remote).WithField("newAddr", addr).
Info("Host roamed to new udp ip/port.")
hostinfo.lastRoam = time.Now()
remoteCopy := *hostinfo.remote
hostinfo.lastRoamRemote = &remoteCopy
hostinfo.lastRoamRemote = hostinfo.remote
hostinfo.SetRemote(addr)
}
@@ -183,8 +282,12 @@ func (f *Interface) handleEncrypted(ci *ConnectionState, addr *udp.Addr, h *head
// If connectionstate exists and the replay protector allows, process packet
// Else, send recv errors for 300 seconds after a restart to allow fast reconnection.
if ci == nil || !ci.window.Check(f.l, h.MessageCounter) {
f.sendRecvError(addr, h.RemoteIndex)
return false
if addr != nil {
f.maybeSendRecvError(addr, h.RemoteIndex)
return false
} else {
return false
}
}
return true
@@ -309,6 +412,12 @@ func (f *Interface) decryptToTun(hostinfo *HostInfo, messageCounter uint64, out
}
}
func (f *Interface) maybeSendRecvError(endpoint *udp.Addr, index uint32) {
if f.sendRecvErrorConfig.ShouldSendRecvError(endpoint.IP) {
f.sendRecvError(endpoint, index)
}
}
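The sendRecvErrorConfig implementation is not part of this hunk. A minimal sketch of what ShouldSendRecvError has to decide for an always/never/private style policy might look like the following; the type name, constants, and the private-address check are assumptions for illustration, not code from this change.

type sendRecvErrorConfig uint8

const (
    sendRecvErrorAlways sendRecvErrorConfig = iota
    sendRecvErrorNever
    sendRecvErrorPrivate
)

// ShouldSendRecvError reports whether a recv_error reply may be sent to this endpoint.
func (s sendRecvErrorConfig) ShouldSendRecvError(ip net.IP) bool {
    switch s {
    case sendRecvErrorPrivate:
        // Only reply to remotes on private, loopback, or link-local networks.
        return ip.IsPrivate() || ip.IsLoopback() || ip.IsLinkLocalUnicast()
    case sendRecvErrorNever:
        return false
    default: // sendRecvErrorAlways
        return true
    }
}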
func (f *Interface) sendRecvError(endpoint *udp.Addr, index uint32) {
f.messageMetrics.Tx(header.RecvError, 0, 1)
@@ -349,12 +458,9 @@ func (f *Interface) handleRecvError(addr *udp.Addr, h *header.H) {
return
}
// We delete this host from the main hostmap
f.hostMap.DeleteHostInfo(hostinfo)
// We also delete it from pending to allow for
// fast reconnect. We must null the connectionstate
// or a counter reuse may happen
hostinfo.ConnectionState = nil
f.closeTunnel(hostinfo)
// We also delete it from pending hostmap to allow for
// fast reconnect.
f.handshakeManager.DeleteHostInfo(hostinfo)
}

overlay/device.go (new file, 17 lines)

@@ -0,0 +1,17 @@
package overlay
import (
"io"
"net"
"github.com/slackhq/nebula/iputil"
)
type Device interface {
io.ReadWriteCloser
Activate() error
Cidr() *net.IPNet
Name() string
RouteFor(iputil.VpnIp) iputil.VpnIp
NewMultiQueueReader() (io.ReadWriteCloser, error)
}

View File

@@ -1,29 +1,45 @@
package nebula
package overlay
import (
"fmt"
"math"
"net"
"runtime"
"strconv"
"github.com/sirupsen/logrus"
"github.com/slackhq/nebula/cidr"
"github.com/slackhq/nebula/config"
"github.com/slackhq/nebula/iputil"
)
const DEFAULT_MTU = 1300
type route struct {
mtu int
metric int
route *net.IPNet
via *net.IP
type Route struct {
MTU int
Metric int
Cidr *net.IPNet
Via *iputil.VpnIp
}
func parseRoutes(c *config.C, network *net.IPNet) ([]route, error) {
func makeRouteTree(l *logrus.Logger, routes []Route, allowMTU bool) (*cidr.Tree4, error) {
routeTree := cidr.NewTree4()
for _, r := range routes {
if !allowMTU && r.MTU > 0 {
l.WithField("route", r).Warnf("route MTU is not supported in %s", runtime.GOOS)
}
if r.Via != nil {
routeTree.AddCIDR(r.Cidr, *r.Via)
}
}
return routeTree, nil
}
func parseRoutes(c *config.C, network *net.IPNet) ([]Route, error) {
var err error
r := c.Get("tun.routes")
if r == nil {
return []route{}, nil
return []Route{}, nil
}
rawRoutes, ok := r.([]interface{})
@@ -32,10 +48,10 @@ func parseRoutes(c *config.C, network *net.IPNet) ([]route, error) {
}
if len(rawRoutes) < 1 {
return []route{}, nil
return []Route{}, nil
}
routes := make([]route, len(rawRoutes))
routes := make([]Route, len(rawRoutes))
for i, r := range rawRoutes {
m, ok := r.(map[interface{}]interface{})
if !ok {
@@ -64,20 +80,20 @@ func parseRoutes(c *config.C, network *net.IPNet) ([]route, error) {
return nil, fmt.Errorf("entry %v.route in tun.routes is not present", i+1)
}
r := route{
mtu: mtu,
r := Route{
MTU: mtu,
}
_, r.route, err = net.ParseCIDR(fmt.Sprintf("%v", rRoute))
_, r.Cidr, err = net.ParseCIDR(fmt.Sprintf("%v", rRoute))
if err != nil {
return nil, fmt.Errorf("entry %v.route in tun.routes failed to parse: %v", i+1, err)
}
if !ipWithin(network, r.route) {
if !ipWithin(network, r.Cidr) {
return nil, fmt.Errorf(
"entry %v.route in tun.routes is not contained within the network attached to the certificate; route: %v, network: %v",
i+1,
r.route.String(),
r.Cidr.String(),
network.String(),
)
}
@@ -88,12 +104,12 @@ func parseRoutes(c *config.C, network *net.IPNet) ([]route, error) {
return routes, nil
}
func parseUnsafeRoutes(c *config.C, network *net.IPNet) ([]route, error) {
func parseUnsafeRoutes(c *config.C, network *net.IPNet) ([]Route, error) {
var err error
r := c.Get("tun.unsafe_routes")
if r == nil {
return []route{}, nil
return []Route{}, nil
}
rawRoutes, ok := r.([]interface{})
@@ -102,31 +118,29 @@ func parseUnsafeRoutes(c *config.C, network *net.IPNet) ([]route, error) {
}
if len(rawRoutes) < 1 {
return []route{}, nil
return []Route{}, nil
}
routes := make([]route, len(rawRoutes))
routes := make([]Route, len(rawRoutes))
for i, r := range rawRoutes {
m, ok := r.(map[interface{}]interface{})
if !ok {
return nil, fmt.Errorf("entry %v in tun.unsafe_routes is invalid", i+1)
}
rMtu, ok := m["mtu"]
if !ok {
rMtu = c.GetInt("tun.mtu", DEFAULT_MTU)
}
mtu, ok := rMtu.(int)
if !ok {
mtu, err = strconv.Atoi(rMtu.(string))
if err != nil {
return nil, fmt.Errorf("entry %v.mtu in tun.unsafe_routes is not an integer: %v", i+1, err)
var mtu int
if rMtu, ok := m["mtu"]; ok {
mtu, ok = rMtu.(int)
if !ok {
mtu, err = strconv.Atoi(rMtu.(string))
if err != nil {
return nil, fmt.Errorf("entry %v.mtu in tun.unsafe_routes is not an integer: %v", i+1, err)
}
}
}
if mtu < 500 {
return nil, fmt.Errorf("entry %v.mtu in tun.unsafe_routes is below 500: %v", i+1, mtu)
if mtu != 0 && mtu < 500 {
return nil, fmt.Errorf("entry %v.mtu in tun.unsafe_routes is below 500: %v", i+1, mtu)
}
}
rMetric, ok := m["metric"]
@@ -166,22 +180,24 @@ func parseUnsafeRoutes(c *config.C, network *net.IPNet) ([]route, error) {
return nil, fmt.Errorf("entry %v.route in tun.unsafe_routes is not present", i+1)
}
r := route{
via: &nVia,
mtu: mtu,
metric: metric,
viaVpnIp := iputil.Ip2VpnIp(nVia)
r := Route{
Via: &viaVpnIp,
MTU: mtu,
Metric: metric,
}
_, r.route, err = net.ParseCIDR(fmt.Sprintf("%v", rRoute))
_, r.Cidr, err = net.ParseCIDR(fmt.Sprintf("%v", rRoute))
if err != nil {
return nil, fmt.Errorf("entry %v.route in tun.unsafe_routes failed to parse: %v", i+1, err)
}
if ipWithin(network, r.route) {
if ipWithin(network, r.Cidr) {
return nil, fmt.Errorf(
"entry %v.route in tun.unsafe_routes is contained within the network attached to the certificate; route: %v, network: %v",
i+1,
r.route.String(),
r.Cidr.String(),
network.String(),
)
}
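One behavioral note on the hunk above: an unsafe_route that omits mtu now parses with MTU left at 0 instead of inheriting tun.mtu during parsing; each platform decides later how to treat 0. A minimal sketch of that, in the style of the tests that follow (assumes package overlay and the same test logger used below):

l := test.NewLogger()
c := config.NewC(l)
_, n, _ := net.ParseCIDR("10.0.0.0/24")
c.Settings["tun"] = map[interface{}]interface{}{"unsafe_routes": []interface{}{
    map[interface{}]interface{}{"via": "192.168.0.1", "route": "10.10.0.0/24"},
}}
routes, err := parseUnsafeRoutes(c, n)
fmt.Println(err, routes[0].MTU) // <nil> 0 — no explicit mtu was given, platform code decides later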

View File

@@ -1,4 +1,4 @@
package nebula
package overlay
import (
"fmt"
@@ -6,12 +6,13 @@ import (
"testing"
"github.com/slackhq/nebula/config"
"github.com/slackhq/nebula/util"
"github.com/slackhq/nebula/iputil"
"github.com/slackhq/nebula/test"
"github.com/stretchr/testify/assert"
)
func Test_parseRoutes(t *testing.T) {
l := util.NewTestLogger()
l := test.NewLogger()
c := config.NewC(l)
_, n, _ := net.ParseCIDR("10.0.0.0/24")
@@ -91,12 +92,12 @@ func Test_parseRoutes(t *testing.T) {
tested := 0
for _, r := range routes {
if r.mtu == 8000 {
assert.Equal(t, "10.0.0.1/32", r.route.String())
if r.MTU == 8000 {
assert.Equal(t, "10.0.0.1/32", r.Cidr.String())
tested++
} else {
assert.Equal(t, 9000, r.mtu)
assert.Equal(t, "10.0.0.0/29", r.route.String())
assert.Equal(t, 9000, r.MTU)
assert.Equal(t, "10.0.0.0/29", r.Cidr.String())
tested++
}
}
@@ -107,7 +108,7 @@ func Test_parseRoutes(t *testing.T) {
}
func Test_parseUnsafeRoutes(t *testing.T) {
l := util.NewTestLogger()
l := test.NewLogger()
c := config.NewC(l)
_, n, _ := net.ParseCIDR("10.0.0.0/24")
@@ -190,7 +191,7 @@ func Test_parseUnsafeRoutes(t *testing.T) {
c.Settings["tun"] = map[interface{}]interface{}{"unsafe_routes": []interface{}{map[interface{}]interface{}{"via": "127.0.0.1", "route": "1.0.0.0/8"}}}
routes, err = parseUnsafeRoutes(c, n)
assert.Len(t, routes, 1)
assert.Equal(t, DEFAULT_MTU, routes[0].mtu)
assert.Equal(t, 0, routes[0].MTU)
// bad mtu
c.Settings["tun"] = map[interface{}]interface{}{"unsafe_routes": []interface{}{map[interface{}]interface{}{"via": "127.0.0.1", "mtu": "nope"}}}
@@ -216,17 +217,17 @@ func Test_parseUnsafeRoutes(t *testing.T) {
tested := 0
for _, r := range routes {
if r.mtu == 8000 {
assert.Equal(t, "1.0.0.1/32", r.route.String())
if r.MTU == 8000 {
assert.Equal(t, "1.0.0.1/32", r.Cidr.String())
tested++
} else if r.mtu == 9000 {
assert.Equal(t, 9000, r.mtu)
assert.Equal(t, "1.0.0.0/29", r.route.String())
} else if r.MTU == 9000 {
assert.Equal(t, 9000, r.MTU)
assert.Equal(t, "1.0.0.0/29", r.Cidr.String())
tested++
} else {
assert.Equal(t, 1500, r.mtu)
assert.Equal(t, 1234, r.metric)
assert.Equal(t, "1.0.0.2/32", r.route.String())
assert.Equal(t, 1500, r.MTU)
assert.Equal(t, 1234, r.Metric)
assert.Equal(t, "1.0.0.2/32", r.Cidr.String())
tested++
}
}
@@ -235,3 +236,35 @@ func Test_parseUnsafeRoutes(t *testing.T) {
t.Fatal("Did not see both unsafe_routes")
}
}
func Test_makeRouteTree(t *testing.T) {
l := test.NewLogger()
c := config.NewC(l)
_, n, _ := net.ParseCIDR("10.0.0.0/24")
c.Settings["tun"] = map[interface{}]interface{}{"unsafe_routes": []interface{}{
map[interface{}]interface{}{"via": "192.168.0.1", "route": "1.0.0.0/28"},
map[interface{}]interface{}{"via": "192.168.0.2", "route": "1.0.0.1/32"},
}}
routes, err := parseUnsafeRoutes(c, n)
assert.NoError(t, err)
assert.Len(t, routes, 2)
routeTree, err := makeRouteTree(l, routes, true)
assert.NoError(t, err)
ip := iputil.Ip2VpnIp(net.ParseIP("1.0.0.2"))
r := routeTree.MostSpecificContains(ip)
assert.NotNil(t, r)
assert.IsType(t, iputil.VpnIp(0), r)
assert.EqualValues(t, iputil.Ip2VpnIp(net.ParseIP("192.168.0.1")), r)
ip = iputil.Ip2VpnIp(net.ParseIP("1.0.0.1"))
r = routeTree.MostSpecificContains(ip)
assert.NotNil(t, r)
assert.IsType(t, iputil.VpnIp(0), r)
assert.EqualValues(t, iputil.Ip2VpnIp(net.ParseIP("192.168.0.2")), r)
ip = iputil.Ip2VpnIp(net.ParseIP("1.1.0.1"))
r = routeTree.MostSpecificContains(ip)
assert.Nil(t, r)
}

overlay/tun.go (new file, 51 lines)

@@ -0,0 +1,51 @@
package overlay
import (
"net"
"github.com/sirupsen/logrus"
"github.com/slackhq/nebula/config"
"github.com/slackhq/nebula/util"
)
const DefaultMTU = 1300
func NewDeviceFromConfig(c *config.C, l *logrus.Logger, tunCidr *net.IPNet, fd *int, routines int) (Device, error) {
routes, err := parseRoutes(c, tunCidr)
if err != nil {
return nil, util.NewContextualError("Could not parse tun.routes", nil, err)
}
unsafeRoutes, err := parseUnsafeRoutes(c, tunCidr)
if err != nil {
return nil, util.NewContextualError("Could not parse tun.unsafe_routes", nil, err)
}
routes = append(routes, unsafeRoutes...)
switch {
case c.GetBool("tun.disabled", false):
tun := newDisabledTun(tunCidr, c.GetInt("tun.tx_queue", 500), c.GetBool("stats.message_metrics", false), l)
return tun, nil
case fd != nil:
return newTunFromFd(
l,
*fd,
tunCidr,
c.GetInt("tun.mtu", DefaultMTU),
routes,
c.GetInt("tun.tx_queue", 500),
)
default:
return newTun(
l,
c.GetString("tun.dev", ""),
tunCidr,
c.GetInt("tun.mtu", DefaultMTU),
routes,
c.GetInt("tun.tx_queue", 500),
routines > 1,
)
}
}
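A sketch of how a caller might consume the new constructor and the Device interface; the wiring below (function name, buffer handling) is illustrative and not part of this change:

func runTun(l *logrus.Logger, c *config.C, tunCidr *net.IPNet) error {
    dev, err := overlay.NewDeviceFromConfig(c, l, tunCidr, nil, 1)
    if err != nil {
        return err
    }
    defer dev.Close()

    if err := dev.Activate(); err != nil {
        return err
    }

    buf := make([]byte, c.GetInt("tun.mtu", overlay.DefaultMTU))
    for {
        n, err := dev.Read(buf)
        if err != nil {
            return err
        }
        // buf[:n] is a plaintext IP frame from the OS, ready for the encryption path.
        _ = buf[:n]
    }
}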

overlay/tun_android.go (new file, 69 lines)

@@ -0,0 +1,69 @@
//go:build !e2e_testing
// +build !e2e_testing
package overlay
import (
"fmt"
"io"
"net"
"os"
"github.com/sirupsen/logrus"
"github.com/slackhq/nebula/cidr"
"github.com/slackhq/nebula/iputil"
)
type tun struct {
io.ReadWriteCloser
fd int
cidr *net.IPNet
routeTree *cidr.Tree4
l *logrus.Logger
}
func newTunFromFd(l *logrus.Logger, deviceFd int, cidr *net.IPNet, _ int, routes []Route, _ int) (*tun, error) {
routeTree, err := makeRouteTree(l, routes, false)
if err != nil {
return nil, err
}
file := os.NewFile(uintptr(deviceFd), "/dev/net/tun")
return &tun{
ReadWriteCloser: file,
fd: int(file.Fd()),
cidr: cidr,
l: l,
routeTree: routeTree,
}, nil
}
func newTun(_ *logrus.Logger, _ string, _ *net.IPNet, _ int, _ []Route, _ int, _ bool) (*tun, error) {
return nil, fmt.Errorf("newTun not supported in Android")
}
func (t *tun) RouteFor(ip iputil.VpnIp) iputil.VpnIp {
r := t.routeTree.MostSpecificContains(ip)
if r != nil {
return r.(iputil.VpnIp)
}
return 0
}
func (t tun) Activate() error {
return nil
}
func (t *tun) Cidr() *net.IPNet {
return t.cidr
}
func (t *tun) Name() string {
return "android"
}
func (t *tun) NewMultiQueueReader() (io.ReadWriteCloser, error) {
return nil, fmt.Errorf("TODO: multiqueue not implemented for android")
}

View File

@@ -1,9 +1,10 @@
//go:build !ios && !e2e_testing
// +build !ios,!e2e_testing
package nebula
package overlay
import (
"errors"
"fmt"
"io"
"net"
@@ -12,18 +13,20 @@ import (
"unsafe"
"github.com/sirupsen/logrus"
"github.com/slackhq/nebula/cidr"
"github.com/slackhq/nebula/iputil"
netroute "golang.org/x/net/route"
"golang.org/x/sys/unix"
)
type Tun struct {
type tun struct {
io.ReadWriteCloser
Device string
Cidr *net.IPNet
DefaultMTU int
TXQueueLen int
UnsafeRoutes []route
l *logrus.Logger
Device string
cidr *net.IPNet
DefaultMTU int
Routes []Route
routeTree *cidr.Tree4
l *logrus.Logger
// cache out buffer since we need to prepend 4 bytes for tun metadata
out []byte
@@ -74,15 +77,10 @@ type ifreqMTU struct {
pad [8]byte
}
type ifreqQLEN struct {
Name [16]byte
Value int32
pad [8]byte
}
func newTun(l *logrus.Logger, name string, cidr *net.IPNet, defaultMTU int, routes []route, unsafeRoutes []route, txQueueLen int, multiqueue bool) (ifce *Tun, err error) {
if len(routes) > 0 {
return nil, fmt.Errorf("route MTU not supported in Darwin")
func newTun(l *logrus.Logger, name string, cidr *net.IPNet, defaultMTU int, routes []Route, _ int, _ bool) (*tun, error) {
routeTree, err := makeRouteTree(l, routes, false)
if err != nil {
return nil, err
}
ifIndex := -1
@@ -106,7 +104,7 @@ func newTun(l *logrus.Logger, name string, cidr *net.IPNet, defaultMTU int, rout
ctlName [96]byte
}{}
copy(ctlInfo.ctlName[:], []byte(utunControlName))
copy(ctlInfo.ctlName[:], utunControlName)
err = ioctl(uintptr(fd), uintptr(_CTLIOCGINFO), uintptr(unsafe.Pointer(ctlInfo)))
if err != nil {
@@ -125,7 +123,7 @@ func newTun(l *logrus.Logger, name string, cidr *net.IPNet, defaultMTU int, rout
unix.SYS_CONNECT,
uintptr(fd),
uintptr(unsafe.Pointer(&sc)),
uintptr(sockaddrCtlSize),
sockaddrCtlSize,
)
if errno != 0 {
return nil, fmt.Errorf("SYS_CONNECT: %v", errno)
@@ -152,44 +150,44 @@ func newTun(l *logrus.Logger, name string, cidr *net.IPNet, defaultMTU int, rout
file := os.NewFile(uintptr(fd), "")
tun := &Tun{
tun := &tun{
ReadWriteCloser: file,
Device: name,
Cidr: cidr,
cidr: cidr,
DefaultMTU: defaultMTU,
TXQueueLen: txQueueLen,
UnsafeRoutes: unsafeRoutes,
Routes: routes,
routeTree: routeTree,
l: l,
}
return tun, nil
}
func (t *Tun) deviceBytes() (o [16]byte) {
func (t *tun) deviceBytes() (o [16]byte) {
for i, c := range t.Device {
o[i] = byte(c)
}
return
}
func newTunFromFd(l *logrus.Logger, deviceFd int, cidr *net.IPNet, defaultMTU int, routes []route, unsafeRoutes []route, txQueueLen int) (ifce *Tun, err error) {
func newTunFromFd(_ *logrus.Logger, _ int, _ *net.IPNet, _ int, _ []Route, _ int) (*tun, error) {
return nil, fmt.Errorf("newTunFromFd not supported in Darwin")
}
func (c *Tun) Close() error {
if c.ReadWriteCloser != nil {
return c.ReadWriteCloser.Close()
func (t *tun) Close() error {
if t.ReadWriteCloser != nil {
return t.ReadWriteCloser.Close()
}
return nil
}
func (t *Tun) Activate() error {
func (t *tun) Activate() error {
devName := t.deviceBytes()
var addr, mask [4]byte
copy(addr[:], t.Cidr.IP.To4())
copy(mask[:], t.Cidr.Mask)
copy(addr[:], t.cidr.IP.To4())
copy(mask[:], t.cidr.Mask)
s, err := unix.Socket(
unix.AF_INET,
@@ -231,7 +229,7 @@ func (t *Tun) Activate() error {
// Set the MTU on the device
ifm := ifreqMTU{Name: devName, MTU: int32(t.DefaultMTU)}
if err = ioctl(fd, unix.SIOCSIFMTU, uintptr(unsafe.Pointer(&ifm))); err != nil {
return fmt.Errorf("Failed to set tun mtu: %v", err)
return fmt.Errorf("failed to set tun mtu: %v", err)
}
/*
@@ -275,6 +273,9 @@ func (t *Tun) Activate() error {
copy(maskAddr.IP[:], mask[:])
err = addRoute(routeSock, routeAddr, maskAddr, linkAddr)
if err != nil {
if errors.Is(err, unix.EEXIST) {
err = fmt.Errorf("unable to add tun route, identical route already exists: %s", t.cidr)
}
return err
}
@@ -285,13 +286,23 @@ func (t *Tun) Activate() error {
}
// Unsafe path routes
for _, r := range t.UnsafeRoutes {
copy(routeAddr.IP[:], r.route.IP.To4())
copy(maskAddr.IP[:], net.IP(r.route.Mask).To4())
for _, r := range t.Routes {
if r.Via == nil {
// We don't allow route MTUs so only install routes with a via
continue
}
copy(routeAddr.IP[:], r.Cidr.IP.To4())
copy(maskAddr.IP[:], net.IP(r.Cidr.Mask).To4())
err = addRoute(routeSock, routeAddr, maskAddr, linkAddr)
if err != nil {
return err
if errors.Is(err, unix.EEXIST) {
t.l.WithField("route", r.Cidr).
Warnf("unable to add unsafe_route, identical route already exists")
} else {
return err
}
}
// TODO how to set metric
@@ -300,6 +311,15 @@ func (t *Tun) Activate() error {
return nil
}
func (t *tun) RouteFor(ip iputil.VpnIp) iputil.VpnIp {
r := t.routeTree.MostSpecificContains(ip)
if r != nil {
return r.(iputil.VpnIp)
}
return 0
}
// Get the LinkAddr for the interface of the given name
// TODO: Is there an easier way to fetch this when we create the interface?
// Maybe SIOCGIFINDEX? but this doesn't appear to exist in the darwin headers.
@@ -343,19 +363,17 @@ func addRoute(sock int, addr, mask *netroute.Inet4Addr, link *netroute.LinkAddr)
data, err := r.Marshal()
if err != nil {
return fmt.Errorf("failed to create route.RouteMessage: %v", err)
return fmt.Errorf("failed to create route.RouteMessage: %w", err)
}
_, err = unix.Write(sock, data[:])
if err != nil {
return fmt.Errorf("failed to write route.RouteMessage to socket: %v", err)
return fmt.Errorf("failed to write route.RouteMessage to socket: %w", err)
}
return nil
}
var _ io.ReadWriteCloser = (*Tun)(nil)
func (t *Tun) Read(to []byte) (int, error) {
func (t *tun) Read(to []byte) (int, error) {
buf := make([]byte, len(to)+4)
@@ -366,7 +384,7 @@ func (t *Tun) Read(to []byte) (int, error) {
}
// Write is only valid for single threaded use
func (t *Tun) Write(from []byte) (int, error) {
func (t *tun) Write(from []byte) (int, error) {
buf := t.out
if cap(buf) < len(from)+4 {
buf = make([]byte, len(from)+4)
@@ -385,7 +403,7 @@ func (t *Tun) Write(from []byte) (int, error) {
} else if ipVer == 6 {
buf[3] = syscall.AF_INET6
} else {
return 0, fmt.Errorf("Unable to determine IP version from packet")
return 0, fmt.Errorf("unable to determine IP version from packet")
}
copy(buf[4:], from)
@@ -394,19 +412,14 @@ func (t *Tun) Write(from []byte) (int, error) {
return n - 4, err
}
func (c *Tun) CidrNet() *net.IPNet {
return c.Cidr
func (t *tun) Cidr() *net.IPNet {
return t.cidr
}
func (c *Tun) DeviceName() string {
return c.Device
func (t *tun) Name() string {
return t.Device
}
func (c *Tun) WriteRaw(b []byte) error {
_, err := c.Write(b)
return err
}
func (t *Tun) NewMultiQueueReader() (io.ReadWriteCloser, error) {
func (t *tun) NewMultiQueueReader() (io.ReadWriteCloser, error) {
return nil, fmt.Errorf("TODO: multiqueue not implemented for darwin")
}

View File

@@ -1,4 +1,4 @@
package nebula
package overlay
import (
"encoding/binary"
@@ -9,6 +9,7 @@ import (
"github.com/rcrowley/go-metrics"
"github.com/sirupsen/logrus"
"github.com/slackhq/nebula/iputil"
)
type disabledTun struct {
@@ -43,11 +44,15 @@ func (*disabledTun) Activate() error {
return nil
}
func (t *disabledTun) CidrNet() *net.IPNet {
func (*disabledTun) RouteFor(iputil.VpnIp) iputil.VpnIp {
return 0
}
func (t *disabledTun) Cidr() *net.IPNet {
return t.cidr
}
func (*disabledTun) DeviceName() string {
func (*disabledTun) Name() string {
return "disabled"
}
@@ -71,7 +76,8 @@ func (t *disabledTun) Read(b []byte) (int, error) {
func (t *disabledTun) handleICMPEchoRequest(b []byte) bool {
// Return early if this is not a simple ICMP Echo Request
if !(len(b) >= 28 && len(b) <= mtu && b[0] == 0x45 && b[9] == 0x01 && b[20] == 0x08) {
//TODO: make constants out of these
if !(len(b) >= 28 && len(b) <= 9001 && b[0] == 0x45 && b[9] == 0x01 && b[20] == 0x08) {
return false
}
@@ -122,11 +128,6 @@ func (t *disabledTun) Write(b []byte) (int, error) {
return len(b), nil
}
func (t *disabledTun) WriteRaw(b []byte) error {
_, err := t.Write(b)
return err
}
func (t *disabledTun) NewMultiQueueReader() (io.ReadWriteCloser, error) {
return t, nil
}

overlay/tun_freebsd.go (new file, 122 lines)

@@ -0,0 +1,122 @@
//go:build !e2e_testing
// +build !e2e_testing
package overlay
import (
"fmt"
"io"
"net"
"os"
"os/exec"
"regexp"
"strconv"
"strings"
"github.com/sirupsen/logrus"
"github.com/slackhq/nebula/cidr"
"github.com/slackhq/nebula/iputil"
)
var deviceNameRE = regexp.MustCompile(`^tun[0-9]+$`)
type tun struct {
Device string
cidr *net.IPNet
MTU int
Routes []Route
routeTree *cidr.Tree4
l *logrus.Logger
io.ReadWriteCloser
}
func (t *tun) Close() error {
if t.ReadWriteCloser != nil {
return t.ReadWriteCloser.Close()
}
return nil
}
func newTunFromFd(_ *logrus.Logger, _ int, _ *net.IPNet, _ int, _ []Route, _ int) (*tun, error) {
return nil, fmt.Errorf("newTunFromFd not supported in FreeBSD")
}
func newTun(l *logrus.Logger, deviceName string, cidr *net.IPNet, defaultMTU int, routes []Route, _ int, _ bool) (*tun, error) {
routeTree, err := makeRouteTree(l, routes, false)
if err != nil {
return nil, err
}
if strings.HasPrefix(deviceName, "/dev/") {
deviceName = strings.TrimPrefix(deviceName, "/dev/")
}
if !deviceNameRE.MatchString(deviceName) {
return nil, fmt.Errorf("tun.dev must match `tun[0-9]+`")
}
return &tun{
Device: deviceName,
cidr: cidr,
MTU: defaultMTU,
Routes: routes,
routeTree: routeTree,
l: l,
}, nil
}
func (t *tun) Activate() error {
var err error
t.ReadWriteCloser, err = os.OpenFile("/dev/"+t.Device, os.O_RDWR, 0)
if err != nil {
return fmt.Errorf("activate failed: %v", err)
}
// TODO use syscalls instead of exec.Command
t.l.Debug("command: ifconfig", t.Device, t.cidr.String(), t.cidr.IP.String())
if err = exec.Command("/sbin/ifconfig", t.Device, t.cidr.String(), t.cidr.IP.String()).Run(); err != nil {
return fmt.Errorf("failed to run 'ifconfig': %s", err)
}
t.l.Debug("command: route", "-n", "add", "-net", t.cidr.String(), "-interface", t.Device)
if err = exec.Command("/sbin/route", "-n", "add", "-net", t.cidr.String(), "-interface", t.Device).Run(); err != nil {
return fmt.Errorf("failed to run 'route add': %s", err)
}
t.l.Debug("command: ifconfig", t.Device, "mtu", strconv.Itoa(t.MTU))
if err = exec.Command("/sbin/ifconfig", t.Device, "mtu", strconv.Itoa(t.MTU)).Run(); err != nil {
return fmt.Errorf("failed to run 'ifconfig': %s", err)
}
// Unsafe path routes
for _, r := range t.Routes {
if r.Via == nil {
// We don't allow route MTUs so only install routes with a via
continue
}
t.l.Debug("command: route", "-n", "add", "-net", r.Cidr.String(), "-interface", t.Device)
if err = exec.Command("/sbin/route", "-n", "add", "-net", r.Cidr.String(), "-interface", t.Device).Run(); err != nil {
return fmt.Errorf("failed to run 'route add' for unsafe_route %s: %s", r.Cidr.String(), err)
}
}
return nil
}
func (t *tun) RouteFor(ip iputil.VpnIp) iputil.VpnIp {
r := t.routeTree.MostSpecificContains(ip)
if r != nil {
return r.(iputil.VpnIp)
}
return 0
}
func (t *tun) Cidr() *net.IPNet {
return t.cidr
}
func (t *tun) Name() string {
return t.Device
}
func (t *tun) NewMultiQueueReader() (io.ReadWriteCloser, error) {
return nil, fmt.Errorf("TODO: multiqueue not implemented for freebsd")
}

overlay/tun_ios.go (new file, 125 lines)

@@ -0,0 +1,125 @@
//go:build ios && !e2e_testing
// +build ios,!e2e_testing
package overlay
import (
"errors"
"fmt"
"io"
"net"
"os"
"sync"
"syscall"
"github.com/sirupsen/logrus"
"github.com/slackhq/nebula/cidr"
"github.com/slackhq/nebula/iputil"
)
type tun struct {
io.ReadWriteCloser
cidr *net.IPNet
routeTree *cidr.Tree4
}
func newTun(_ *logrus.Logger, _ string, _ *net.IPNet, _ int, _ []Route, _ int, _ bool) (*tun, error) {
return nil, fmt.Errorf("newTun not supported in iOS")
}
func newTunFromFd(l *logrus.Logger, deviceFd int, cidr *net.IPNet, _ int, routes []Route, _ int) (*tun, error) {
routeTree, err := makeRouteTree(l, routes, false)
if err != nil {
return nil, err
}
file := os.NewFile(uintptr(deviceFd), "/dev/tun")
return &tun{
cidr: cidr,
ReadWriteCloser: &tunReadCloser{f: file},
routeTree: routeTree,
}, nil
}
func (t *tun) Activate() error {
return nil
}
func (t *tun) RouteFor(ip iputil.VpnIp) iputil.VpnIp {
r := t.routeTree.MostSpecificContains(ip)
if r != nil {
return r.(iputil.VpnIp)
}
return 0
}
// The following is hoisted up from water; we do this so we can inject our own fd on iOS
type tunReadCloser struct {
f io.ReadWriteCloser
rMu sync.Mutex
rBuf []byte
wMu sync.Mutex
wBuf []byte
}
func (tr *tunReadCloser) Read(to []byte) (int, error) {
tr.rMu.Lock()
defer tr.rMu.Unlock()
if cap(tr.rBuf) < len(to)+4 {
tr.rBuf = make([]byte, len(to)+4)
}
tr.rBuf = tr.rBuf[:len(to)+4]
n, err := tr.f.Read(tr.rBuf)
copy(to, tr.rBuf[4:])
return n - 4, err
}
func (tr *tunReadCloser) Write(from []byte) (int, error) {
if len(from) == 0 {
return 0, syscall.EIO
}
tr.wMu.Lock()
defer tr.wMu.Unlock()
if cap(tr.wBuf) < len(from)+4 {
tr.wBuf = make([]byte, len(from)+4)
}
tr.wBuf = tr.wBuf[:len(from)+4]
// Determine the IP Family for the NULL L2 Header
ipVer := from[0] >> 4
if ipVer == 4 {
tr.wBuf[3] = syscall.AF_INET
} else if ipVer == 6 {
tr.wBuf[3] = syscall.AF_INET6
} else {
return 0, errors.New("unable to determine IP version from packet")
}
copy(tr.wBuf[4:], from)
n, err := tr.f.Write(tr.wBuf)
return n - 4, err
}
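For context on the 4-byte offset used by Read and Write above: utun devices on Darwin and iOS prepend a protocol-family word to every frame. The note below is a summary, not code from this change.

// utun framing: [0x00 0x00 0x00 AF] + IP packet
// AF is syscall.AF_INET (2) for IPv4 or syscall.AF_INET6 (30) for IPv6; the small value
// lands in the last byte of the big-endian word, which is why only wBuf[3] is set here
// and why Read hands back rBuf[4:].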
func (tr *tunReadCloser) Close() error {
return tr.f.Close()
}
func (t *tun) Cidr() *net.IPNet {
return t.cidr
}
func (t *tun) Name() string {
return "iOS"
}
func (t *tun) NewMultiQueueReader() (io.ReadWriteCloser, error) {
return nil, fmt.Errorf("TODO: multiqueue not implemented for ios")
}

View File

@@ -1,7 +1,7 @@
//go:build !android && !e2e_testing
// +build !android,!e2e_testing
package nebula
package overlay
import (
"fmt"
@@ -12,21 +12,23 @@ import (
"unsafe"
"github.com/sirupsen/logrus"
"github.com/slackhq/nebula/cidr"
"github.com/slackhq/nebula/iputil"
"github.com/vishvananda/netlink"
"golang.org/x/sys/unix"
)
type Tun struct {
type tun struct {
io.ReadWriteCloser
fd int
Device string
Cidr *net.IPNet
MaxMTU int
DefaultMTU int
TXQueueLen int
Routes []route
UnsafeRoutes []route
l *logrus.Logger
fd int
Device string
cidr *net.IPNet
MaxMTU int
DefaultMTU int
TXQueueLen int
Routes []Route
routeTree *cidr.Tree4
l *logrus.Logger
}
type ifReq struct {
@@ -43,26 +45,6 @@ func ioctl(a1, a2, a3 uintptr) error {
return nil
}
/*
func ipv4(addr string) (o [4]byte, err error) {
ip := net.ParseIP(addr).To4()
if ip == nil {
err = fmt.Errorf("failed to parse addr %s", addr)
return
}
for i, b := range ip {
o[i] = b
}
return
}
*/
const (
cIFF_TUN = 0x0001
cIFF_NO_PI = 0x1000
cIFF_MULTI_QUEUE = 0x0100
)
type ifreqAddr struct {
Name [16]byte
Addr unix.RawSockaddrInet4
@@ -81,34 +63,37 @@ type ifreqQLEN struct {
pad [8]byte
}
func newTunFromFd(l *logrus.Logger, deviceFd int, cidr *net.IPNet, defaultMTU int, routes []route, unsafeRoutes []route, txQueueLen int) (ifce *Tun, err error) {
func newTunFromFd(l *logrus.Logger, deviceFd int, cidr *net.IPNet, defaultMTU int, routes []Route, txQueueLen int) (*tun, error) {
routeTree, err := makeRouteTree(l, routes, true)
if err != nil {
return nil, err
}
file := os.NewFile(uintptr(deviceFd), "/dev/net/tun")
ifce = &Tun{
return &tun{
ReadWriteCloser: file,
fd: int(file.Fd()),
Device: "tun0",
Cidr: cidr,
cidr: cidr,
DefaultMTU: defaultMTU,
TXQueueLen: txQueueLen,
Routes: routes,
UnsafeRoutes: unsafeRoutes,
routeTree: routeTree,
l: l,
}
return
}, nil
}
func newTun(l *logrus.Logger, deviceName string, cidr *net.IPNet, defaultMTU int, routes []route, unsafeRoutes []route, txQueueLen int, multiqueue bool) (ifce *Tun, err error) {
func newTun(l *logrus.Logger, deviceName string, cidr *net.IPNet, defaultMTU int, routes []Route, txQueueLen int, multiqueue bool) (*tun, error) {
fd, err := unix.Open("/dev/net/tun", os.O_RDWR, 0)
if err != nil {
return nil, err
}
var req ifReq
req.Flags = uint16(cIFF_TUN | cIFF_NO_PI)
req.Flags = uint16(unix.IFF_TUN | unix.IFF_NO_PI)
if multiqueue {
req.Flags |= cIFF_MULTI_QUEUE
req.Flags |= unix.IFF_MULTI_QUEUE
}
copy(req.Name[:], deviceName)
if err = ioctl(uintptr(fd), uintptr(unix.TUNSETIFF), uintptr(unsafe.Pointer(&req))); err != nil {
@@ -120,35 +105,43 @@ func newTun(l *logrus.Logger, deviceName string, cidr *net.IPNet, defaultMTU int
maxMTU := defaultMTU
for _, r := range routes {
if r.mtu > maxMTU {
maxMTU = r.mtu
if r.MTU == 0 {
r.MTU = defaultMTU
}
if r.MTU > maxMTU {
maxMTU = r.MTU
}
}
ifce = &Tun{
routeTree, err := makeRouteTree(l, routes, true)
if err != nil {
return nil, err
}
return &tun{
ReadWriteCloser: file,
fd: int(file.Fd()),
Device: name,
Cidr: cidr,
cidr: cidr,
MaxMTU: maxMTU,
DefaultMTU: defaultMTU,
TXQueueLen: txQueueLen,
Routes: routes,
UnsafeRoutes: unsafeRoutes,
routeTree: routeTree,
l: l,
}
return
}, nil
}
func (c *Tun) NewMultiQueueReader() (io.ReadWriteCloser, error) {
func (t *tun) NewMultiQueueReader() (io.ReadWriteCloser, error) {
fd, err := unix.Open("/dev/net/tun", os.O_RDWR, 0)
if err != nil {
return nil, err
}
var req ifReq
req.Flags = uint16(cIFF_TUN | cIFF_NO_PI | cIFF_MULTI_QUEUE)
copy(req.Name[:], c.Device)
req.Flags = uint16(unix.IFF_TUN | unix.IFF_NO_PI | unix.IFF_MULTI_QUEUE)
copy(req.Name[:], t.Device)
if err = ioctl(uintptr(fd), uintptr(unix.TUNSETIFF), uintptr(unsafe.Pointer(&req))); err != nil {
return nil, err
}
@@ -158,46 +151,52 @@ func (c *Tun) NewMultiQueueReader() (io.ReadWriteCloser, error) {
return file, nil
}
func (c *Tun) WriteRaw(b []byte) error {
func (t *tun) RouteFor(ip iputil.VpnIp) iputil.VpnIp {
r := t.routeTree.MostSpecificContains(ip)
if r != nil {
return r.(iputil.VpnIp)
}
return 0
}
func (t *tun) Write(b []byte) (int, error) {
var nn int
max := len(b)
for {
max := len(b)
n, err := unix.Write(c.fd, b[nn:max])
n, err := unix.Write(t.fd, b[nn:max])
if n > 0 {
nn += n
}
if nn == len(b) {
return err
return nn, err
}
if err != nil {
return err
return nn, err
}
if n == 0 {
return io.ErrUnexpectedEOF
return nn, io.ErrUnexpectedEOF
}
}
}
func (c *Tun) Write(b []byte) (int, error) {
return len(b), c.WriteRaw(b)
}
func (c Tun) deviceBytes() (o [16]byte) {
for i, c := range c.Device {
func (t tun) deviceBytes() (o [16]byte) {
for i, c := range t.Device {
o[i] = byte(c)
}
return
}
func (c Tun) Activate() error {
devName := c.deviceBytes()
func (t tun) Activate() error {
devName := t.deviceBytes()
var addr, mask [4]byte
copy(addr[:], c.Cidr.IP.To4())
copy(mask[:], c.Cidr.Mask)
copy(addr[:], t.cidr.IP.To4())
copy(mask[:], t.cidr.Mask)
s, err := unix.Socket(
unix.AF_INET,
@@ -235,17 +234,17 @@ func (c Tun) Activate() error {
}
// Set the MTU on the device
ifm := ifreqMTU{Name: devName, MTU: int32(c.MaxMTU)}
ifm := ifreqMTU{Name: devName, MTU: int32(t.MaxMTU)}
if err = ioctl(fd, unix.SIOCSIFMTU, uintptr(unsafe.Pointer(&ifm))); err != nil {
// This is currently a non fatal condition because the route table must have the MTU set appropriately as well
c.l.WithError(err).Error("Failed to set tun mtu")
t.l.WithError(err).Error("Failed to set tun mtu")
}
// Set the transmit queue length
ifrq := ifreqQLEN{Name: devName, Value: int32(c.TXQueueLen)}
ifrq := ifreqQLEN{Name: devName, Value: int32(t.TXQueueLen)}
if err = ioctl(fd, unix.SIOCSIFTXQLEN, uintptr(unsafe.Pointer(&ifrq))); err != nil {
// If we can't set the queue length nebula will still work but it may lead to packet loss
c.l.WithError(err).Error("Failed to set tun tx queue length")
t.l.WithError(err).Error("Failed to set tun tx queue length")
}
// Bring up the interface
@@ -255,59 +254,46 @@ func (c Tun) Activate() error {
}
// Set the routes
link, err := netlink.LinkByName(c.Device)
link, err := netlink.LinkByName(t.Device)
if err != nil {
return fmt.Errorf("failed to get tun device link: %s", err)
}
// Default route
dr := &net.IPNet{IP: c.Cidr.IP.Mask(c.Cidr.Mask), Mask: c.Cidr.Mask}
dr := &net.IPNet{IP: t.cidr.IP.Mask(t.cidr.Mask), Mask: t.cidr.Mask}
nr := netlink.Route{
LinkIndex: link.Attrs().Index,
Dst: dr,
MTU: c.DefaultMTU,
AdvMSS: c.advMSS(route{}),
MTU: t.DefaultMTU,
AdvMSS: t.advMSS(Route{}),
Scope: unix.RT_SCOPE_LINK,
Src: c.Cidr.IP,
Src: t.cidr.IP,
Protocol: unix.RTPROT_KERNEL,
Table: unix.RT_TABLE_MAIN,
Type: unix.RTN_UNICAST,
}
err = netlink.RouteReplace(&nr)
if err != nil {
return fmt.Errorf("failed to set mtu %v on the default route %v; %v", c.DefaultMTU, dr, err)
return fmt.Errorf("failed to set mtu %v on the default route %v; %v", t.DefaultMTU, dr, err)
}
// Path routes
for _, r := range c.Routes {
for _, r := range t.Routes {
nr := netlink.Route{
LinkIndex: link.Attrs().Index,
Dst: r.route,
MTU: r.mtu,
AdvMSS: c.advMSS(r),
Dst: r.Cidr,
MTU: r.MTU,
AdvMSS: t.advMSS(r),
Scope: unix.RT_SCOPE_LINK,
}
if r.Metric > 0 {
nr.Priority = r.Metric
}
err = netlink.RouteAdd(&nr)
if err != nil {
return fmt.Errorf("failed to set mtu %v on route %v; %v", r.mtu, r.route, err)
}
}
// Unsafe path routes
for _, r := range c.UnsafeRoutes {
nr := netlink.Route{
LinkIndex: link.Attrs().Index,
Dst: r.route,
MTU: r.mtu,
Priority: r.metric,
AdvMSS: c.advMSS(r),
Scope: unix.RT_SCOPE_LINK,
}
err = netlink.RouteAdd(&nr)
if err != nil {
return fmt.Errorf("failed to set mtu %v on route %v; %v", r.mtu, r.route, err)
return fmt.Errorf("failed to set mtu %v on route %v; %v", r.MTU, r.Cidr, err)
}
}
@@ -320,22 +306,22 @@ func (c Tun) Activate() error {
return nil
}
func (c *Tun) CidrNet() *net.IPNet {
return c.Cidr
func (t *tun) Cidr() *net.IPNet {
return t.cidr
}
func (c *Tun) DeviceName() string {
return c.Device
func (t *tun) Name() string {
return t.Device
}
func (c Tun) advMSS(r route) int {
mtu := r.mtu
if r.mtu == 0 {
mtu = c.DefaultMTU
func (t tun) advMSS(r Route) int {
mtu := r.MTU
if r.MTU == 0 {
mtu = t.DefaultMTU
}
// We only need to set advmss if the route MTU does not match the device MTU
if mtu != c.MaxMTU {
if mtu != t.MaxMTU {
return mtu - 40
}
return 0

View File

@@ -1,25 +1,25 @@
//go:build !e2e_testing
// +build !e2e_testing
package nebula
package overlay
import "testing"
var runAdvMSSTests = []struct {
name string
tun Tun
r route
tun tun
r Route
expected int
}{
// Standard case, default MTU is the device max MTU
{"default", Tun{DefaultMTU: 1440, MaxMTU: 1440}, route{}, 0},
{"default-min", Tun{DefaultMTU: 1440, MaxMTU: 1440}, route{mtu: 1440}, 0},
{"default-low", Tun{DefaultMTU: 1440, MaxMTU: 1440}, route{mtu: 1200}, 1160},
{"default", tun{DefaultMTU: 1440, MaxMTU: 1440}, Route{}, 0},
{"default-min", tun{DefaultMTU: 1440, MaxMTU: 1440}, Route{MTU: 1440}, 0},
{"default-low", tun{DefaultMTU: 1440, MaxMTU: 1440}, Route{MTU: 1200}, 1160},
// Case where we have a route MTU set higher than the default
{"route", Tun{DefaultMTU: 1440, MaxMTU: 8941}, route{}, 1400},
{"route-min", Tun{DefaultMTU: 1440, MaxMTU: 8941}, route{mtu: 1440}, 1400},
{"route-high", Tun{DefaultMTU: 1440, MaxMTU: 8941}, route{mtu: 8941}, 0},
{"route", tun{DefaultMTU: 1440, MaxMTU: 8941}, Route{}, 1400},
{"route-min", tun{DefaultMTU: 1440, MaxMTU: 8941}, Route{MTU: 1440}, 1400},
{"route-high", tun{DefaultMTU: 1440, MaxMTU: 8941}, Route{MTU: 8941}, 0},
}
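The expected values in the table above follow directly from the advMSS method in the Linux tun diff earlier; a worked example, for reference:

// advMSS returns mtu-40 (20-byte IPv4 header + 20-byte TCP header) only when the
// effective route MTU differs from the device MaxMTU; otherwise 0 lets the kernel decide.
//   Route{MTU: 1200} on MaxMTU 1440                         -> 1200 != 1440 -> 1200 - 40 = 1160
//   Route{} (falls back to DefaultMTU 1440) on MaxMTU 1440  -> equal        -> 0
//   Route{} (falls back to DefaultMTU 1440) on MaxMTU 8941  -> 1440 != 8941 -> 1440 - 40 = 1400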
func TestTunAdvMSS(t *testing.T) {

overlay/tun_tester.go (new file, 123 lines)

@@ -0,0 +1,123 @@
//go:build e2e_testing
// +build e2e_testing
package overlay
import (
"fmt"
"io"
"net"
"os"
"github.com/sirupsen/logrus"
"github.com/slackhq/nebula/cidr"
"github.com/slackhq/nebula/iputil"
)
type TestTun struct {
Device string
cidr *net.IPNet
Routes []Route
routeTree *cidr.Tree4
l *logrus.Logger
rxPackets chan []byte // Packets to receive into nebula
TxPackets chan []byte // Packets transmitted outside by nebula
}
func newTun(l *logrus.Logger, deviceName string, cidr *net.IPNet, _ int, routes []Route, _ int, _ bool) (*TestTun, error) {
routeTree, err := makeRouteTree(l, routes, false)
if err != nil {
return nil, err
}
return &TestTun{
Device: deviceName,
cidr: cidr,
Routes: routes,
routeTree: routeTree,
l: l,
rxPackets: make(chan []byte, 10),
TxPackets: make(chan []byte, 10),
}, nil
}
func newTunFromFd(_ *logrus.Logger, _ int, _ *net.IPNet, _ int, _ []Route, _ int) (*TestTun, error) {
return nil, fmt.Errorf("newTunFromFd not supported")
}
// Send will place a byte array onto the receive queue for nebula to consume
// These are unencrypted ip layer frames destined for another nebula node.
// Packets should exit the udp side; capture them with udpConn.Get
func (t *TestTun) Send(packet []byte) {
if t.l.Level >= logrus.InfoLevel {
t.l.WithField("dataLen", len(packet)).Info("Tun receiving injected packet")
}
t.rxPackets <- packet
}
// Get will pull an unencrypted ip layer frame from the transmit queue that
// nebula meant to send to some application on the local system.
// Packets were ingested from the udp side; you can send them with udpConn.Send
func (t *TestTun) Get(block bool) []byte {
if block {
return <-t.TxPackets
}
select {
case p := <-t.TxPackets:
return p
default:
return nil
}
}
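A rough, self-contained sketch of how a test harness drives these queues (a stand-in byte slice is used instead of a real IP frame; the real e2e wiring lives elsewhere):

l := logrus.New()
_, cidrNet, _ := net.ParseCIDR("10.0.0.1/24")
tt, _ := newTun(l, "test", cidrNet, 1300, nil, 0, false)

tt.Send([]byte{0x45, 0x00})  // what the harness injects...
in := make([]byte, 2)
n, _ := tt.Read(in)          // ...is what nebula's tun reader sees
_ = in[:n]

tt.Write([]byte{0x45, 0x01}) // what nebula writes toward the OS...
out := tt.Get(true)          // ...is what the harness captures
_ = out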
//********************************************************************************************************************//
// Below this is boilerplate implementation to make nebula actually work
//********************************************************************************************************************//
func (t *TestTun) RouteFor(ip iputil.VpnIp) iputil.VpnIp {
r := t.routeTree.MostSpecificContains(ip)
if r != nil {
return r.(iputil.VpnIp)
}
return 0
}
func (t *TestTun) Activate() error {
return nil
}
func (t *TestTun) Cidr() *net.IPNet {
return t.cidr
}
func (t *TestTun) Name() string {
return t.Device
}
func (t *TestTun) Write(b []byte) (n int, err error) {
packet := make([]byte, len(b), len(b))
copy(packet, b)
t.TxPackets <- packet
return len(b), nil
}
func (t *TestTun) Close() error {
close(t.rxPackets)
return nil
}
func (t *TestTun) Read(b []byte) (int, error) {
p, ok := <-t.rxPackets
if !ok {
return 0, os.ErrClosed
}
copy(b, p)
return len(p), nil
}
func (t *TestTun) NewMultiQueueReader() (io.ReadWriteCloser, error) {
return nil, fmt.Errorf("TODO: multiqueue not implemented")
}

View File

@@ -0,0 +1,126 @@
package overlay
import (
"fmt"
"io"
"net"
"os/exec"
"strconv"
"github.com/sirupsen/logrus"
"github.com/slackhq/nebula/cidr"
"github.com/slackhq/nebula/iputil"
"github.com/songgao/water"
)
type waterTun struct {
Device string
cidr *net.IPNet
MTU int
Routes []Route
routeTree *cidr.Tree4
*water.Interface
}
func newWaterTun(l *logrus.Logger, cidr *net.IPNet, defaultMTU int, routes []Route) (*waterTun, error) {
routeTree, err := makeRouteTree(l, routes, false)
if err != nil {
return nil, err
}
// NOTE: You cannot set the deviceName under Windows, so you must check tun.Device after calling .Activate()
return &waterTun{
cidr: cidr,
MTU: defaultMTU,
Routes: routes,
routeTree: routeTree,
}, nil
}
func (t *waterTun) Activate() error {
var err error
t.Interface, err = water.New(water.Config{
DeviceType: water.TUN,
PlatformSpecificParams: water.PlatformSpecificParams{
ComponentID: "tap0901",
Network: t.cidr.String(),
},
})
if err != nil {
return fmt.Errorf("activate failed: %v", err)
}
t.Device = t.Interface.Name()
// TODO use syscalls instead of exec.Command
err = exec.Command(
`C:\Windows\System32\netsh.exe`, "interface", "ipv4", "set", "address",
fmt.Sprintf("name=%s", t.Device),
"source=static",
fmt.Sprintf("addr=%s", t.cidr.IP),
fmt.Sprintf("mask=%s", net.IP(t.cidr.Mask)),
"gateway=none",
).Run()
if err != nil {
return fmt.Errorf("failed to run 'netsh' to set address: %s", err)
}
err = exec.Command(
`C:\Windows\System32\netsh.exe`, "interface", "ipv4", "set", "interface",
t.Device,
fmt.Sprintf("mtu=%d", t.MTU),
).Run()
if err != nil {
return fmt.Errorf("failed to run 'netsh' to set MTU: %s", err)
}
iface, err := net.InterfaceByName(t.Device)
if err != nil {
return fmt.Errorf("failed to find interface named %s: %v", t.Device, err)
}
for _, r := range t.Routes {
if r.Via == nil {
// We don't allow route MTUs so only install routes with a via
continue
}
err = exec.Command(
"C:\\Windows\\System32\\route.exe", "add", r.Cidr.String(), r.Via.String(), "IF", strconv.Itoa(iface.Index), "METRIC", strconv.Itoa(r.Metric),
).Run()
if err != nil {
return fmt.Errorf("failed to add the unsafe_route %s: %v", r.Cidr.String(), err)
}
}
return nil
}
func (t *waterTun) RouteFor(ip iputil.VpnIp) iputil.VpnIp {
r := t.routeTree.MostSpecificContains(ip)
if r != nil {
return r.(iputil.VpnIp)
}
return 0
}
func (t *waterTun) Cidr() *net.IPNet {
return t.cidr
}
func (t *waterTun) Name() string {
return t.Device
}
func (t *waterTun) Close() error {
if t.Interface == nil {
return nil
}
return t.Interface.Close()
}
func (t *waterTun) NewMultiQueueReader() (io.ReadWriteCloser, error) {
return nil, fmt.Errorf("TODO: multiqueue not implemented for windows")
}

overlay/tun_windows.go (new file, 58 lines)

@@ -0,0 +1,58 @@
//go:build !e2e_testing
// +build !e2e_testing
package overlay
import (
"fmt"
"net"
"os"
"path/filepath"
"runtime"
"syscall"
"github.com/sirupsen/logrus"
)
func newTunFromFd(_ *logrus.Logger, _ int, _ *net.IPNet, _ int, _ []Route, _ int) (Device, error) {
return nil, fmt.Errorf("newTunFromFd not supported in Windows")
}
func newTun(l *logrus.Logger, deviceName string, cidr *net.IPNet, defaultMTU int, routes []Route, _ int, _ bool) (Device, error) {
useWintun := true
if err := checkWinTunExists(); err != nil {
l.WithError(err).Warn("Check Wintun driver failed, falling back to wintap driver")
useWintun = false
}
if useWintun {
device, err := newWinTun(l, deviceName, cidr, defaultMTU, routes)
if err != nil {
return nil, fmt.Errorf("create Wintun interface failed, %w", err)
}
return device, nil
}
device, err := newWaterTun(l, cidr, defaultMTU, routes)
if err != nil {
return nil, fmt.Errorf("create wintap driver failed, %w", err)
}
return device, nil
}
func checkWinTunExists() error {
myPath, err := os.Executable()
if err != nil {
return err
}
arch := runtime.GOARCH
switch arch {
case "386":
//NOTE: wintun bundles 386 as x86
arch = "x86"
}
_, err = syscall.LoadDLL(filepath.Join(filepath.Dir(myPath), "dist", "windows", "wintun", "bin", arch, "wintun.dll"))
return err
}

View File

@@ -0,0 +1,183 @@
package overlay
import (
"crypto"
"fmt"
"io"
"net"
"net/netip"
"unsafe"
"github.com/sirupsen/logrus"
"github.com/slackhq/nebula/cidr"
"github.com/slackhq/nebula/iputil"
"github.com/slackhq/nebula/wintun"
"golang.org/x/sys/windows"
"golang.zx2c4.com/wireguard/windows/tunnel/winipcfg"
)
const tunGUIDLabel = "Fixed Nebula Windows GUID v1"
type winTun struct {
Device string
cidr *net.IPNet
prefix netip.Prefix
MTU int
Routes []Route
routeTree *cidr.Tree4
tun *wintun.NativeTun
}
func generateGUIDByDeviceName(name string) (*windows.GUID, error) {
// GUID is 128 bit
hash := crypto.MD5.New()
_, err := hash.Write([]byte(tunGUIDLabel))
if err != nil {
return nil, err
}
_, err = hash.Write([]byte(name))
if err != nil {
return nil, err
}
sum := hash.Sum(nil)
return (*windows.GUID)(unsafe.Pointer(&sum[0])), nil
}
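A short note on the design: hashing a fixed label together with the device name yields a stable GUID, so the same Wintun adapter is reused across restarts rather than accumulating new interfaces. Illustrative only:

a, _ := generateGUIDByDeviceName("nebula")
b, _ := generateGUIDByDeviceName("nebula")
fmt.Println(*a == *b) // true: the same name always produces the same GUID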
func newWinTun(l *logrus.Logger, deviceName string, cidr *net.IPNet, defaultMTU int, routes []Route) (*winTun, error) {
guid, err := generateGUIDByDeviceName(deviceName)
if err != nil {
return nil, fmt.Errorf("generate GUID failed: %w", err)
}
tunDevice, err := wintun.CreateTUNWithRequestedGUID(deviceName, guid, defaultMTU)
if err != nil {
return nil, fmt.Errorf("create TUN device failed: %w", err)
}
routeTree, err := makeRouteTree(l, routes, false)
if err != nil {
return nil, err
}
prefix, err := iputil.ToNetIpPrefix(*cidr)
if err != nil {
return nil, err
}
return &winTun{
Device: deviceName,
cidr: cidr,
prefix: prefix,
MTU: defaultMTU,
Routes: routes,
routeTree: routeTree,
tun: tunDevice.(*wintun.NativeTun),
}, nil
}
func (t *winTun) Activate() error {
luid := winipcfg.LUID(t.tun.LUID())
if err := luid.SetIPAddresses([]netip.Prefix{t.prefix}); err != nil {
return fmt.Errorf("failed to set address: %w", err)
}
foundDefault4 := false
routes := make([]*winipcfg.RouteData, 0, len(t.Routes)+1)
for _, r := range t.Routes {
if r.Via == nil {
// We don't allow route MTUs so only install routes with a via
continue
}
if !foundDefault4 {
if ones, bits := r.Cidr.Mask.Size(); ones == 0 && bits != 0 {
foundDefault4 = true
}
}
prefix, err := iputil.ToNetIpPrefix(*r.Cidr)
if err != nil {
return err
}
// Add our unsafe route
routes = append(routes, &winipcfg.RouteData{
Destination: prefix,
NextHop: r.Via.ToNetIpAddr(),
Metric: uint32(r.Metric),
})
}
if err := luid.AddRoutes(routes); err != nil {
return fmt.Errorf("failed to add routes: %w", err)
}
ipif, err := luid.IPInterface(windows.AF_INET)
if err != nil {
return fmt.Errorf("failed to get ip interface: %w", err)
}
ipif.NLMTU = uint32(t.MTU)
if foundDefault4 {
ipif.UseAutomaticMetric = false
ipif.Metric = 0
}
if err := ipif.Set(); err != nil {
return fmt.Errorf("failed to set ip interface: %w", err)
}
return nil
}
func (t *winTun) RouteFor(ip iputil.VpnIp) iputil.VpnIp {
r := t.routeTree.MostSpecificContains(ip)
if r != nil {
return r.(iputil.VpnIp)
}
return 0
}
func (t *winTun) Cidr() *net.IPNet {
return t.cidr
}
func (t *winTun) Name() string {
return t.Device
}
func (t *winTun) Read(b []byte) (int, error) {
return t.tun.Read(b, 0)
}
func (t *winTun) Write(b []byte) (int, error) {
return t.tun.Write(b, 0)
}
func (t *winTun) NewMultiQueueReader() (io.ReadWriteCloser, error) {
return nil, fmt.Errorf("TODO: multiqueue not implemented for windows")
}
func (t *winTun) Close() error {
// It seems that the Windows networking stack doesn't like it when we destroy interfaces that have active routes,
// so to be certain, just remove everything before destroying.
luid := winipcfg.LUID(t.tun.LUID())
_ = luid.FlushRoutes(windows.AF_INET)
_ = luid.FlushIPAddresses(windows.AF_INET)
/* We don't support IPV6 yet
_ = luid.FlushRoutes(windows.AF_INET6)
_ = luid.FlushIPAddresses(windows.AF_INET6)
*/
_ = luid.FlushDNS(windows.AF_INET)
return t.tun.Close()
}

View File

@@ -1,34 +1,89 @@
package nebula
import (
"sync/atomic"
"time"
"github.com/sirupsen/logrus"
"github.com/slackhq/nebula/config"
)
type Punchy struct {
Punch bool
Respond bool
Delay time.Duration
atomicPunch int32
atomicRespond int32
atomicDelay time.Duration
l *logrus.Logger
}
func NewPunchyFromConfig(c *config.C) *Punchy {
p := &Punchy{}
func NewPunchyFromConfig(l *logrus.Logger, c *config.C) *Punchy {
p := &Punchy{l: l}
if c.IsSet("punchy.punch") {
p.Punch = c.GetBool("punchy.punch", false)
} else {
// Deprecated fallback
p.Punch = c.GetBool("punchy", false)
}
p.reload(c, true)
c.RegisterReloadCallback(func(c *config.C) {
p.reload(c, false)
})
if c.IsSet("punchy.respond") {
p.Respond = c.GetBool("punchy.respond", false)
} else {
// Deprecated fallback
p.Respond = c.GetBool("punch_back", false)
}
p.Delay = c.GetDuration("punchy.delay", time.Second)
return p
}
func (p *Punchy) reload(c *config.C, initial bool) {
if initial {
var yes bool
if c.IsSet("punchy.punch") {
yes = c.GetBool("punchy.punch", false)
} else {
// Deprecated fallback
yes = c.GetBool("punchy", false)
}
if yes {
atomic.StoreInt32(&p.atomicPunch, 1)
} else {
atomic.StoreInt32(&p.atomicPunch, 0)
}
} else if c.HasChanged("punchy.punch") || c.HasChanged("punchy") {
//TODO: it should be relatively easy to support this, just need to be able to cancel the goroutine and boot it up from here
p.l.Warn("Changing punchy.punch with reload is not supported, ignoring.")
}
if initial || c.HasChanged("punchy.respond") || c.HasChanged("punch_back") {
var yes bool
if c.IsSet("punchy.respond") {
yes = c.GetBool("punchy.respond", false)
} else {
// Deprecated fallback
yes = c.GetBool("punch_back", false)
}
if yes {
atomic.StoreInt32(&p.atomicRespond, 1)
} else {
atomic.StoreInt32(&p.atomicRespond, 0)
}
if !initial {
p.l.Infof("punchy.respond changed to %v", p.GetRespond())
}
}
//NOTE: this will not apply to any in progress operations, only the next one
if initial || c.HasChanged("punchy.delay") {
atomic.StoreInt64((*int64)(&p.atomicDelay), (int64)(c.GetDuration("punchy.delay", time.Second)))
if !initial {
p.l.Infof("punchy.delay changed to %s", p.GetDelay())
}
}
}
func (p *Punchy) GetPunch() bool {
return atomic.LoadInt32(&p.atomicPunch) == 1
}
func (p *Punchy) GetRespond() bool {
return atomic.LoadInt32(&p.atomicRespond) == 1
}
func (p *Punchy) GetDelay() time.Duration {
return (time.Duration)(atomic.LoadInt64((*int64)(&p.atomicDelay)))
}
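The atomic getters exist so hot paths can observe reloaded values without locking. A hedged sketch of a consumer (the actual punching goroutine is not in this file):

// p is a *Punchy from NewPunchyFromConfig(l, c)
if p.GetPunch() {
    time.Sleep(p.GetDelay())
    // fire hole-punch packets toward the remote's known addresses
}
if p.GetRespond() {
    // punch back when the other side asked us to respond
}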

View File

@@ -5,43 +5,67 @@ import (
"time"
"github.com/slackhq/nebula/config"
"github.com/slackhq/nebula/util"
"github.com/slackhq/nebula/test"
"github.com/stretchr/testify/assert"
)
func TestNewPunchyFromConfig(t *testing.T) {
l := util.NewTestLogger()
l := test.NewLogger()
c := config.NewC(l)
// Test defaults
p := NewPunchyFromConfig(c)
assert.Equal(t, false, p.Punch)
assert.Equal(t, false, p.Respond)
assert.Equal(t, time.Second, p.Delay)
p := NewPunchyFromConfig(l, c)
assert.Equal(t, false, p.GetPunch())
assert.Equal(t, false, p.GetRespond())
assert.Equal(t, time.Second, p.GetDelay())
// punchy deprecation
c.Settings["punchy"] = true
p = NewPunchyFromConfig(c)
assert.Equal(t, true, p.Punch)
p = NewPunchyFromConfig(l, c)
assert.Equal(t, true, p.GetPunch())
// punchy.punch
c.Settings["punchy"] = map[interface{}]interface{}{"punch": true}
p = NewPunchyFromConfig(c)
assert.Equal(t, true, p.Punch)
p = NewPunchyFromConfig(l, c)
assert.Equal(t, true, p.GetPunch())
// punch_back deprecation
c.Settings["punch_back"] = true
p = NewPunchyFromConfig(c)
assert.Equal(t, true, p.Respond)
p = NewPunchyFromConfig(l, c)
assert.Equal(t, true, p.GetRespond())
// punchy.respond
c.Settings["punchy"] = map[interface{}]interface{}{"respond": true}
c.Settings["punch_back"] = false
p = NewPunchyFromConfig(c)
assert.Equal(t, true, p.Respond)
p = NewPunchyFromConfig(l, c)
assert.Equal(t, true, p.GetRespond())
// punchy.delay
c.Settings["punchy"] = map[interface{}]interface{}{"delay": "1m"}
p = NewPunchyFromConfig(c)
assert.Equal(t, time.Minute, p.Delay)
p = NewPunchyFromConfig(l, c)
assert.Equal(t, time.Minute, p.GetDelay())
}
func TestPunchy_reload(t *testing.T) {
l := test.NewLogger()
c := config.NewC(l)
delay, _ := time.ParseDuration("1m")
assert.NoError(t, c.LoadString(`
punchy:
delay: 1m
respond: false
`))
p := NewPunchyFromConfig(l, c)
assert.Equal(t, delay, p.GetDelay())
assert.Equal(t, false, p.GetRespond())
newDelay, _ := time.ParseDuration("10m")
assert.NoError(t, c.ReloadConfigString(`
punchy:
delay: 10m
respond: true
`))
p.reload(c, false)
assert.Equal(t, newDelay, p.GetDelay())
assert.Equal(t, true, p.GetRespond())
}

relay_manager.go (new file, 315 lines)

@@ -0,0 +1,315 @@
package nebula
import (
"context"
"errors"
"fmt"
"sync/atomic"
"github.com/sirupsen/logrus"
"github.com/slackhq/nebula/config"
"github.com/slackhq/nebula/header"
"github.com/slackhq/nebula/iputil"
)
type relayManager struct {
l *logrus.Logger
hostmap *HostMap
atomicAmRelay int32
}
func NewRelayManager(ctx context.Context, l *logrus.Logger, hostmap *HostMap, c *config.C) *relayManager {
rm := &relayManager{
l: l,
hostmap: hostmap,
}
rm.reload(c, true)
c.RegisterReloadCallback(func(c *config.C) {
err := rm.reload(c, false)
if err != nil {
l.WithError(err).Error("Failed to reload relay_manager")
}
})
return rm
}
func (rm *relayManager) reload(c *config.C, initial bool) error {
if initial || c.HasChanged("relay.am_relay") {
rm.setAmRelay(c.GetBool("relay.am_relay", false))
}
return nil
}
func (rm *relayManager) GetAmRelay() bool {
return atomic.LoadInt32(&rm.atomicAmRelay) == 1
}
func (rm *relayManager) setAmRelay(v bool) {
var val int32
switch v {
case true:
val = 1
case false:
val = 0
}
atomic.StoreInt32(&rm.atomicAmRelay, val)
}
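A sketch of the reload behavior for relay.am_relay, mirroring the pattern in the punchy test above; l and hostMap are assumed to be an existing logger and *HostMap:

c := config.NewC(l)
_ = c.LoadString(`
relay:
  am_relay: false
`)
rm := NewRelayManager(context.Background(), l, hostMap, c)
// rm.GetAmRelay() == false

_ = c.ReloadConfigString(`
relay:
  am_relay: true
`)
_ = rm.reload(c, false) // in production this runs through the registered reload callback
// rm.GetAmRelay() == true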
// AddRelay finds an available relay index on the hostmap, and associates the relay info with it.
// relayHostInfo is the Nebula peer which can be used as a relay to access the target vpnIp.
func AddRelay(l *logrus.Logger, relayHostInfo *HostInfo, hm *HostMap, vpnIp iputil.VpnIp, remoteIdx *uint32, relayType int, state int) (uint32, error) {
hm.Lock()
defer hm.Unlock()
for i := 0; i < 32; i++ {
index, err := generateIndex(l)
if err != nil {
return 0, err
}
_, inRelays := hm.Relays[index]
if !inRelays {
hm.Relays[index] = relayHostInfo
newRelay := Relay{
Type: relayType,
State: state,
LocalIndex: index,
PeerIp: vpnIp,
}
if remoteIdx != nil {
newRelay.RemoteIndex = *remoteIdx
}
relayHostInfo.relayState.InsertRelay(vpnIp, index, &newRelay)
return index, nil
}
}
return 0, errors.New("failed to generate unique localIndexId")
}
// EstablishRelay updates a Requested Relay to become an Established Relay, which can pass traffic.
func (rm *relayManager) EstablishRelay(relayHostInfo *HostInfo, m *NebulaControl) (*Relay, error) {
relay, ok := relayHostInfo.relayState.QueryRelayForByIdx(m.InitiatorRelayIndex)
if !ok {
rm.l.WithFields(logrus.Fields{"relayHostInfo": relayHostInfo.vpnIp,
"initiatorRelayIndex": m.InitiatorRelayIndex,
"relayFrom": m.RelayFromIp,
"relayTo": m.RelayToIp}).Info("relayManager EstablishRelay relayForByIdx not found")
return nil, fmt.Errorf("unknown relay")
}
// relay deserves some synchronization
relay.RemoteIndex = m.ResponderRelayIndex
relay.State = Established
return relay, nil
}
func (rm *relayManager) HandleControlMsg(h *HostInfo, m *NebulaControl, f *Interface) {
switch m.Type {
case NebulaControl_CreateRelayRequest:
rm.handleCreateRelayRequest(h, f, m)
case NebulaControl_CreateRelayResponse:
rm.handleCreateRelayResponse(h, f, m)
}
}
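For orientation, a rough sketch of the message flow the two handlers below implement; the initiator's first CreateRelayRequest is sent from the handshake path, which is not part of this file:

// initiator --CreateRelayRequest-->  relay      (handleCreateRelayRequest on the relay)
// relay     --CreateRelayRequest-->  target     (only when relay.am_relay is enabled on the relay)
// target    --CreateRelayResponse--> relay      (terminal leg established)
// relay     --CreateRelayResponse--> initiator  (forwarding leg established, traffic can flow)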
func (rm *relayManager) handleCreateRelayResponse(h *HostInfo, f *Interface, m *NebulaControl) {
rm.l.WithFields(logrus.Fields{
"relayFrom": iputil.VpnIp(m.RelayFromIp),
"relayTarget": iputil.VpnIp(m.RelayToIp),
"initiatorIdx": m.InitiatorRelayIndex,
"responderIdx": m.ResponderRelayIndex,
"hostInfo": h.vpnIp}).
Info("handleCreateRelayResponse")
target := iputil.VpnIp(m.RelayToIp)
relay, err := rm.EstablishRelay(h, m)
if err != nil {
rm.l.WithError(err).WithField("target", target.String()).Error("Failed to update relay for target")
return
}
// Do I need to complete the relays now?
if relay.Type == TerminalType {
return
}
// I'm the middle man. Let the initiator know that I've established the relay they requested.
peerHostInfo, err := rm.hostmap.QueryVpnIp(relay.PeerIp)
if err != nil {
rm.l.WithError(err).WithField("relayPeerIp", relay.PeerIp).Error("Can't find a HostInfo for peer IP")
return
}
peerRelay, ok := peerHostInfo.relayState.QueryRelayForByIp(target)
if !ok {
rm.l.WithField("peerIp", peerHostInfo.vpnIp).WithField("target", target.String()).Error("peerRelay does not have Relay state for target IP", peerHostInfo.vpnIp.String(), target.String())
return
}
peerRelay.State = Established
resp := NebulaControl{
Type: NebulaControl_CreateRelayResponse,
ResponderRelayIndex: peerRelay.LocalIndex,
InitiatorRelayIndex: peerRelay.RemoteIndex,
RelayFromIp: uint32(peerHostInfo.vpnIp),
RelayToIp: uint32(target),
}
msg, err := resp.Marshal()
if err != nil {
rm.l.
WithError(err).Error("relayManager Failed to marhsal Control CreateRelayResponse message to create relay")
} else {
f.SendMessageToVpnIp(header.Control, 0, peerHostInfo.vpnIp, msg, make([]byte, 12), make([]byte, mtu))
}
}
func (rm *relayManager) handleCreateRelayRequest(h *HostInfo, f *Interface, m *NebulaControl) {
rm.l.WithFields(logrus.Fields{
"relayFrom": iputil.VpnIp(m.RelayFromIp),
"relayTarget": iputil.VpnIp(m.RelayToIp),
"initiatorIdx": m.InitiatorRelayIndex,
"hostInfo": h.vpnIp}).
Info("handleCreateRelayRequest")
from := iputil.VpnIp(m.RelayFromIp)
target := iputil.VpnIp(m.RelayToIp)
// Is the target of the relay me?
if target == f.myVpnIp {
existingRelay, ok := h.relayState.QueryRelayForByIp(from)
addRelay := !ok
if ok {
// Clean up existing relay, if this is a new request.
if existingRelay.RemoteIndex != m.InitiatorRelayIndex {
// We got a brand new Relay request, because its index is different than what we saw before.
// Clean up the existing Relay state, and get ready to record new Relay state.
rm.hostmap.RemoveRelay(existingRelay.LocalIndex)
addRelay = true
}
}
if addRelay {
_, err := AddRelay(rm.l, h, f.hostMap, from, &m.InitiatorRelayIndex, TerminalType, Established)
if err != nil {
return
}
}
relay, ok := h.relayState.QueryRelayForByIp(from)
if ok && m.InitiatorRelayIndex != relay.RemoteIndex {
// Do something, Something happened.
}
resp := NebulaControl{
Type: NebulaControl_CreateRelayResponse,
ResponderRelayIndex: relay.LocalIndex,
InitiatorRelayIndex: relay.RemoteIndex,
RelayFromIp: uint32(from),
RelayToIp: uint32(target),
}
msg, err := resp.Marshal()
if err != nil {
rm.l.
WithError(err).Error("relayManager Failed to marshal Control CreateRelayResponse message to create relay")
} else {
f.SendMessageToVpnIp(header.Control, 0, h.vpnIp, msg, make([]byte, 12), make([]byte, mtu))
}
return
} else {
// the target is not me. Create a relay to the target, from me.
if rm.GetAmRelay() == false {
return
}
peer, err := rm.hostmap.QueryVpnIp(target)
if err != nil {
// Try to establish a connection to this host. If we get a future relay request,
// we'll be ready!
f.getOrHandshake(target)
return
}
if peer.remote == nil {
// Only create relays to peers for whom I have a direct connection
return
}
sendCreateRequest := false
var index uint32
targetRelay, ok := peer.relayState.QueryRelayForByIp(from)
if ok {
index = targetRelay.LocalIndex
if targetRelay.State == Requested {
sendCreateRequest = true
}
} else {
// Allocate an index in the hostMap for this relay peer
index, err = AddRelay(rm.l, peer, f.hostMap, from, nil, ForwardingType, Requested)
if err != nil {
return
}
sendCreateRequest = true
}
if sendCreateRequest {
// Send a CreateRelayRequest to the peer.
req := NebulaControl{
Type: NebulaControl_CreateRelayRequest,
InitiatorRelayIndex: index,
RelayFromIp: uint32(h.vpnIp),
RelayToIp: uint32(target),
}
msg, err := req.Marshal()
if err != nil {
rm.l.
WithError(err).Error("relayManager Failed to marshal Control message to create relay")
} else {
f.SendMessageToVpnIp(header.Control, 0, target, msg, make([]byte, 12), make([]byte, mtu))
}
}
// Also track the half-created Relay state just received
relay, ok := h.relayState.QueryRelayForByIp(target)
if !ok {
// Add the relay
state := Requested
if targetRelay != nil && targetRelay.State == Established {
state = Established
}
_, err := AddRelay(rm.l, h, f.hostMap, target, &m.InitiatorRelayIndex, ForwardingType, state)
if err != nil {
rm.l.
WithError(err).Error("relayManager Failed to allocate a local index for relay")
return
}
} else {
if relay.RemoteIndex != m.InitiatorRelayIndex {
// This is a stale Relay entry for the same tunnel targets.
// Clean up the existing stuff.
rm.RemoveRelay(relay.LocalIndex)
// Add the new relay
_, err := AddRelay(rm.l, h, f.hostMap, target, &m.InitiatorRelayIndex, ForwardingType, Requested)
if err != nil {
return
}
relay, _ = h.relayState.QueryRelayForByIp(target)
}
switch relay.State {
case Established:
resp := NebulaControl{
Type: NebulaControl_CreateRelayResponse,
ResponderRelayIndex: relay.LocalIndex,
InitiatorRelayIndex: relay.RemoteIndex,
RelayFromIp: uint32(h.vpnIp),
RelayToIp: uint32(target),
}
msg, err := resp.Marshal()
if err != nil {
rm.l.
WithError(err).Error("relayManager Failed to marshal Control CreateRelayResponse message to create relay")
} else {
f.SendMessageToVpnIp(header.Control, 0, h.vpnIp, msg, make([]byte, 12), make([]byte, mtu))
}
case Requested:
// Keep waiting for the other relay to complete
}
}
}
}
func (rm *relayManager) RemoveRelay(localIdx uint32) {
rm.hostmap.RemoveRelay(localIdx)
}
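For reference, the CreateRelayRequest control message that relayManager sends above can be built and serialized in isolation. A minimal sketch, assuming the generated NebulaControl protobuf type in the repo's root package is importable as shown; the indices and uint32 VPN IPs are illustrative placeholders (in the real code they come from the hostmap and iputil.VpnIp conversions):

package main

import (
	"fmt"

	"github.com/slackhq/nebula" // assumed import path for the generated NebulaControl type
)

func main() {
	req := nebula.NebulaControl{
		Type:                nebula.NebulaControl_CreateRelayRequest,
		InitiatorRelayIndex: 1234,       // illustrative local index allocated for this relay
		RelayFromIp:         0x0a000001, // 10.0.0.1 encoded as a uint32 VPN IP
		RelayToIp:           0x0a000002, // 10.0.0.2 encoded as a uint32 VPN IP
	}

	msg, err := req.Marshal() // same gogo-protobuf Marshal call used in the diff
	if err != nil {
		panic(err)
	}
	fmt.Printf("control payload is %d bytes\n", len(msg))
	// relayManager then ships this payload with f.SendMessageToVpnIp(header.Control, 0, target, msg, ...).
}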

View File

@@ -26,6 +26,7 @@ type CacheMap map[string]*Cache
type Cache struct {
Learned []*udp.Addr `json:"learned,omitempty"`
Reported []*udp.Addr `json:"reported,omitempty"`
Relay []*net.IP `json:"relay"`
}
//TODO: Seems like we should plop static host entries in here too since they are protected by the lighthouse from deletion
@@ -33,8 +34,13 @@ type Cache struct {
// cache is an internal struct that splits v4 and v6 addresses inside the cache map
type cache struct {
v4 *cacheV4
v6 *cacheV6
v4 *cacheV4
v6 *cacheV6
relay *cacheRelay
}
type cacheRelay struct {
relay []uint32
}
// cacheV4 stores learned and reported ipv4 records under cache
@@ -58,6 +64,9 @@ type RemoteList struct {
// A deduplicated set of addresses. Any accessor should lock beforehand.
addrs []*udp.Addr
// A set of relay addresses. VpnIp addresses that the remote identified as relays.
relays []*iputil.VpnIp
// These are maps to store v4 and v6 addresses per lighthouse
// Map key is the vpnIp of the peer that told us about the cached entries underneath.
// For learned addresses, this is the vpnIp that sent the packet
@@ -74,8 +83,9 @@ type RemoteList struct {
// NewRemoteList creates a new empty RemoteList
func NewRemoteList() *RemoteList {
return &RemoteList{
addrs: make([]*udp.Addr, 0),
cache: make(map[iputil.VpnIp]*cache),
addrs: make([]*udp.Addr, 0),
relays: make([]*iputil.VpnIp, 0),
cache: make(map[iputil.VpnIp]*cache),
}
}
@@ -144,6 +154,7 @@ func (r *RemoteList) CopyCache() *CacheMap {
c = &Cache{
Learned: make([]*udp.Addr, 0),
Reported: make([]*udp.Addr, 0),
Relay: make([]*net.IP, 0),
}
cm[vpnIp] = c
}
@@ -172,6 +183,13 @@ func (r *RemoteList) CopyCache() *CacheMap {
c.Reported = append(c.Reported, NewUDPAddrFromLH6(a))
}
}
if mc.relay != nil {
for _, a := range mc.relay.relay {
nip := iputil.VpnIp(a).ToIP()
c.Relay = append(c.Relay, &nip)
}
}
}
return &cm
@@ -179,6 +197,10 @@ func (r *RemoteList) CopyCache() *CacheMap {
// BlockRemote locks and records the address as bad, it will be excluded from the deduplicated address list
func (r *RemoteList) BlockRemote(bad *udp.Addr) {
if bad == nil {
// relays can have nil udp Addrs
return
}
r.Lock()
defer r.Unlock()
@@ -264,6 +286,17 @@ func (r *RemoteList) unlockedSetV4(ownerVpnIp iputil.VpnIp, vpnIp iputil.VpnIp,
}
}
func (r *RemoteList) unlockedSetRelay(ownerVpnIp iputil.VpnIp, vpnIp iputil.VpnIp, to []uint32) {
r.shouldRebuild = true
c := r.unlockedGetOrMakeRelay(ownerVpnIp)
// Reset the slice
c.relay = c.relay[:0]
// We can't take ownership of the caller's slice, so copy the values (capped at MaxRemotes)
c.relay = append(c.relay, to[:minInt(len(to), MaxRemotes)]...)
}
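unlockedSetRelay reuses the existing backing array by truncating to length zero and appending a capped copy of the caller's values. A standalone sketch of the same pattern, with a local maxRemotes constant standing in for the repo's MaxRemotes and minInt helpers:

package main

import "fmt"

const maxRemotes = 10 // stands in for MaxRemotes

func setCapped(dst, src []uint32) []uint32 {
	dst = dst[:0] // keep the backing array, drop the old contents
	n := len(src)
	if n > maxRemotes {
		n = maxRemotes
	}
	return append(dst, src[:n]...) // copy values, never alias the caller's slice
}

func main() {
	cache := make([]uint32, 0, maxRemotes)
	cache = setCapped(cache, []uint32{1, 2, 3})
	fmt.Println(cache) // [1 2 3]
}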
// unlockedPrependV4 assumes you have the write lock and prepends the address in the reported list for this owner
// This is only useful for establishing static hosts
func (r *RemoteList) unlockedPrependV4(ownerVpnIp iputil.VpnIp, to *Ip4AndPort) {
@@ -314,6 +347,19 @@ func (r *RemoteList) unlockedPrependV6(ownerVpnIp iputil.VpnIp, to *Ip6AndPort)
}
}
func (r *RemoteList) unlockedGetOrMakeRelay(ownerVpnIp iputil.VpnIp) *cacheRelay {
am := r.cache[ownerVpnIp]
if am == nil {
am = &cache{}
r.cache[ownerVpnIp] = am
}
// Avoid occupying memory for relay if we never have any
if am.relay == nil {
am.relay = &cacheRelay{}
}
return am.relay
}
// unlockedGetOrMakeV4 assumes you have the write lock and builds the cache and owner entry. Only the v4 pointer is established.
// The caller must dirty the learned address cache if required
func (r *RemoteList) unlockedGetOrMakeV4(ownerVpnIp iputil.VpnIp) *cacheV4 {
@@ -348,6 +394,7 @@ func (r *RemoteList) unlockedGetOrMakeV6(ownerVpnIp iputil.VpnIp) *cacheV6 {
// The result of this function can contain duplicates. unlockedSort handles cleaning it.
func (r *RemoteList) unlockedCollect() {
addrs := r.addrs[:0]
relays := r.relays[:0]
for _, c := range r.cache {
if c.v4 != nil {
@@ -381,9 +428,18 @@ func (r *RemoteList) unlockedCollect() {
}
}
}
if c.relay != nil {
for _, v := range c.relay.relay {
ip := iputil.VpnIp(v)
relays = append(relays, &ip)
}
}
}
r.addrs = addrs
r.relays = relays
}
// unlockedSort assumes you have the write lock and performs the deduping and sorting of the address list

ssh.go
View File

@@ -12,7 +12,6 @@ import (
"runtime/pprof"
"sort"
"strings"
"syscall"
"github.com/sirupsen/logrus"
"github.com/slackhq/nebula/config"
@@ -166,7 +165,7 @@ func configSSH(l *logrus.Logger, ssh *sshd.SSHServer, c *config.C) (func(), erro
return runner, nil
}
func attachCommands(l *logrus.Logger, ssh *sshd.SSHServer, hostMap *HostMap, pendingHostMap *HostMap, lightHouse *LightHouse, ifce *Interface) {
func attachCommands(l *logrus.Logger, c *config.C, ssh *sshd.SSHServer, hostMap *HostMap, pendingHostMap *HostMap, lightHouse *LightHouse, ifce *Interface) {
ssh.RegisterCommand(&sshd.Command{
Name: "list-hostmap",
ShortDescription: "List all known previously connected hosts",
@@ -215,7 +214,9 @@ func attachCommands(l *logrus.Logger, ssh *sshd.SSHServer, hostMap *HostMap, pen
ssh.RegisterCommand(&sshd.Command{
Name: "reload",
ShortDescription: "Reloads configuration from disk, same as sending HUP to the process",
Callback: sshReload,
Callback: func(fs interface{}, a []string, w sshd.StringWriter) error {
return sshReload(c, w)
},
})
ssh.RegisterCommand(&sshd.Command{
@@ -293,6 +294,20 @@ func attachCommands(l *logrus.Logger, ssh *sshd.SSHServer, hostMap *HostMap, pen
},
})
ssh.RegisterCommand(&sshd.Command{
Name: "print-relays",
ShortDescription: "Prints json details about all relay info",
Flags: func() (*flag.FlagSet, interface{}) {
fl := flag.NewFlagSet("", flag.ContinueOnError)
s := sshPrintTunnelFlags{}
fl.BoolVar(&s.Pretty, "pretty", false, "pretty prints json")
return fl, &s
},
Callback: func(fs interface{}, a []string, w sshd.StringWriter) error {
return sshPrintRelays(ifce, fs, a, w)
},
})
ssh.RegisterCommand(&sshd.Command{
Name: "change-remote",
ShortDescription: "Changes the remote address used in the tunnel for the provided vpn ip",
@@ -519,14 +534,13 @@ func sshCloseTunnel(ifce *Interface, fs interface{}, a []string, w sshd.StringWr
0,
hostInfo.ConnectionState,
hostInfo,
hostInfo.remote,
[]byte{},
make([]byte, 12, 12),
make([]byte, mtu),
)
}
ifce.closeTunnel(hostInfo, false)
ifce.closeTunnel(hostInfo)
return w.WriteLine("Closed")
}
@@ -730,6 +744,104 @@ func sshPrintCert(ifce *Interface, fs interface{}, a []string, w sshd.StringWrit
return w.WriteLine(cert.String())
}
func sshPrintRelays(ifce *Interface, fs interface{}, a []string, w sshd.StringWriter) error {
args, ok := fs.(*sshPrintTunnelFlags)
if !ok {
//TODO: error
w.WriteLine("sshPrintRelays failed to convert args type")
return nil
}
relays := map[uint32]*HostInfo{}
ifce.hostMap.Lock()
for k, v := range ifce.hostMap.Relays {
relays[k] = v
}
ifce.hostMap.Unlock()
type RelayFor struct {
Error error
Type string
State string
PeerIp iputil.VpnIp
LocalIndex uint32
RemoteIndex uint32
RelayedThrough []iputil.VpnIp
}
type RelayOutput struct {
NebulaIp iputil.VpnIp
RelayForIps []RelayFor
}
type CmdOutput struct {
Relays []*RelayOutput
}
co := CmdOutput{}
enc := json.NewEncoder(w.GetWriter())
if args.Pretty {
enc.SetIndent("", " ")
}
for k, v := range relays {
ro := RelayOutput{NebulaIp: v.vpnIp}
co.Relays = append(co.Relays, &ro)
relayHI, err := ifce.hostMap.QueryVpnIp(v.vpnIp)
if err != nil {
ro.RelayForIps = append(ro.RelayForIps, RelayFor{Error: err})
continue
}
for _, vpnIp := range relayHI.relayState.CopyRelayForIps() {
rf := RelayFor{Error: nil}
r, ok := relayHI.relayState.GetRelayForByIp(vpnIp)
if ok {
t := ""
switch r.Type {
case ForwardingType:
t = "forwarding"
case TerminalType:
t = "terminal"
default:
t = "unkown"
}
s := ""
switch r.State {
case Requested:
s = "requested"
case Established:
s = "established"
default:
s = "unknown"
}
rf.LocalIndex = r.LocalIndex
rf.RemoteIndex = r.RemoteIndex
rf.PeerIp = r.PeerIp
rf.Type = t
rf.State = s
if rf.LocalIndex != k {
rf.Error = fmt.Errorf("hostmap LocalIndex '%v' does not match RelayState LocalIndex", k)
}
}
relayedHI, err := ifce.hostMap.QueryVpnIp(vpnIp)
if err == nil {
rf.RelayedThrough = append(rf.RelayedThrough, relayedHI.relayState.CopyRelayIps()...)
}
ro.RelayForIps = append(ro.RelayForIps, rf)
}
}
err := enc.Encode(co)
if err != nil {
return err
}
return nil
}
func sshPrintTunnel(ifce *Interface, fs interface{}, a []string, w sshd.StringWriter) error {
args, ok := fs.(*sshPrintTunnelFlags)
if !ok {
@@ -764,16 +876,8 @@ func sshPrintTunnel(ifce *Interface, fs interface{}, a []string, w sshd.StringWr
return enc.Encode(copyHostInfo(hostInfo, ifce.hostMap.preferredRanges))
}
func sshReload(fs interface{}, a []string, w sshd.StringWriter) error {
p, err := os.FindProcess(os.Getpid())
if err != nil {
return w.WriteLine(err.Error())
//TODO
}
err = p.Signal(syscall.SIGHUP)
if err != nil {
return w.WriteLine(err.Error())
//TODO
}
return w.WriteLine("HUP sent")
func sshReload(c *config.C, w sshd.StringWriter) error {
err := w.WriteLine("Reloading config")
c.ReloadConfig()
return err
}
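With this change the ssh reload command calls ReloadConfig on the config object directly instead of signalling its own process. A minimal sketch of driving that same path, assuming the config package's NewC, Load, and RegisterReloadCallback as used elsewhere in this repo (the config path below is illustrative only):

package main

import (
	"fmt"

	"github.com/sirupsen/logrus"
	"github.com/slackhq/nebula/config"
)

func main() {
	l := logrus.New()
	c := config.NewC(l)
	if err := c.Load("/etc/nebula/config.yml"); err != nil { // illustrative path
		l.WithError(err).Fatal("failed to load config")
	}

	// Components register callbacks that fire whenever the config is reloaded.
	c.RegisterReloadCallback(func(c *config.C) {
		fmt.Println("config reloaded")
	})

	// This is what the ssh "reload" command now triggers directly,
	// instead of sending SIGHUP to the process.
	c.ReloadConfig()
}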

View File

@@ -1,4 +1,4 @@
package util
package test
import (
"fmt"

View File

@@ -1,4 +1,4 @@
package util
package test
import (
"io/ioutil"
@@ -7,7 +7,7 @@ import (
"github.com/sirupsen/logrus"
)
func NewTestLogger() *logrus.Logger {
func NewLogger() *logrus.Logger {
l := logrus.New()
v := os.Getenv("TEST_LOGS")

test/tun.go Normal file
View File

@@ -0,0 +1,43 @@
package test
import (
"errors"
"io"
"net"
"github.com/slackhq/nebula/iputil"
)
type NoopTun struct{}
func (NoopTun) RouteFor(iputil.VpnIp) iputil.VpnIp {
return 0
}
func (NoopTun) Activate() error {
return nil
}
func (NoopTun) Cidr() *net.IPNet {
return nil
}
func (NoopTun) Name() string {
return "noop"
}
func (NoopTun) Read([]byte) (int, error) {
return 0, nil
}
func (NoopTun) Write([]byte) (int, error) {
return 0, nil
}
func (NoopTun) NewMultiQueueReader() (io.ReadWriteCloser, error) {
return nil, errors.New("unsupported")
}
func (NoopTun) Close() error {
return nil
}
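NoopTun is a convenient stand-in device for tests. A compile-time check that it keeps satisfying the method set the rest of the code expects can be sketched as below; the device interface is declared locally for illustration only, since the real interface NoopTun is meant to satisfy lives in the main nebula package:

package main

import (
	"io"
	"net"

	"github.com/slackhq/nebula/iputil"
	"github.com/slackhq/nebula/test"
)

// device mirrors NoopTun's method set; declared here only for the assertion.
type device interface {
	RouteFor(iputil.VpnIp) iputil.VpnIp
	Activate() error
	Cidr() *net.IPNet
	Name() string
	io.ReadWriteCloser
	NewMultiQueueReader() (io.ReadWriteCloser, error)
}

// Fails to compile if NoopTun ever drops one of these methods.
var _ device = test.NoopTun{}

func main() {}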

View File

@@ -1,86 +0,0 @@
//go:build !e2e_testing
// +build !e2e_testing
package nebula
import (
"fmt"
"io"
"net"
"os"
"github.com/sirupsen/logrus"
"golang.org/x/sys/unix"
)
type Tun struct {
io.ReadWriteCloser
fd int
Device string
Cidr *net.IPNet
MaxMTU int
DefaultMTU int
TXQueueLen int
Routes []route
UnsafeRoutes []route
l *logrus.Logger
}
func newTunFromFd(l *logrus.Logger, deviceFd int, cidr *net.IPNet, defaultMTU int, routes []route, unsafeRoutes []route, txQueueLen int) (ifce *Tun, err error) {
file := os.NewFile(uintptr(deviceFd), "/dev/net/tun")
ifce = &Tun{
ReadWriteCloser: file,
fd: int(file.Fd()),
Device: "android",
Cidr: cidr,
DefaultMTU: defaultMTU,
TXQueueLen: txQueueLen,
Routes: routes,
UnsafeRoutes: unsafeRoutes,
l: l,
}
return
}
func newTun(l *logrus.Logger, deviceName string, cidr *net.IPNet, defaultMTU int, routes []route, unsafeRoutes []route, txQueueLen int, multiqueue bool) (ifce *Tun, err error) {
return nil, fmt.Errorf("newTun not supported in Android")
}
func (c *Tun) WriteRaw(b []byte) error {
var nn int
for {
max := len(b)
n, err := unix.Write(c.fd, b[nn:max])
if n > 0 {
nn += n
}
if nn == len(b) {
return err
}
if err != nil {
return err
}
if n == 0 {
return io.ErrUnexpectedEOF
}
}
}
func (c Tun) Activate() error {
return nil
}
func (c *Tun) CidrNet() *net.IPNet {
return c.Cidr
}
func (c *Tun) DeviceName() string {
return c.Device
}
func (t *Tun) NewMultiQueueReader() (io.ReadWriteCloser, error) {
return nil, fmt.Errorf("TODO: multiqueue not implemented for android")
}

View File

@@ -1,107 +0,0 @@
//go:build !e2e_testing
// +build !e2e_testing
package nebula
import (
"fmt"
"io"
"net"
"os"
"os/exec"
"regexp"
"strconv"
"strings"
"github.com/sirupsen/logrus"
)
var deviceNameRE = regexp.MustCompile(`^tun[0-9]+$`)
type Tun struct {
Device string
Cidr *net.IPNet
MTU int
UnsafeRoutes []route
l *logrus.Logger
io.ReadWriteCloser
}
func (c *Tun) Close() error {
if c.ReadWriteCloser != nil {
return c.ReadWriteCloser.Close()
}
return nil
}
func newTunFromFd(l *logrus.Logger, deviceFd int, cidr *net.IPNet, defaultMTU int, routes []route, unsafeRoutes []route, txQueueLen int) (ifce *Tun, err error) {
return nil, fmt.Errorf("newTunFromFd not supported in FreeBSD")
}
func newTun(l *logrus.Logger, deviceName string, cidr *net.IPNet, defaultMTU int, routes []route, unsafeRoutes []route, txQueueLen int, multiqueue bool) (ifce *Tun, err error) {
if len(routes) > 0 {
return nil, fmt.Errorf("Route MTU not supported in FreeBSD")
}
if strings.HasPrefix(deviceName, "/dev/") {
deviceName = strings.TrimPrefix(deviceName, "/dev/")
}
if !deviceNameRE.MatchString(deviceName) {
return nil, fmt.Errorf("tun.dev must match `tun[0-9]+`")
}
return &Tun{
Device: deviceName,
Cidr: cidr,
MTU: defaultMTU,
UnsafeRoutes: unsafeRoutes,
l: l,
}, nil
}
func (c *Tun) Activate() error {
var err error
c.ReadWriteCloser, err = os.OpenFile("/dev/"+c.Device, os.O_RDWR, 0)
if err != nil {
return fmt.Errorf("Activate failed: %v", err)
}
// TODO use syscalls instead of exec.Command
c.l.Debug("command: ifconfig", c.Device, c.Cidr.String(), c.Cidr.IP.String())
if err = exec.Command("/sbin/ifconfig", c.Device, c.Cidr.String(), c.Cidr.IP.String()).Run(); err != nil {
return fmt.Errorf("failed to run 'ifconfig': %s", err)
}
c.l.Debug("command: route", "-n", "add", "-net", c.Cidr.String(), "-interface", c.Device)
if err = exec.Command("/sbin/route", "-n", "add", "-net", c.Cidr.String(), "-interface", c.Device).Run(); err != nil {
return fmt.Errorf("failed to run 'route add': %s", err)
}
c.l.Debug("command: ifconfig", c.Device, "mtu", strconv.Itoa(c.MTU))
if err = exec.Command("/sbin/ifconfig", c.Device, "mtu", strconv.Itoa(c.MTU)).Run(); err != nil {
return fmt.Errorf("failed to run 'ifconfig': %s", err)
}
// Unsafe path routes
for _, r := range c.UnsafeRoutes {
c.l.Debug("command: route", "-n", "add", "-net", r.route.String(), "-interface", c.Device)
if err = exec.Command("/sbin/route", "-n", "add", "-net", r.route.String(), "-interface", c.Device).Run(); err != nil {
return fmt.Errorf("failed to run 'route add' for unsafe_route %s: %s", r.route.String(), err)
}
}
return nil
}
func (c *Tun) CidrNet() *net.IPNet {
return c.Cidr
}
func (c *Tun) DeviceName() string {
return c.Device
}
func (c *Tun) WriteRaw(b []byte) error {
_, err := c.Write(b)
return err
}
func (t *Tun) NewMultiQueueReader() (io.ReadWriteCloser, error) {
return nil, fmt.Errorf("TODO: multiqueue not implemented for freebsd")
}

View File

@@ -1,120 +0,0 @@
//go:build ios && !e2e_testing
// +build ios,!e2e_testing
package nebula
import (
"errors"
"fmt"
"io"
"net"
"os"
"sync"
"syscall"
"github.com/sirupsen/logrus"
)
type Tun struct {
io.ReadWriteCloser
Device string
Cidr *net.IPNet
}
func newTun(l *logrus.Logger, deviceName string, cidr *net.IPNet, defaultMTU int, routes []route, unsafeRoutes []route, txQueueLen int, multiqueue bool) (ifce *Tun, err error) {
return nil, fmt.Errorf("newTun not supported in iOS")
}
func newTunFromFd(l *logrus.Logger, deviceFd int, cidr *net.IPNet, defaultMTU int, routes []route, unsafeRoutes []route, txQueueLen int) (ifce *Tun, err error) {
if len(routes) > 0 {
return nil, fmt.Errorf("route MTU not supported in Darwin")
}
file := os.NewFile(uintptr(deviceFd), "/dev/tun")
ifce = &Tun{
Cidr: cidr,
Device: "iOS",
ReadWriteCloser: &tunReadCloser{f: file},
}
return
}
func (c *Tun) Activate() error {
return nil
}
func (c *Tun) WriteRaw(b []byte) error {
_, err := c.Write(b)
return err
}
// The following is hoisted up from water, we do this so we can inject our own fd on iOS
type tunReadCloser struct {
f io.ReadWriteCloser
rMu sync.Mutex
rBuf []byte
wMu sync.Mutex
wBuf []byte
}
func (t *tunReadCloser) Read(to []byte) (int, error) {
t.rMu.Lock()
defer t.rMu.Unlock()
if cap(t.rBuf) < len(to)+4 {
t.rBuf = make([]byte, len(to)+4)
}
t.rBuf = t.rBuf[:len(to)+4]
n, err := t.f.Read(t.rBuf)
copy(to, t.rBuf[4:])
return n - 4, err
}
func (t *tunReadCloser) Write(from []byte) (int, error) {
if len(from) == 0 {
return 0, syscall.EIO
}
t.wMu.Lock()
defer t.wMu.Unlock()
if cap(t.wBuf) < len(from)+4 {
t.wBuf = make([]byte, len(from)+4)
}
t.wBuf = t.wBuf[:len(from)+4]
// Determine the IP Family for the NULL L2 Header
ipVer := from[0] >> 4
if ipVer == 4 {
t.wBuf[3] = syscall.AF_INET
} else if ipVer == 6 {
t.wBuf[3] = syscall.AF_INET6
} else {
return 0, errors.New("unable to determine IP version from packet")
}
copy(t.wBuf[4:], from)
n, err := t.f.Write(t.wBuf)
return n - 4, err
}
func (t *tunReadCloser) Close() error {
return t.f.Close()
}
func (c *Tun) CidrNet() *net.IPNet {
return c.Cidr
}
func (c *Tun) DeviceName() string {
return c.Device
}
func (t *Tun) NewMultiQueueReader() (io.ReadWriteCloser, error) {
return nil, fmt.Errorf("TODO: multiqueue not implemented for ios")
}
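The 4-byte header handled in tunReadCloser is the utun null/loopback framing: byte 3 carries the address family, chosen from the packet's IP version nibble. A small standalone sketch of that version check, which is what the Write path uses to pick AF_INET versus AF_INET6:

package main

import "fmt"

// ipVersion returns 4 or 6 based on the first nibble of an IP packet,
// mirroring how tunReadCloser.Write selects the family for byte 3 of the
// 4-byte utun header.
func ipVersion(packet []byte) (int, error) {
	if len(packet) == 0 {
		return 0, fmt.Errorf("empty packet")
	}
	switch packet[0] >> 4 {
	case 4:
		return 4, nil
	case 6:
		return 6, nil
	default:
		return 0, fmt.Errorf("unable to determine IP version from packet")
	}
}

func main() {
	v4 := []byte{0x45, 0x00} // IPv4 header starts with version 4, IHL 5
	v, _ := ipVersion(v4)
	fmt.Println(v) // 4
}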

View File

@@ -1,105 +0,0 @@
//go:build e2e_testing
// +build e2e_testing
package nebula
import (
"fmt"
"io"
"net"
"github.com/sirupsen/logrus"
)
type Tun struct {
Device string
Cidr *net.IPNet
MTU int
UnsafeRoutes []route
l *logrus.Logger
rxPackets chan []byte // Packets to receive into nebula
txPackets chan []byte // Packets transmitted outside by nebula
}
func newTun(l *logrus.Logger, deviceName string, cidr *net.IPNet, defaultMTU int, _ []route, unsafeRoutes []route, _ int, _ bool) (ifce *Tun, err error) {
return &Tun{
Device: deviceName,
Cidr: cidr,
MTU: defaultMTU,
UnsafeRoutes: unsafeRoutes,
l: l,
rxPackets: make(chan []byte, 1),
txPackets: make(chan []byte, 1),
}, nil
}
func newTunFromFd(_ *logrus.Logger, _ int, _ *net.IPNet, _ int, _ []route, _ []route, _ int) (ifce *Tun, err error) {
return nil, fmt.Errorf("newTunFromFd not supported")
}
// Send will place a byte array onto the receive queue for nebula to consume
// These are unencrypted ip layer frames destined for another nebula node.
// The encrypted packets should exit the udp side; capture them with udpConn.Get
func (c *Tun) Send(packet []byte) {
c.l.WithField("dataLen", len(packet)).Info("Tun receiving injected packet")
c.rxPackets <- packet
}
// Get will pull an unencrypted ip layer frame from the transmit queue
// that nebula meant to deliver to some application on the local system.
// These packets were ingested from the udp side; you can send them with udpConn.Send
func (c *Tun) Get(block bool) []byte {
if block {
return <-c.txPackets
}
select {
case p := <-c.txPackets:
return p
default:
return nil
}
}
//********************************************************************************************************************//
// Below this is boilerplate implementation to make nebula actually work
//********************************************************************************************************************//
func (c *Tun) Activate() error {
return nil
}
func (c *Tun) CidrNet() *net.IPNet {
return c.Cidr
}
func (c *Tun) DeviceName() string {
return c.Device
}
func (c *Tun) Write(b []byte) (n int, err error) {
return len(b), c.WriteRaw(b)
}
func (c *Tun) Close() error {
close(c.rxPackets)
return nil
}
func (c *Tun) WriteRaw(b []byte) error {
packet := make([]byte, len(b))
copy(packet, b)
c.txPackets <- packet
return nil
}
func (c *Tun) Read(b []byte) (int, error) {
p := <-c.rxPackets
copy(b, p)
return len(p), nil
}
func (c *Tun) NewMultiQueueReader() (io.ReadWriteCloser, error) {
return nil, fmt.Errorf("TODO: multiqueue not implemented")
}
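The e2e test device is essentially a pair of channels: Send injects plaintext into nebula's receive path and Get captures what nebula would have delivered to the OS. A minimal standalone sketch of the same inject/capture pattern, including the blocking and non-blocking Get behavior:

package main

import "fmt"

type fakeTun struct {
	rx chan []byte // packets injected toward nebula
	tx chan []byte // packets nebula would have written to the OS
}

func newFakeTun() *fakeTun {
	return &fakeTun{rx: make(chan []byte, 1), tx: make(chan []byte, 1)}
}

func (t *fakeTun) Send(p []byte) { t.rx <- p }

// Get pulls a captured packet; with block=false it returns nil when none is queued.
func (t *fakeTun) Get(block bool) []byte {
	if block {
		return <-t.tx
	}
	select {
	case p := <-t.tx:
		return p
	default:
		return nil
	}
}

func main() {
	t := newFakeTun()
	t.tx <- []byte("captured") // pretend nebula wrote a packet to the device
	fmt.Println(string(t.Get(false)))
	fmt.Println(t.Get(false) == nil) // true, the queue is empty now
}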

View File

@@ -1,107 +0,0 @@
package nebula
import (
"fmt"
"io"
"net"
"os/exec"
"strconv"
"github.com/songgao/water"
)
type WindowsWaterTun struct {
Device string
Cidr *net.IPNet
MTU int
UnsafeRoutes []route
*water.Interface
}
func newWindowsWaterTun(deviceName string, cidr *net.IPNet, defaultMTU int, unsafeRoutes []route, txQueueLen int) (ifce *WindowsWaterTun, err error) {
// NOTE: You cannot set the deviceName under Windows, so you must check tun.Device after calling .Activate()
return &WindowsWaterTun{
Cidr: cidr,
MTU: defaultMTU,
UnsafeRoutes: unsafeRoutes,
}, nil
}
func (c *WindowsWaterTun) Activate() error {
var err error
c.Interface, err = water.New(water.Config{
DeviceType: water.TUN,
PlatformSpecificParams: water.PlatformSpecificParams{
ComponentID: "tap0901",
Network: c.Cidr.String(),
},
})
if err != nil {
return fmt.Errorf("Activate failed: %v", err)
}
c.Device = c.Interface.Name()
// TODO use syscalls instead of exec.Command
err = exec.Command(
`C:\Windows\System32\netsh.exe`, "interface", "ipv4", "set", "address",
fmt.Sprintf("name=%s", c.Device),
"source=static",
fmt.Sprintf("addr=%s", c.Cidr.IP),
fmt.Sprintf("mask=%s", net.IP(c.Cidr.Mask)),
"gateway=none",
).Run()
if err != nil {
return fmt.Errorf("failed to run 'netsh' to set address: %s", err)
}
err = exec.Command(
`C:\Windows\System32\netsh.exe`, "interface", "ipv4", "set", "interface",
c.Device,
fmt.Sprintf("mtu=%d", c.MTU),
).Run()
if err != nil {
return fmt.Errorf("failed to run 'netsh' to set MTU: %s", err)
}
iface, err := net.InterfaceByName(c.Device)
if err != nil {
return fmt.Errorf("failed to find interface named %s: %v", c.Device, err)
}
for _, r := range c.UnsafeRoutes {
err = exec.Command(
"C:\\Windows\\System32\\route.exe", "add", r.route.String(), r.via.String(), "IF", strconv.Itoa(iface.Index), "METRIC", strconv.Itoa(r.metric),
).Run()
if err != nil {
return fmt.Errorf("failed to add the unsafe_route %s: %v", r.route.String(), err)
}
}
return nil
}
func (c *WindowsWaterTun) CidrNet() *net.IPNet {
return c.Cidr
}
func (c *WindowsWaterTun) DeviceName() string {
return c.Device
}
func (c *WindowsWaterTun) WriteRaw(b []byte) error {
_, err := c.Write(b)
return err
}
func (c *WindowsWaterTun) Close() error {
if c.Interface == nil {
return nil
}
return c.Interface.Close()
}
func (t *WindowsWaterTun) NewMultiQueueReader() (io.ReadWriteCloser, error) {
return nil, fmt.Errorf("TODO: multiqueue not implemented for windows")
}

View File

@@ -1,74 +0,0 @@
//go:build !e2e_testing
// +build !e2e_testing
package nebula
import (
"fmt"
"io"
"net"
"os"
"path/filepath"
"runtime"
"syscall"
"github.com/sirupsen/logrus"
)
type Tun struct {
Inside
}
func newTunFromFd(l *logrus.Logger, deviceFd int, cidr *net.IPNet, defaultMTU int, routes []route, unsafeRoutes []route, txQueueLen int) (ifce *Tun, err error) {
return nil, fmt.Errorf("newTunFromFd not supported in Windows")
}
func newTun(l *logrus.Logger, deviceName string, cidr *net.IPNet, defaultMTU int, routes []route, unsafeRoutes []route, txQueueLen int, multiqueue bool) (ifce *Tun, err error) {
if len(routes) > 0 {
return nil, fmt.Errorf("route MTU not supported in Windows")
}
useWintun := true
if err = checkWinTunExists(); err != nil {
l.WithError(err).Warn("Check Wintun driver failed, fallback to wintap driver")
useWintun = false
}
var inside Inside
if useWintun {
inside, err = newWinTun(deviceName, cidr, defaultMTU, unsafeRoutes, txQueueLen)
if err != nil {
return nil, fmt.Errorf("Create Wintun interface failed, %w", err)
}
} else {
inside, err = newWindowsWaterTun(deviceName, cidr, defaultMTU, unsafeRoutes, txQueueLen)
if err != nil {
return nil, fmt.Errorf("Create wintap driver failed, %w", err)
}
}
return &Tun{
Inside: inside,
}, nil
}
func checkWinTunExists() error {
myPath, err := os.Executable()
if err != nil {
return err
}
arch := runtime.GOARCH
switch arch {
case "386":
//NOTE: wintun bundles 386 as x86
arch = "x86"
}
_, err = syscall.LoadDLL(filepath.Join(filepath.Dir(myPath), "dist", "windows", "wintun", "bin", arch, "wintun.dll"))
return err
}
func (t *Tun) NewMultiQueueReader() (io.ReadWriteCloser, error) {
return nil, fmt.Errorf("TODO: multiqueue not implemented for windows")
}

View File

@@ -1,153 +0,0 @@
package nebula
import (
"crypto"
"fmt"
"io"
"net"
"unsafe"
"github.com/slackhq/nebula/wintun"
"golang.org/x/sys/windows"
"golang.zx2c4.com/wireguard/windows/tunnel/winipcfg"
)
const tunGUIDLabel = "Fixed Nebula Windows GUID v1"
type WinTun struct {
Device string
Cidr *net.IPNet
MTU int
UnsafeRoutes []route
tun *wintun.NativeTun
}
func generateGUIDByDeviceName(name string) (*windows.GUID, error) {
// GUID is 128 bit
hash := crypto.MD5.New()
_, err := hash.Write([]byte(tunGUIDLabel))
if err != nil {
return nil, err
}
_, err = hash.Write([]byte(name))
if err != nil {
return nil, err
}
sum := hash.Sum(nil)
return (*windows.GUID)(unsafe.Pointer(&sum[0])), nil
}
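generateGUIDByDeviceName derives a stable GUID by hashing a fixed label plus the device name, so the same name always maps onto the same Windows interface. A platform-independent sketch of that derivation, printing the 16 hash bytes that the real code reinterprets as a windows.GUID:

package main

import (
	"crypto/md5"
	"fmt"
)

const tunGUIDLabel = "Fixed Nebula Windows GUID v1"

// guidBytes returns the 16 MD5 bytes that newWinTun reinterprets as a windows.GUID.
func guidBytes(name string) [16]byte {
	h := md5.New()
	h.Write([]byte(tunGUIDLabel))
	h.Write([]byte(name))
	var out [16]byte
	copy(out[:], h.Sum(nil))
	return out
}

func main() {
	// The same device name always yields the same GUID, so re-creating the
	// adapter reuses the existing Windows interface instead of making a new one.
	fmt.Printf("%x\n", guidBytes("nebula1"))
	fmt.Println(guidBytes("nebula1") == guidBytes("nebula1")) // true
}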
func newWinTun(deviceName string, cidr *net.IPNet, defaultMTU int, unsafeRoutes []route, txQueueLen int) (ifce *WinTun, err error) {
guid, err := generateGUIDByDeviceName(deviceName)
if err != nil {
return nil, fmt.Errorf("Generate GUID failed: %w", err)
}
tunDevice, err := wintun.CreateTUNWithRequestedGUID(deviceName, guid, defaultMTU)
if err != nil {
return nil, fmt.Errorf("Create TUN device failed: %w", err)
}
ifce = &WinTun{
Device: deviceName,
Cidr: cidr,
MTU: defaultMTU,
UnsafeRoutes: unsafeRoutes,
tun: tunDevice.(*wintun.NativeTun),
}
return ifce, nil
}
func (c *WinTun) Activate() error {
luid := winipcfg.LUID(c.tun.LUID())
if err := luid.SetIPAddresses([]net.IPNet{*c.Cidr}); err != nil {
return fmt.Errorf("failed to set address: %w", err)
}
foundDefault4 := false
routes := make([]*winipcfg.RouteData, 0, len(c.UnsafeRoutes)+1)
for _, r := range c.UnsafeRoutes {
if !foundDefault4 {
if cidr, bits := r.route.Mask.Size(); cidr == 0 && bits != 0 {
foundDefault4 = true
}
}
// Add our unsafe route
routes = append(routes, &winipcfg.RouteData{
Destination: *r.route,
NextHop: *r.via,
Metric: uint32(r.metric),
})
}
if err := luid.AddRoutes(routes); err != nil {
return fmt.Errorf("failed to add routes: %w", err)
}
ipif, err := luid.IPInterface(windows.AF_INET)
if err != nil {
return fmt.Errorf("failed to get ip interface: %w", err)
}
ipif.NLMTU = uint32(c.MTU)
if foundDefault4 {
ipif.UseAutomaticMetric = false
ipif.Metric = 0
}
if err := ipif.Set(); err != nil {
return fmt.Errorf("failed to set ip interface: %w", err)
}
return nil
}
func (c *WinTun) CidrNet() *net.IPNet {
return c.Cidr
}
func (c *WinTun) DeviceName() string {
return c.Device
}
func (c *WinTun) Read(b []byte) (int, error) {
return c.tun.Read(b, 0)
}
func (c *WinTun) Write(b []byte) (int, error) {
return c.tun.Write(b, 0)
}
func (c *WinTun) WriteRaw(b []byte) error {
_, err := c.Write(b)
return err
}
func (c *WinTun) NewMultiQueueReader() (io.ReadWriteCloser, error) {
return nil, fmt.Errorf("TODO: multiqueue not implemented for windows")
}
func (c *WinTun) Close() error {
// It seems that the Windows networking stack doesn't like it when we destroy interfaces that have active routes,
// so to be certain, just remove everything before destroying.
luid := winipcfg.LUID(c.tun.LUID())
_ = luid.FlushRoutes(windows.AF_INET)
_ = luid.FlushIPAddresses(windows.AF_INET)
/* We don't support IPV6 yet
_ = luid.FlushRoutes(windows.AF_INET6)
_ = luid.FlushIPAddresses(windows.AF_INET6)
*/
_ = luid.FlushDNS(windows.AF_INET)
return c.tun.Close()
}

View File

@@ -9,6 +9,7 @@ const MTU = 9001
type EncReader func(
addr *Addr,
via interface{},
out []byte,
packet []byte,
header *header.H,

View File

@@ -5,10 +5,18 @@ import (
"github.com/slackhq/nebula/iputil"
)
//TODO: The items in this file belong in their own packages but doing that in a single PR is a nightmare
type EncWriter interface {
SendVia(via interface{},
relay interface{},
ad,
nb,
out []byte,
nocopy bool,
)
SendMessageToVpnIp(t header.MessageType, st header.MessageSubType, vpnIp iputil.VpnIp, p, nb, out []byte)
Handshake(vpnIp iputil.VpnIp)
}
//TODO: The items in this file belong in their own packages but doing that in a single PR is a nightmare
type LightHouseHandlerFunc func(rAddr *Addr, vpnIp iputil.VpnIp, p []byte, w EncWriter)

View File

@@ -86,6 +86,6 @@ func (u *Conn) ListenOut(r EncReader, lhf LightHouseHandlerFunc, cache *firewall
udpAddr.IP = rua.IP
udpAddr.Port = uint16(rua.Port)
r(udpAddr, plaintext[:0], buffer[:n], h, fwPacket, lhf, nb, q, cache.Get(u.l))
r(udpAddr, nil, plaintext[:0], buffer[:n], h, fwPacket, lhf, nb, q, cache.Get(u.l))
}
}

View File

@@ -145,7 +145,7 @@ func (u *Conn) ListenOut(r EncReader, lhf LightHouseHandlerFunc, cache *firewall
for i := 0; i < n; i++ {
udpAddr.IP = names[i][8:24]
udpAddr.Port = binary.BigEndian.Uint16(names[i][2:4])
r(udpAddr, plaintext[:0], buffers[i][:msgs[i].Len], h, fwPacket, lhf, nb, q, cache.Get(u.l))
r(udpAddr, nil, plaintext[:0], buffers[i][:msgs[i].Len], h, fwPacket, lhf, nb, q, cache.Get(u.l))
}
}
}

Some files were not shown because too many files have changed in this diff.