* Support NIST curve P256
This change adds support for NIST curve P256. When you use `nebula-cert ca`
or `nebula-cert keygen`, you can specify `-curve P256` to enable it. The
curve to use is based on the curve defined in your CA certificate.
Internally, we use ECDSA P256 to sign certificates, and ECDH P256 to do
Noise handshakes. P256 is not natively supported by the Noise Protocol framework,
so we define `DHP256` in the `noiseutil` package to implement support for it.
You cannot have a mixed network of Curve25519 and P256 certificates, since
Noise will only attempt to parse handshake messages using the curve defined
in the host's certificate.
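For illustration only, here is a rough sketch of what a P-256 DH function could look like, assuming the `noise.DHFunc` interface from github.com/flynn/noise and Go's crypto/ecdh; the names and structure are illustrative, not necessarily what the `noiseutil` package actually contains.

```go
// Illustrative sketch only: a Noise DH function backed by NIST P-256.
package noiseutil

import (
	"crypto/ecdh"
	"crypto/rand"
	"io"

	"github.com/flynn/noise"
)

type dhP256 struct{}

// DHP256 implements noise.DHFunc using crypto/ecdh's P-256 curve.
var DHP256 noise.DHFunc = dhP256{}

func (dhP256) GenerateKeypair(rng io.Reader) (noise.DHKey, error) {
	if rng == nil {
		rng = rand.Reader
	}
	priv, err := ecdh.P256().GenerateKey(rng)
	if err != nil {
		return noise.DHKey{}, err
	}
	return noise.DHKey{
		Private: priv.Bytes(),
		Public:  priv.PublicKey().Bytes(),
	}, nil
}

func (dhP256) DH(privkey, pubkey []byte) ([]byte, error) {
	priv, err := ecdh.P256().NewPrivateKey(privkey)
	if err != nil {
		return nil, err
	}
	pub, err := ecdh.P256().NewPublicKey(pubkey)
	if err != nil {
		return nil, err
	}
	return priv.ECDH(pub)
}

// DHLen is the size of an uncompressed P-256 public key (0x04 || X || Y).
func (dhP256) DHLen() int { return 65 }

func (dhP256) DHName() string { return "P256" }
```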
* verify the curves match in VerifyPrivateKey
This would have failed anyway once we tried to actually use the bytes in
the private key, but it's better to detect the issue up front with a
clearer error message.
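A minimal sketch of the idea, using illustrative types rather than nebula's actual cert package:

```go
// Illustrative only: fail fast on a curve mismatch with a clear error,
// rather than letting the raw key bytes fail later in a confusing way.
package cert

import "fmt"

// Curve is a stand-in for the certificate/key curve identifier.
type Curve int

const (
	Curve25519 Curve = iota
	P256
)

// verifyCurveMatch is a hypothetical helper showing the up-front check.
func verifyCurveMatch(certCurve, keyCurve Curve) error {
	if certCurve != keyCurve {
		return fmt.Errorf("curve of the private key (%d) does not match the certificate curve (%d)", keyCurve, certCurve)
	}
	return nil
}
```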
* add cert.Curve argument to Sign method
* fix mismerge
* use crypto/ecdh
This is the preferred method for doing ECDH functions now, and also has
a boringcrypto specific codepath.
* remove other ecdh uses of crypto/elliptic
use crypto/ecdh instead
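As a generic, self-contained example (not nebula code) of the crypto/ecdh pattern that replaces ScalarMult-style ECDH over crypto/elliptic:

```go
package main

import (
	"bytes"
	"crypto/ecdh"
	"crypto/rand"
	"fmt"
)

func main() {
	curve := ecdh.P256()

	// Each side generates a keypair.
	alice, err := curve.GenerateKey(rand.Reader)
	if err != nil {
		panic(err)
	}
	bob, err := curve.GenerateKey(rand.Reader)
	if err != nil {
		panic(err)
	}

	// Each side combines its private key with the peer's public key.
	secretA, err := alice.ECDH(bob.PublicKey())
	if err != nil {
		panic(err)
	}
	secretB, err := bob.ECDH(alice.PublicKey())
	if err != nil {
		panic(err)
	}

	fmt.Println("shared secrets match:", bytes.Equal(secretA, secretB))
}
```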
* build with go1.20
This has been out for a bit and is up to go1.20.4. We have been using
go1.20 for the Slack builds and have seen no issues.
* need the quotes
* use go install
* use the new typed atomic helpers in sync/atomic
These new helpers make the code a lot cleaner. I confirmed that the
simple helpers like `atomic.Int64` don't add any extra overhead as they
get inlined by the compiler. `atomic.Pointer` adds an extra method call
as it no longer gets inlined, but we aren't using these on the hot path
so it is probably okay.
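For illustration, a hypothetical struct using the typed helpers (the field names are made up, not nebula's):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// connCounters is a hypothetical example; atomic.Int64 replaces the old
// atomic.AddInt64/LoadInt64 calls on a plain int64, and atomic.Pointer[T]
// replaces unsafe.Pointer juggling with atomic.StorePointer/LoadPointer.
type connCounters struct {
	packetsSent atomic.Int64
	remoteAddr  atomic.Pointer[string]
}

func main() {
	var c connCounters

	c.packetsSent.Add(1)

	addr := "192.0.2.1:4242"
	c.remoteAddr.Store(&addr)

	fmt.Println(c.packetsSent.Load(), *c.remoteAddr.Load())
}
```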
The goal of this work is to send packets between two hosts using more than one
5-tuple. When running on networks like AWS, where the underlying network driver
and overlay fabric make routing, load balancing, and failover decisions based
on the flow hash, this enables more than one flow between pairs of hosts.
Multiport spreads outgoing UDP packets across multiple UDP send ports,
which allows nebula to work around any issues on the underlay network.
Some example issues this could work around:
- UDP rate limits on a per-flow basis.
- Partial underlay network failure in which some flows work and some don't.
Agreement is done during the handshake to decide whether multiport mode will
be used for a given tunnel (one side must have tx_enabled set, and the other
side must have rx_enabled set).
NOTE: you cannot use multiport on a host if you are relying on UDP hole
punching to get through a NAT or firewall.
NOTE: Linux only (uses raw sockets to send). Also currently only works
with IPv4 underlay network remotes.
This is implemented by opening a raw socket and sending packets with
a source port that is based on a hash of the overlay source/destination
ports. For ICMP and Nebula metadata packets, we use a random source port.
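A rough sketch of that idea; the hash and port layout below are illustrative (using hash/fnv), not the exact scheme nebula uses:

```go
package main

import (
	"encoding/binary"
	"fmt"
	"hash/fnv"
)

// multiportSourcePort picks a UDP source port in the range
// [basePort, basePort+txPorts) based on the overlay source/destination
// ports, so each overlay flow maps to a stable underlay 5-tuple.
func multiportSourcePort(basePort, txPorts, overlaySrc, overlayDst uint16) uint16 {
	var buf [4]byte
	binary.BigEndian.PutUint16(buf[0:2], overlaySrc)
	binary.BigEndian.PutUint16(buf[2:4], overlayDst)

	h := fnv.New32a()
	h.Write(buf[:])
	return basePort + uint16(h.Sum32()%uint32(txPorts))
}

func main() {
	// With listen.port = 4242 and tx_ports = 100, an overlay flow from
	// port 34567 to port 443 always maps to the same underlay source port.
	fmt.Println(multiportSourcePort(4242, 100, 34567, 443))
}
```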
Example configuration:
multiport:
  # This host supports sending via multiple UDP ports.
  tx_enabled: false
  # This host supports receiving packets sent from multiple UDP ports.
  rx_enabled: false
  # How many UDP ports to use when sending. The lowest source port will be
  # listen.port and go up to (but not including) listen.port + tx_ports.
  tx_ports: 100
  # NOTE: All of your hosts must be running a version of Nebula that supports
  # multiport if you want to enable this feature. Older versions of Nebula
  # will be confused by these multiport handshakes.
  #
  # If handshakes are not getting a response, attempt to transmit handshakes
  # using random UDP source ports (to get around partial underlay network
  # failures).
  tx_handshake: false
  # How many unresponded handshakes we should send before we attempt to
  # send multiport handshakes.
  tx_handshake_delay: 2
This makes it easier to run the docker container smoke test that
GitHub Actions runs. There is also `make smoke-docker-race`, which runs the
smoke test with `-race` enabled.
Test that basic inbound / outbound firewall rules work during the smoke
test. This change sets an inbound firewall rule on host3, and a new
host4 with outbound firewall rules. It also tests that conntrack allows
packets once the connection has been established.
This PR does two things:
- Only run the tests when relevant files change.
- Cache the Go modules directory between runs so everything doesn't have to be redownloaded every time (go.sum is the cache key). Pretty much straight from the examples: https://github.com/actions/cache/blob/master/examples.md#go---modules
This change adds a new GitHub Action, a 3-node smoke test. It starts
three docker containers (one lighthouse and two standard nodes) and
tests that they can all ping each other. This should hopefully detect
any basic runtime failures in PRs.