WHAT
commit 86f60da6ec

.gitignore (vendored, new file, 10 lines)
@@ -0,0 +1,10 @@
.obsidian
venv
__pycache__
*.log
.idea/*
*/.idea
*.idea
/.idea
.idea/
2024-bsc-sebastian-lenzlinger.iml
LICENSE (new file, 28 lines)
@@ -0,0 +1,28 @@
BSD 3-Clause License

Copyright (c) 2024, Sebastian Lenzlinger

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

1. Redistributions of source code must retain the above copyright notice, this
   list of conditions and the following disclaimer.

2. Redistributions in binary form must reproduce the above copyright notice,
   this list of conditions and the following disclaimer in the documentation
   and/or other materials provided with the distribution.

3. Neither the name of the copyright holder nor the names of its
   contributors may be used to endorse or promote products derived from
   this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
README.md (new file, 34 lines)
@@ -0,0 +1,34 @@
# Your Project Name

Hello! This is the README file that accompanies the GitLab repository for your Bachelor or Master thesis. You'll need to update this README as you work on your thesis to reflect relevant information about it.

[[_TOC_]]

## Organization of the repository

- **code** folder: holds source code
- **data** folder: holds (input) data required for the project. If your input data files are larger than 100 MB, create a sample data file smaller than 100 MB and commit the sample instead of the full data file. Include a note explaining how the full data can be retrieved.
- **results** folder: holds result files generated as part of the project
- **thesis** folder: contains the LaTeX sources + PDF of the final thesis. You can use the [basilea-latex template](https://github.com/ivangiangreco/basilea-latex) as a starting point.
- **presentation** folder: contains the sources of the presentation (e.g., LaTeX or PPT)
- **literature** folder: contains any research papers that the student needs to read or finds interesting
- **notes** folder: holds minutes of meetings

## Useful resources

- [Efficient Reading of Papers in Science and Technology](https://www.cs.columbia.edu/~hgs/netbib/efficientReading.pdf)
- [Heilmeier's catechism](https://en.wikipedia.org/wiki/George_H._Heilmeier#Heilmeier%27s_Catechism)

## Description

Let people know what your project can do specifically. Provide context and add a link to any reference visitors might be unfamiliar with. A list of features or a Background subsection can also be added here. If there are alternatives to your project, this is a good place to list differentiating factors.

## Visuals

Depending on what you are making, it can be a good idea to include screenshots or even a video (you'll frequently see GIFs rather than actual videos). Tools like ttygif can help, but check out Asciinema for a more sophisticated method.

## Installation

Within a particular ecosystem, there may be a common way of installing things, such as using Yarn, NuGet, or Homebrew. However, consider the possibility that whoever is reading your README is a novice and would like more guidance. Listing specific steps helps remove ambiguity and gets people using your project as quickly as possible. If it only runs in a specific context, like a particular programming-language version or operating system, or has dependencies that have to be installed manually, also add a Requirements subsection.

## Usage

Use examples liberally, and show the expected output if you can. It's helpful to inline the smallest usage example you can demonstrate, while providing links to more sophisticated examples if they are too long to reasonably include in the README.

## License

To allow further development and use during public events of the implemented system through the University of Basel, the system is expected to be well documented and provided to the university under a license that allows such reuse, e.g., the [BSD 3-clause license](https://opensource.org/license/bsd-3-clause/). The student agrees that all code produced during the project may be released open source in the context of the PET group's projects.
archive/capture_metadata_utils.py (new file, 38 lines)
@@ -0,0 +1,38 @@
import json
from pathlib import Path

from iottb.definitions import ReturnCodes


def set_device_ip_address(ip_addr: str, file_path: Path):
    assert ip_addr is not None
    assert file_path.is_file()
    with file_path.open('r') as f:
        data = json.load(f)
    current_ip = data['device_ip_address']
    if current_ip is not None:
        print(f'Device IP address is set to {current_ip}')
        response = input(f'Do you want to change the recorded IP address to {ip_addr}? [Y/N] ')
        if response.upper() == 'N':
            print('Aborting change to device IP address')
            return ReturnCodes.ABORTED
    data['device_ip_address'] = ip_addr  # record the new address before writing back
    with file_path.open('w') as f:
        json.dump(data, f)
    return ReturnCodes.SUCCESS


def set_device_mac_address(mac_addr: str, file_path: Path):
    assert mac_addr is not None
    assert file_path.is_file()
    with file_path.open('r') as f:
        data = json.load(f)
    current_mac = data['device_mac_address']
    if current_mac is not None:
        print(f'Device MAC address is set to {current_mac}')
        response = input(f'Do you want to change the recorded MAC address to {mac_addr}? [Y/N] ')
        if response.upper() == 'N':
            print('Aborting change to device MAC address')
            return ReturnCodes.ABORTED
    data['device_mac_address'] = mac_addr  # record the new address before writing back
    with file_path.open('w') as f:
        json.dump(data, f)
    return ReturnCodes.SUCCESS
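Both helpers above follow the same read–modify–write JSON pattern (and must assign the new value before dumping). A minimal stdlib-only sketch of that pattern, with a hypothetical helper name and key, not part of the module's API:

```python
import json
import tempfile
from pathlib import Path


def update_json_key(file_path: Path, key: str, value) -> None:
    """Read a JSON file, set one key, and write the document back."""
    with file_path.open('r') as f:
        data = json.load(f)
    data[key] = value  # assign before writing back
    with file_path.open('w') as f:
        json.dump(data, f)


# Usage: round-trip a temporary metadata file
with tempfile.TemporaryDirectory() as tmp:
    meta = Path(tmp) / 'meta.json'
    meta.write_text(json.dumps({'device_ip_address': None}))
    update_json_key(meta, 'device_ip_address', '192.168.1.50')
    print(json.loads(meta.read_text())['device_ip_address'])  # → 192.168.1.50
```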
archive/device_metadata_utils.py (new file, 51 lines)
@@ -0,0 +1,51 @@
import json
from datetime import datetime
from pathlib import Path

from iottb.definitions import ReturnCodes


def update_firmware_version(version: str, file_path: Path):
    assert file_path.is_file()
    with file_path.open('r') as file:
        metadata = json.load(file)
    metadata['device_firmware_version'] = version
    metadata['date_updated'] = datetime.now().strftime('%d-%m-%YT%H:%M:%S').lower()
    with file_path.open('w') as file:
        json.dump(metadata, file)
    return ReturnCodes.SUCCESS


def add_capture_file_reference(capture_file_reference: str, file_path: Path):
    assert file_path.is_file()
    with file_path.open('r') as file:
        metadata = json.load(file)
    metadata['capture_files'] = capture_file_reference
    metadata['date_updated'] = datetime.now().strftime('%d-%m-%YT%H:%M:%S').lower()
    with file_path.open('w') as file:
        json.dump(metadata, file)
    return ReturnCodes.SUCCESS


def update_device_serial_number(device_id: str, file_path: Path):
    assert file_path.is_file()
    with file_path.open('r') as file:
        metadata = json.load(file)
    metadata['device_id'] = device_id
    metadata['date_updated'] = datetime.now().strftime('%d-%m-%YT%H:%M:%S').lower()
    with file_path.open('w') as file:
        json.dump(metadata, file)
    return ReturnCodes.SUCCESS


def update_device_type(device_type: str, file_path: Path):
    assert file_path.is_file()
    with file_path.open('r') as file:
        metadata = json.load(file)
    metadata['device_type'] = device_type
    metadata['date_updated'] = datetime.now().strftime('%d-%m-%YT%H:%M:%S').lower()
    with file_path.open('w') as file:
        json.dump(metadata, file)
    return ReturnCodes.SUCCESS
archive/functions_dump.py (new file, 32 lines)
@@ -0,0 +1,32 @@
def setup_sniff_tcpdump_parser(parser_sniff):
    # Arguments which will be passed through to tcpdump
    parser_sniff_tcpdump = parser_sniff.add_argument_group('tcpdump arguments')
    # TODO: tcpdump_parser.add_argument('-c', '--count', re)
    parser_sniff_tcpdump.add_argument('-a', '--ip-address', help='IP address of the device to sniff',
                                      dest='device_ip')
    parser_sniff_tcpdump.add_argument('-i', '--interface', help='Interface of the capture device.',
                                      dest='capture_interface', default='')
    parser_sniff_tcpdump.add_argument('-I', '--monitor-mode', help='Put interface into monitor mode.',
                                      action='store_true')
    parser_sniff_tcpdump.add_argument('-n', help='Deactivate name resolution. Option is set by default.',
                                      action='store_true')
    parser_sniff_tcpdump.add_argument('-#', '--number',
                                      help='Print packet number at beginning of line. Set by default.',
                                      action='store_true')
    parser_sniff_tcpdump.add_argument('-e', help='Print link layer headers. Option is set by default.',
                                      action='store_true')
    parser_sniff_tcpdump.add_argument('-t', action='count', default=0,
                                      help='Please see the tcpdump manual for details. Unused by default.')


def setup_sniff_parser(subparsers):
    # Create the parser for the 'sniff' command
    parser_sniff = subparsers.add_parser('sniff', help='Start tcpdump capture.')
    setup_sniff_tcpdump_parser(parser_sniff)
    setup_pcap_filter_parser(parser_sniff)
    cap_size_group = parser_sniff.add_mutually_exclusive_group(required=True)
    cap_size_group.add_argument('-c', '--count', type=int, help='Number of packets to capture.', default=0)
    cap_size_group.add_argument('--mins', type=int, help='Time in minutes to capture.', default=60)


def setup_pcap_filter_parser(parser_sniff):
    # argparse has no add_argument_parser(); an argument group is the intended construct here
    parser_pcap_filter = parser_sniff.add_argument_group('pcap-filter expression')
    # TODO: add pcap-filter arguments
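The capture-size options above rely on argparse's mutually exclusive groups: the user may give `-c/--count` or `--mins`, but not both. A hypothetical standalone sketch of that behavior, independent of the iottb parsers:

```python
import argparse

# Minimal parser mirroring the mutually exclusive capture-size group above
parser = argparse.ArgumentParser(prog='sniff')
cap_size_group = parser.add_mutually_exclusive_group(required=True)
cap_size_group.add_argument('-c', '--count', type=int, default=0,
                            help='Number of packets to capture.')
cap_size_group.add_argument('--mins', type=int, default=60,
                            help='Time in minutes to capture.')

args = parser.parse_args(['--count', '100'])
print(args.count, args.mins)  # → 100 60
```

Passing both options at once (e.g. `['--count', '1', '--mins', '5']`) makes argparse exit with a usage error, and omitting both fails because the group is `required=True`.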
archive/metadata.py (new file, 19 lines)
@@ -0,0 +1,19 @@
import json
import uuid
from datetime import datetime


class Metadata:
    def __init__(self, name):
        self.device = name
        self.timestamp = datetime.now().timestamp()
        self.capture_id = uuid.uuid4().hex
        self.capture_mode = ...  # TODO: e.g. promiscuous/monitor/other
        self.host_ip = ...
        self.host_mac = ...
        self.protocol = ...


def create_metadata(filename, unique_id, device_details):
    # `datetime` is already the class here, so no `datetime.datetime` prefix
    date_string = datetime.now().strftime('%Y-%m-%d-%H-%M-%S')
    meta_filename = f'meta_{date_string}_{unique_id}.json'
archive/metadata_utils.py (new file, 69 lines)
@@ -0,0 +1,69 @@
import json
from pathlib import Path

from pydantic import BaseModel

from iottb.models.device_metadata_model import DeviceMetadata
from iottb.definitions import DEVICE_METADATA_FILE


def write_device_metadata_to_file(metadata: DeviceMetadata, device_path: Path):
    '''Write the device metadata to a JSON file in the specified directory.'''
    meta_file_path = device_path / 'meta.json'
    meta_file_path.write_text(metadata.json(indent=2))


def confirm_device_metadata(metadata: DeviceMetadata) -> bool:
    '''Display device metadata for user confirmation.'''
    print(metadata.json(indent=2))
    return input('Confirm device metadata? (y/n): ').strip().lower() == 'y'


def get_device_metadata_from_user() -> DeviceMetadata:
    '''Prompt the user to enter device details and return a populated DeviceMetadata object.'''
    device_name = input('Device name: ')
    device_short_name = device_name.lower().replace(' ', '-')
    return DeviceMetadata(device_name=device_name, device_short_name=device_short_name)


def initialize_device_root_dir(device_name: str) -> Path:
    '''Create and return the path for the device directory.'''
    device_path = Path.cwd() / device_name
    device_path.mkdir(exist_ok=True)
    return device_path


def write_metadata(metadata: BaseModel, device_name: str):
    '''Write device metadata to a JSON file.'''
    meta_path = Path.cwd() / device_name / DEVICE_METADATA_FILE
    meta_path.parent.mkdir(parents=True, exist_ok=True)
    with meta_path.open('w') as f:
        json.dump(metadata.dict(), f, indent=4)


def get_device_metadata(file_path: Path) -> DeviceMetadata | None:
    '''Fetch device metadata from a JSON file.'''
    if dev_metadata_exists(file_path):
        with file_path.open('r') as f:
            device_metadata_json = json.load(f)
        try:
            return DeviceMetadata.from_json(device_metadata_json)
        except ValueError as e:
            print(f'Validation error for device metadata: {e}')
    else:
        # TODO: decide what to do (e.g. search for the file)
        print(f'No device metadata at {file_path}')
    return None


def search_device_metadata(filename=DEVICE_METADATA_FILE, start_dir=Path.cwd(), max_parents=3) -> Path:
    pass  # TODO


def dev_metadata_exists(file_path: Path) -> bool:
    return file_path.is_file()
code/iottb-project/.gitignore (vendored, new file, 35 lines)
@@ -0,0 +1,35 @@
__pycache__
.venv
iottb.egg-info
.idea
*.log
logs/
*.pyc
.obsidian

# Covers JetBrains IDEs: IntelliJ, RubyMine, PhpStorm, AppCode, PyCharm, CLion, Android Studio, WebStorm and Rider
# Reference: https://intellij-support.jetbrains.com/hc/en-us/articles/206544839

# User-specific stuff
.idea/**/workspace.xml
.idea/**/tasks.xml
.idea/**/usage.statistics.xml
.idea/**/dictionaries
.idea/**/shelf

# AWS User-specific
.idea/**/aws.xml

# Generated files
.idea/**/contentModel.xml

# Sensitive or high-churn files
.idea/**/dataSources/
.idea/**/dataSources.ids
.idea/**/dataSources.local.xml
.idea/**/sqlDataSources.xml
.idea/**/dynamic.xml
.idea/**/uiDesigner.xml
.idea/**/dbnavigator.xml

.private/
code/iottb-project/README.md (new file, 9 lines)
@@ -0,0 +1,9 @@
# Iottb

## Basic Invocation

## Configuration

### Env Vars

- IOTTB_CONF_HOME

  Setting this variable controls where the basic iottb application configuration is looked for.
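A configuration lookup honoring such an environment variable can be sketched as follows. `IOTTB_CONF_HOME` comes from the README above; the `~/.config/iottb` fallback is an illustrative assumption, not the tool's documented default:

```python
import os
from pathlib import Path


def resolve_conf_home() -> Path:
    """Return the directory searched for the iottb configuration.

    Honors IOTTB_CONF_HOME if set; otherwise falls back to a
    conventional per-user config directory (assumed here).
    """
    override = os.environ.get('IOTTB_CONF_HOME')
    if override:
        return Path(override)
    return Path.home() / '.config' / 'iottb'
```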
code/iottb-project/iottb/__init__.py (new file, 11 lines)
@@ -0,0 +1,11 @@
import logging

import click

from iottb import definitions
from iottb.utils.user_interaction import tb_echo

click.echo = tb_echo  # This is very hacky
logging.basicConfig(level=definitions.LOGLEVEL)
log_dir = definitions.LOGDIR
# Ensure the logs dir exists before new handlers are registered in main.py
if not log_dir.is_dir():
    log_dir.mkdir()
code/iottb-project/iottb/commands/__init__.py (new file, 0 lines)
code/iottb-project/iottb/commands/add_device.py (new file, 89 lines)
@@ -0,0 +1,89 @@
import json
import logging
import re
from pathlib import Path

import click

from iottb import definitions
from iottb.models.device_metadata import DeviceMetadata
from iottb.models.iottb_config import IottbConfig
from iottb.definitions import CFG_FILE_PATH, TB_ECHO_STYLES

logger = logging.getLogger(__name__)


def add_device_guided(ctx, cn, db):
    click.echo('TODO: Implement')
    logger.info('Adding device interactively')
    # logger.debug(f'Parameters: {params}. value: {value}')


@click.command('add-device', help='Add a device to a database')
@click.option('--dev', '--device-name', type=str, required=True,
              help='The name of the device to be added. If this string contains spaces or other '
                   'special characters, normalization is performed to derive a canonical name.')
@click.option('--db', '--database', type=click.Path(exists=True, file_okay=False, dir_okay=True, path_type=Path),
              envvar='IOTTB_DB', show_envvar=True,
              help='Database in which to add this device. If not specified, the default from config is used.')
@click.option('--guided', is_flag=True, default=False, show_default=True, envvar='IOTTB_GUIDED_ADD', show_envvar=True,
              help='Add device interactively')
def add_device(dev, db, guided):
    """Add a new device to a database.

    The device name must be supplied unless in an interactive setup. The database is taken
    from the config by default.
    """
    logger.info('add-device invoked')

    # Step 1: Load config
    # Dependency: config file must exist
    config = IottbConfig(Path(CFG_FILE_PATH))
    logger.debug(f'Config loaded: {config}')

    # Step 2: Load database
    # Dependency: database folder must exist
    if db:
        database = db
        path = config.db_path_dict[str(database)]  # look up the registered location for this database
        logger.debug(f'Resolved (path, db) {path}, {database}')
    else:
        path = config.default_db_location
        database = config.default_database
        logger.debug(f'Default (path, db) {path}, {database}')
    click.secho(f'Using database {database}')
    full_db_path = Path(path) / database
    if not full_db_path.is_dir():
        logger.warning(f'No database at {database}')
        click.echo('Could not find a database.')
        click.echo('You need to initialize the testbed before you add devices!')
        click.echo('To initialize the testbed in the default location run "iottb init-db"')
        click.echo('Exiting...')
        exit()

    # Step 3: Check whether the device already exists in the database
    # Dependency: DeviceMetadata object
    device_metadata = DeviceMetadata(device_name=dev)
    device_dir = full_db_path / device_metadata.canonical_name

    # Check if the device is already registered
    if device_dir.exists():
        logger.warning(f'Device directory {device_dir} already exists.')
        click.echo(f'Device {dev} already exists in the database.')
        click.echo('Exiting...')
        exit()
    try:
        device_dir.mkdir()
    except OSError as e:
        logger.error(f'Error trying to create device: {e}')
        click.echo('Exiting...')
        exit()

    # Step 4: Save metadata into device_dir
    metadata_path = device_dir / definitions.DEVICE_METADATA_FILE_NAME
    with metadata_path.open('w') as metadata_file:
        json.dump(device_metadata.__dict__, metadata_file, indent=4)
    click.echo(f'Successfully added device {dev} to database')
    logger.debug(f'Added device {dev} to database {database}. Full path of metadata: {metadata_path}')
    logger.info(f'Metadata for {dev}: {device_metadata.print_attributes()}')
code/iottb-project/iottb/commands/developer.py (new file, 123 lines)
@@ -0,0 +1,123 @@
import logging
from pathlib import Path

import click

from iottb.definitions import DB_NAME, CFG_FILE_PATH
from iottb.models.iottb_config import IottbConfig

logger = logging.getLogger(__name__)


@click.group('util')
def tb():
    pass


@click.command()
@click.option('--file', default=DB_NAME)
@click.option('--table', type=str, default='DefaultDatabase')
@click.option('--key')
@click.option('--value')
@click.pass_context
def set_key_in_table_to(ctx, file, table, key, value):
    """Edit config or metadata files. TODO: Implement"""
    click.echo('set_key_in_table_to invoked')
    logger.warning('Unimplemented subcommand invoked.')


@click.command()
@click.confirmation_option(prompt='Are you certain that you want to delete the cfg file?')
def rm_cfg():
    """Remove the cfg file from the filesystem.

    This is mostly a utility during development. Once non-standard database locations are
    implemented, deleting this file would leave iottb unable to find them anymore.
    """
    Path(CFG_FILE_PATH).unlink()
    click.echo(f'Iottb configuration removed at {CFG_FILE_PATH}')


@click.command()
@click.confirmation_option(prompt='Are you certain that you want to delete the databases?')
def rm_dbs():  # no parameter: click passes none for this command
    """Remove ALL(!) databases from the filesystem if they are empty.

    Development utility, currently unfit for use.
    """
    config = IottbConfig()
    paths = config.get_know_database_paths()
    logger.debug(f'Known db paths: {str(paths)}')
    for dbs in paths:
        try:
            Path(dbs).rmdir()
            click.echo(f'{dbs} deleted')
        except Exception as e:
            logger.debug(f'Failed unlinking db {dbs} with error {e}')
    logger.info('All databases deleted')


@click.command('show-cfg', help='Show the current configuration context')
@click.option('--cfg-file', type=click.Path(), default=CFG_FILE_PATH, help='Path to the config file')
@click.option('-pp', is_flag=True, default=False, help='Pretty Print')
@click.pass_context
def show_cfg(ctx, cfg_file, pp):
    logger.debug(f'Pretty print option set to {pp}')
    if pp:
        try:
            config = IottbConfig(Path(cfg_file))
            click.echo('Configuration Context:')
            click.echo(f'Default Database: {config.default_database}')
            click.echo(f'Default Database Path: {config.default_db_location}')
            click.echo('Database Locations:')
            for db_name, db_path in config.db_path_dict.items():
                click.echo(f'  - {db_name}: {db_path}')
        except Exception as e:
            logger.error(f'Error loading configuration: {e}')
            click.echo(f'Failed to load configuration from {cfg_file}')
    else:
        path = Path(cfg_file)

        if path.is_file():
            with path.open('r') as file:
                content = file.read()
            click.echo(content)
        else:
            click.echo(f'Configuration file not found at {cfg_file}')


@click.command('show-all', help='Show everything: configuration, databases, and device metadata')
@click.pass_context
def show_everything(ctx):
    """Show everything that can be recursively found based on config, except file contents."""
    config = ctx.obj['CONFIG']
    click.echo('Configuration Context:')
    click.echo(f'Default Database: {config.default_database}')
    click.echo(f'Default Database Path: {config.default_db_location}')
    click.echo('Database Locations:')
    for db_name, db_path in config.db_path_dict.items():
        full_db_path = Path(db_path) / db_name
        click.echo(f'  - {db_name}: {full_db_path}')
        if full_db_path.is_dir():
            click.echo(f'Contents of {db_name} at {full_db_path}:')
            for item in full_db_path.iterdir():
                if item.is_file():
                    click.echo(f'  - {item.name}')
                    try:
                        with item.open('r', encoding='utf-8') as file:
                            content = file.read()
                        click.echo(f'    Content:\n{content}')
                    except UnicodeDecodeError:
                        click.echo('    Content is not readable as text')
                elif item.is_dir():
                    click.echo(f'  - {item.name}/')
                    for subitem in item.iterdir():
                        if subitem.is_file():
                            click.echo(f'    - {subitem.name}')
                        elif subitem.is_dir():
                            click.echo(f'    - {subitem.name}/')
        else:
            click.echo(f'{full_db_path} is not a directory')


warnstyle = {'fg': 'red', 'bold': True}
click.secho('Developer command used', **warnstyle)
code/iottb-project/iottb/commands/initialize_testbed.py (new file, 100 lines)
@@ -0,0 +1,100 @@
import logging
import sys
from logging.handlers import RotatingFileHandler
from pathlib import Path

import click

from iottb.models.iottb_config import IottbConfig
from iottb.definitions import DB_NAME

logger = logging.getLogger(__name__)


@click.command()
@click.option('-d', '--dest', type=click.Path(), help='Location to put the (new) iottb database')
@click.option('-n', '--name', default=DB_NAME, type=str, help='Name of the new database.')
@click.option('--update-default/--no-update-default', default=True,
              help='Whether the new db should be set as the new default')
@click.pass_context
def init_db(ctx, dest, name, update_default):
    logger.info('init-db invoked')
    config = ctx.obj['CONFIG']
    logger.debug(str(config))
    # Use the default path from config if dest is not provided
    known_dbs = config.get_known_databases()
    logger.debug(f'Known databases: {known_dbs}')
    if name in known_dbs:
        dest = config.get_database_location(name)
        if Path(dest).joinpath(name).is_dir():
            click.echo(f'A database {name} already exists.')
            logger.debug(f'DB {name} exists in {dest}')
            click.echo('Exiting...')
            exit()
        logger.debug(f'DB name {name} registered but does not exist.')
    if not dest:
        logger.info('No dest set, choosing default destination.')
        dest = Path(config.default_db_location).parent

    db_path = Path(dest).joinpath(name)
    logger.debug(f'Full path for db: {str(db_path)}')
    # Create the directory if it doesn't exist
    db_path.mkdir(parents=True, exist_ok=True)
    logger.info(f'mkdir {db_path} successful')
    click.echo(f'Created {db_path}')

    # Update configuration
    config.set_database_location(name, str(dest))
    if update_default:
        config.set_default_database(name, str(dest))
    config.save_config()
    logger.info(f'Updated configuration with database {name} at {db_path}')


@click.command()
@click.option('-d', '--dest', type=click.Path(), help='Location to put the (new) iottb database')
@click.option('-n', '--name', default=DB_NAME, type=str, help='Name of the new database.')
@click.option('--update-default/--no-update-default', default=True,
              help='Whether the new db should be set as the new default')
@click.pass_context
def init_db_inactive(ctx, dest, name, update_default):
    logger.info('init-db invoked')
    config = ctx.obj['CONFIG']
    logger.debug(str(config))

    # Retrieve known databases
    known_dbs = config.get_known_databases()

    # Determine destination path
    if name in known_dbs:
        dest = Path(config.get_database_location(name))
        if dest.joinpath(name).is_dir():
            click.echo(f'A database {name} already exists.')
            logger.debug(f'DB {name} exists in {dest}')
            click.echo('Exiting...')
            exit()
        logger.debug(f'DB name {name} registered but does not exist.')
    elif not dest:
        logger.info('No destination set, using default path from config.')
        dest = Path(config.default_db_location).parent

    # Ensure the destination path is absolute (dest may still be a string from click.Path)
    dest = Path(dest).resolve()

    # Combine destination path with database name
    db_path = dest / name
    logger.debug(f'Full path for database: {str(db_path)}')

    # Create the directory if it doesn't exist
    try:
        db_path.mkdir(parents=True, exist_ok=True)
        logger.info(f'Directory {db_path} created successfully.')
        click.echo(f'Created {db_path}')
    except Exception as e:
        logger.error(f'Failed to create directory {db_path}: {e}')
        click.echo(f'Failed to create directory {db_path}: {e}', err=True)
        exit(1)

    # Update configuration
    config.set_database_location(name, str(db_path))
    if update_default:
        config.set_default_database(name, str(db_path))
    config.save_config()
    logger.info(f'Updated configuration with database {name} at {db_path}')
    click.echo(f'Updated configuration with database {name} at {db_path}')
code/iottb-project/iottb/commands/sniff.py (new file, 133 lines; hunk truncated below)
@@ -0,0 +1,133 @@
|
||||
import click
import subprocess
import json
from pathlib import Path
import logging
import re
from datetime import datetime
from iottb.definitions import APP_NAME, CFG_FILE_PATH
from iottb.models.iottb_config import IottbConfig
from iottb.utils.string_processing import make_canonical_name

# Setup logger
logger = logging.getLogger('iottb.sniff')


def is_ip_address(address):
    ip_pattern = re.compile(r"^(?:[0-9]{1,3}\.){3}[0-9]{1,3}$")
    return ip_pattern.match(address) is not None


def is_mac_address(address):
    mac_pattern = re.compile(r"^([0-9A-Fa-f]{2}:){5}[0-9A-Fa-f]{2}$")
    return mac_pattern.match(address) is not None


def load_config(cfg_file):
    """Load configuration from the given file path."""
    with open(cfg_file, 'r') as config_file:
        return json.load(config_file)


def validate_sniff(ctx, param, value):
    logger.info('Validating sniff...')
    if ctx.params.get('unsafe') and not value:
        return None
    if not ctx.params.get('unsafe') and not value:
        raise click.BadParameter('Address is required unless --unsafe is set.')
    return value


@click.command('sniff', help='Sniff packets with tcpdump')
@click.argument('device')
@click.option('-i', '--interface', callback=validate_sniff, help='Network interface to capture on',
              envvar='IOTTB_CAPTURE_INTERFACE')
@click.option('-a', '--address', callback=validate_sniff, help='IP or MAC address to filter packets by',
              envvar='IOTTB_CAPTURE_ADDRESS')
@click.option('--db', '--database', type=click.Path(exists=True, file_okay=False), envvar='IOTTB_DB',
              help='Database of device. Only needed if not current default.')
@click.option('--unsafe', is_flag=True, default=False, envvar='IOTTB_UNSAFE', is_eager=True,
              help='Disable checks for otherwise required options')
@click.option('--guided', is_flag=True, default=False)
def sniff(device, interface, address, db, unsafe, guided):
    """Sniff packets from a device."""
    logger.info('sniff command invoked')

    # Step 1: Load config
    config = IottbConfig(Path(CFG_FILE_PATH))
    logger.debug(f'Config loaded: {config}')

    # Step 2: Determine relevant database
    database = db if db else config.default_database
    path = config.get_database_location(database)
    full_db_path = Path(path) / database
    logger.debug(f'Full db path is {full_db_path}')

    # Check that the database directory exists
    if not full_db_path.is_dir():
        logger.error('No default database path found in configuration')
        click.echo('No default database path found in configuration')
        return

    # make_canonical_name returns a (canonical_name, aliases) tuple
    canonical_name, aliases = make_canonical_name(device)
    click.echo(f'Using canonical device name {canonical_name}')

    # Verify device directory
    device_path = full_db_path / canonical_name
    if not device_path.exists():
        logger.error(f'Device path {device_path} does not exist')
        click.echo(f'Device path {device_path} does not exist')
        return

    # Generate filter
    if not unsafe:
        if is_ip_address(address):
            packet_filter = f"host {address}"
        elif is_mac_address(address):
            packet_filter = f"ether host {address}"
        else:
            logger.error('Invalid address format')
            click.echo('Invalid address format')
            return
    else:
        packet_filter = None

    # Prepare capture directory and file (one timestamp for both)
    timestamp = datetime.now().strftime('%Y%m%d_%H%M%S')
    capture_dir = device_path / 'captures' / timestamp
    capture_dir.mkdir(parents=True, exist_ok=True)
    pcap_file = capture_dir / f"{canonical_name}_{timestamp}.pcap"

    # Build tcpdump command
    cmd = ['sudo', 'tcpdump', '-i', interface, '-w', str(pcap_file)]
    if packet_filter:
        cmd.append(packet_filter)
    logger.info(f'Executing: {" ".join(cmd)}')

    # Execute tcpdump
    try:
        subprocess.run(cmd, check=True)
        click.echo(f"Capture complete. Saved to {pcap_file}")
    except subprocess.CalledProcessError as e:
        logger.error(f'Failed to capture packets: {e}')
        click.echo(f'Failed to capture packets: {e}')


@click.command('sniff2', help='Sniff packets with tcpdump')
@click.argument('device')
@click.option('-i', '--interface', required=False, help='Network interface to capture on', envvar='IOTTB_CAPTURE_INTERFACE')
@click.option('-a', '--address', required=True, help='IP or MAC address to filter packets by', envvar='IOTTB_CAPTURE_ADDRESS')
@click.option('--db', '--database', type=click.Path(exists=True, file_okay=False), envvar='IOTTB_DB',
              help='Database of device. Only needed if not current default.')
@click.option('--unsafe', is_flag=True, default=False, envvar='IOTTB_UNSAFE',
              help='Disable checks for otherwise required options')
@click.option('--guided', is_flag=True)
def sniff2(device, interface, address, db, unsafe, guided):
    """Sniff packets from a device."""
    logger.info('sniff command invoked')
    # Step 1: Load Config
    # Dependency: Config file must exist
    config = IottbConfig(Path(CFG_FILE_PATH))
    logger.debug(f'Config loaded: {config}')
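The address checks and the filter branch in `sniff()` above can be exercised on their own. This sketch copies the two regexes from the file and wraps the branch in a hypothetical `build_packet_filter` helper (the command inlines that logic):

```python
import re

def is_ip_address(address):
    # Dotted-quad check as in sniff.py (no per-octet range validation)
    return re.match(r"^(?:[0-9]{1,3}\.){3}[0-9]{1,3}$", address) is not None

def is_mac_address(address):
    # Six colon-separated hex pairs
    return re.match(r"^([0-9A-Fa-f]{2}:){5}[0-9A-Fa-f]{2}$", address) is not None

def build_packet_filter(address):
    # Mirrors the branch in sniff(): IP -> 'host', MAC -> 'ether host'
    if is_ip_address(address):
        return f"host {address}"
    if is_mac_address(address):
        return f"ether host {address}"
    return None

print(build_packet_filter("192.168.1.42"))
print(build_packet_filter("aa:bb:cc:dd:ee:ff"))
```

The returned string is appended as the final tcpdump argument, so an invalid address yields `None` and the command aborts before ever invoking tcpdump.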
44  code/iottb-project/iottb/definitions.py  Normal file
@@ -0,0 +1,44 @@
import logging
from pathlib import Path

import click

APP_NAME = 'iottb'
DB_NAME = 'iottb.db'
CFG_FILE_PATH = str(Path(click.get_app_dir(APP_NAME)).joinpath('iottb.cfg'))
CONSOLE_LOG_FORMATS = {
    0: '%(levelname)s - %(message)s',
    1: '%(levelname)s - %(module)s - %(message)s',
    2: '%(levelname)s - %(module)s - %(funcName)s - %(lineno)d - %(message)s'
}

LOGFILE_LOG_FORMAT = {
    0: '%(levelname)s - %(asctime)s - %(module)s - %(message)s',
    1: '%(levelname)s - %(asctime)s - %(module)s - %(funcName)s - %(message)s',
    2: '%(levelname)s - %(asctime)s - %(module)s - %(funcName)s - %(lineno)d - %(message)s'
}
MAX_VERBOSITY = len(CONSOLE_LOG_FORMATS) - 1
assert len(LOGFILE_LOG_FORMAT) == len(CONSOLE_LOG_FORMATS), 'Log formats must be same size'

LOGLEVEL = logging.DEBUG
LOGDIR = Path.cwd() / 'logs'

# Characters to just replace
REPLACEMENT_SET_CANONICAL_DEVICE_NAMES = {' ', '_', ',', '!', '@', '#', '$', '%', '^', '&', '*', '(', ')', '+', '=',
                                          '{', '}', '[', ']', '|',
                                          '\\', ':', ';', '"', "'", '<', '>', '?', '/', '`', '~'}
# Characters to possibly error on
ERROR_SET_CANONICAL_DEVICE_NAMES = {',', '!', '@', '#', '$', '%', '^', '&', '*', '(', ')', '+', '=', '{', '}', '[', ']',
                                    '|',
                                    '\\', ':', ';', '"', "'", '<', '>', '?', '/', '`', '~'}

DEVICE_METADATA_FILE_NAME = 'device_metadata.json'

TB_ECHO_STYLES = {
    'w': {'fg': 'yellow', 'bold': True},
    'i': {'fg': 'blue', 'italic': True},
    's': {'fg': 'green', 'bold': True},
    'e': {'fg': 'red', 'bold': True},
    'header': {'fg': 'bright_cyan', 'bold': True, 'italic': True}
}
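The verbosity-indexed format tables above are consumed by `logger_config.setup_logging()`. A small sketch of the lookup-and-clamp step (dict copied from the file; the `console_format_for` helper is hypothetical, the real module clamps inline):

```python
import logging

# Copied from definitions.py
CONSOLE_LOG_FORMATS = {
    0: '%(levelname)s - %(message)s',
    1: '%(levelname)s - %(module)s - %(message)s',
    2: '%(levelname)s - %(module)s - %(funcName)s - %(lineno)d - %(message)s'
}
MAX_VERBOSITY = len(CONSOLE_LOG_FORMATS) - 1

def console_format_for(verbosity):
    # Clamp out-of-range -v counts, mirroring setup_logging()
    return CONSOLE_LOG_FORMATS[min(verbosity, MAX_VERBOSITY)]

# A clamped verbosity always yields a valid Formatter
formatter = logging.Formatter(console_format_for(99))
```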
62  code/iottb-project/iottb/main.py  Normal file
@@ -0,0 +1,62 @@
import click
from pathlib import Path
import logging

from iottb.commands.sniff import sniff
from iottb.commands.developer import set_key_in_table_to, rm_cfg, rm_dbs, show_cfg, show_everything
##################################################
# Import package modules
#################################################
from iottb.utils.logger_config import setup_logging
from iottb import definitions
from iottb.models.iottb_config import IottbConfig
from iottb.commands.initialize_testbed import init_db
from iottb.commands.add_device import add_device

############################################################################
# Module shortcuts for global definitions
###########################################################################
APP_NAME = definitions.APP_NAME
DB_NAME = definitions.DB_NAME
CFG_FILE_PATH = definitions.CFG_FILE_PATH
# These are (possibly) redundant when defined in definitions.py;
# keeping them here until refactored and tested
MAX_VERBOSITY = definitions.MAX_VERBOSITY

# Logger stuff
loglevel = definitions.LOGLEVEL
logger = logging.getLogger(__name__)


@click.group()
@click.option('-v', '--verbosity', count=True, type=click.IntRange(0, 3), default=0,
              help='Set verbosity')
@click.option('-d', '--debug', is_flag=True, default=False,
              help='Enable debug mode')
@click.option('--cfg-file', type=click.Path(),
              default=Path(click.get_app_dir(APP_NAME)).joinpath('iottb.cfg'),
              envvar='IOTTB_CONF_HOME', help='Path to iottb config file')
@click.pass_context
def cli(ctx, verbosity, debug, cfg_file):
    setup_logging(verbosity, debug)  # Set up logging based on the loaded configuration and other options
    ctx.ensure_object(dict)  # Make sure context is ready for use
    logger.info("Starting execution.")
    ctx.obj['CONFIG'] = IottbConfig(cfg_file)  # Load configuration directly
    ctx.meta['FULL_PATH_CONFIG_FILE'] = str(cfg_file)


##################################################################################
# Add all subcommands to group here
#################################################################################
# noinspection PyTypeChecker
cli.add_command(init_db)
cli.add_command(rm_cfg)
cli.add_command(set_key_in_table_to)
cli.add_command(rm_dbs)
# noinspection PyTypeChecker
cli.add_command(add_device)
cli.add_command(show_cfg)
cli.add_command(sniff)
cli.add_command(show_everything)

if __name__ == '__main__':
    cli(auto_envvar_prefix='IOTTB', show_default=True)
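The group/subcommand wiring in `main.py` above follows the standard click pattern. A minimal, self-contained sketch of it, with a hypothetical `hello` command standing in for `sniff` and the other registered subcommands:

```python
import click
from click.testing import CliRunner

@click.group()
@click.option('-v', '--verbosity', count=True, default=0)
@click.pass_context
def cli(ctx, verbosity):
    # Same context setup shape as main.cli(): ensure a dict obj, stash state
    ctx.ensure_object(dict)
    ctx.obj['VERBOSITY'] = verbosity

@click.command('hello')
def hello():
    click.echo('hello from iottb')

# Subcommands are attached to the group after definition, as in main.py
cli.add_command(hello)

# CliRunner lets the wiring be exercised without a real shell
result = CliRunner().invoke(cli, ['hello'])
```

Invoking the group with a subcommand name runs the group callback first (building the shared context), then dispatches to the registered command.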
0  code/iottb-project/iottb/models/__init__.py  Normal file
6  code/iottb-project/iottb/models/database.py  Normal file
@@ -0,0 +1,6 @@
class Database:

    def __init__(self, name, path):
        self.name = name
        self.path = path
        self.device_list = []  # List of the canonical names of devices registered in this database
44  code/iottb-project/iottb/models/device_metadata.py  Normal file
@@ -0,0 +1,44 @@
import logging
import uuid
from datetime import datetime

import click

from iottb.utils.string_processing import make_canonical_name

logger = logging.getLogger(__name__)


class DeviceMetadata:
    def __init__(self, device_name, description="", model="", manufacturer="", firmware_version="", device_type="",
                 supported_interfaces="", companion_applications="", save_to_file=None):
        self.device_id = str(uuid.uuid4())
        self.device_name = device_name
        cn, aliases = make_canonical_name(device_name)
        logger.debug(f'cn, aliases = {cn}, {str(aliases)}')
        self.aliases = aliases
        self.canonical_name = cn
        self.date_added = datetime.now().isoformat()
        self.description = description
        self.model = model
        self.manufacturer = manufacturer
        self.current_firmware_version = firmware_version
        self.device_type = device_type
        self.supported_interfaces = supported_interfaces
        self.companion_applications = companion_applications
        self.last_metadata_update = datetime.now().isoformat()
        if save_to_file is not None:
            click.echo('TODO: Implement saving config to file after creation!')

    def add_alias(self, alias: str = ""):
        if alias == "":
            return
        self.aliases.append(alias)

    def get_canonical_name(self):
        return self.canonical_name

    def print_attributes(self):
        print(f'Printing attribute value pairs in {__name__}')
        for attr, value in self.__dict__.items():
            print(f'{attr}: {value}')
124  code/iottb-project/iottb/models/iottb_config.py  Normal file
@@ -0,0 +1,124 @@
import json
from pathlib import Path

from iottb import definitions
import logging

logger = logging.getLogger(__name__)

DB_NAME = definitions.DB_NAME


class IottbConfig:
    """Class to handle testbed configuration.

    TODO: Add instead of overwrite database locations when initializing if a location with a valid db
    exists.
    """

    @staticmethod
    def warn():
        logger.warning(f'DatabaseLocations are DatabaseLocationMap in the class {__name__}')

    def __init__(self, cfg_file=definitions.CFG_FILE_PATH):
        logger.info('Initializing Config object')
        IottbConfig.warn()
        self.cfg_file = Path(cfg_file)
        self.default_database = None
        self.default_db_location = None
        self.db_path_dict = dict()
        self.load_config()

    def create_default_config(self):
        """Create the default iottb config file."""
        logger.info(f'Creating default config file at {self.cfg_file}')
        self.default_database = DB_NAME
        self.default_db_location = str(Path.home())
        self.db_path_dict = {
            DB_NAME: self.default_db_location
        }

        defaults = {
            'DefaultDatabase': self.default_database,
            'DefaultDatabasePath': self.default_db_location,
            'DatabaseLocations': self.db_path_dict
        }

        try:
            self.cfg_file.parent.mkdir(parents=True, exist_ok=True)
            with self.cfg_file.open('w') as config_file:
                json.dump(defaults, config_file, indent=4)
        except IOError as e:
            logger.error(f"Failed to create default configuration file at {self.cfg_file}: {e}")
            raise RuntimeError(f"Failed to create configuration file: {e}") from e

    def load_config(self):
        """Load, or create with defaults, the configuration at the given file path."""
        logger.info('Loading configuration file')
        if not self.cfg_file.is_file():
            logger.info('Config file does not exist.')
            self.create_default_config()
        else:
            logger.info('Config file exists, opening.')
            with self.cfg_file.open('r') as config_file:
                data = json.load(config_file)
            self.default_database = data.get('DefaultDatabase')
            self.default_db_location = data.get('DefaultDatabasePath')
            self.db_path_dict = data.get('DatabaseLocations', {})

    def save_config(self):
        """Save the current configuration to the config file."""
        data = {
            'DefaultDatabase': self.default_database,
            'DefaultDatabasePath': self.default_db_location,
            'DatabaseLocations': self.db_path_dict
        }
        try:
            with self.cfg_file.open('w') as config_file:
                json.dump(data, config_file, indent=4)
        except IOError as e:
            logger.error(f"Failed to save configuration file at {self.cfg_file}: {e}")
            raise RuntimeError(f"Failed to save configuration file: {e}") from e

    def set_default_database(self, name, path):
        """Set the default database and its path."""
        self.default_database = name
        self.default_db_location = path
        self.db_path_dict[name] = path

    def get_default_database_location(self):
        return self.default_db_location

    def get_default_database(self):
        return self.default_database

    def get_database_location(self, name):
        """Get the location of a specific database."""
        return self.db_path_dict.get(name)

    def set_database_location(self, name, path):
        """Set the location for a database."""
        logger.debug(f'Type of "path" parameter {type(path)}')
        logger.debug(f'String value of "path" parameter {str(path)}')
        logger.debug(f'Type of "name" parameter {type(name)}')
        logger.debug(f'String value of "name" parameter {str(name)}')
        path = Path(path)
        name = Path(name)
        logger.debug(f'path:name = {path}:{name}')
        if path.name == name.name:
            path = path.parent
        self.db_path_dict[str(name)] = str(path)

    def get_known_databases(self):
        """Get the set of known databases."""
        logger.info('Getting known databases.')
        return self.db_path_dict.keys()

    def get_known_database_paths(self):
        """Get the paths of all known databases."""
        logger.info('Getting known database paths.')
        return self.db_path_dict.values()

    def get_full_default_path(self):
        return Path(self.default_db_location) / self.default_database
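`IottbConfig` above persists exactly three top-level JSON keys. A minimal round-trip sketch of that on-disk shape (key names taken from the class; a temporary file stands in for `iottb.cfg`):

```python
import json
import tempfile
from pathlib import Path

# The shape create_default_config() writes: default database name, its
# location, and the name -> location map.
defaults = {
    'DefaultDatabase': 'iottb.db',
    'DefaultDatabasePath': str(Path.home()),
    'DatabaseLocations': {'iottb.db': str(Path.home())},
}

with tempfile.TemporaryDirectory() as tmp:
    cfg_file = Path(tmp) / 'iottb.cfg'
    # save_config(): dump the three keys as indented JSON
    cfg_file.write_text(json.dumps(defaults, indent=4))
    # load_config(): read back with .get() so missing keys degrade gracefully
    data = json.loads(cfg_file.read_text())
    default_db = data.get('DefaultDatabase')
    locations = data.get('DatabaseLocations', {})
```

Because `load_config()` uses `.get()`, a hand-edited file missing a key yields `None` (or `{}` for the map) instead of raising.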
4  code/iottb-project/iottb/models/sniff_metadata.py  Normal file
@@ -0,0 +1,4 @@
import logging

logger = logging.getLogger('iottb.sniff')  # Log with sniff subcommand
0  code/iottb-project/iottb/utils/__init__.py  Normal file
41  code/iottb-project/iottb/utils/logger_config.py  Normal file
@@ -0,0 +1,41 @@
import logging
import sys
from logging.handlers import RotatingFileHandler

from iottb import definitions
from iottb.definitions import MAX_VERBOSITY, CONSOLE_LOG_FORMATS, APP_NAME, LOGFILE_LOG_FORMAT

loglevel = definitions.LOGLEVEL


def setup_logging(verbosity, debug=False):
    """Set up the root logger for iottb."""
    log_level = loglevel
    handlers = []
    date_format = '%Y-%m-%d %H:%M:%S'
    if verbosity > 0:
        log_level = logging.WARNING
    if verbosity > MAX_VERBOSITY:
        verbosity = MAX_VERBOSITY
        log_level = logging.INFO
    assert verbosity <= MAX_VERBOSITY, f'Verbosity must be <= {MAX_VERBOSITY}'
    console_handler = logging.StreamHandler(sys.stdout)
    console_handler.setFormatter(logging.Formatter(CONSOLE_LOG_FORMATS[verbosity], datefmt=date_format))
    console_handler.setLevel(logging.DEBUG)  # can keep at DEBUG since the effective level is the root level
    handlers.append(console_handler)

    if debug:
        log_level = logging.DEBUG

    # Logfile logs INFO and above, no debug messages
    file_handler = RotatingFileHandler(f'{str(definitions.LOGDIR / APP_NAME)}.log', maxBytes=10240, backupCount=5)
    file_handler.setFormatter(logging.Formatter(LOGFILE_LOG_FORMAT[verbosity], datefmt=date_format))
    file_handler.setLevel(logging.INFO)

    # Finish root logger setup
    handlers.append(file_handler)
    # Force this config to be applied to the root logger
    logging.basicConfig(level=log_level, handlers=handlers, force=True)
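`setup_logging()` above installs its handlers on the root logger via `basicConfig(force=True)`, so every `logging.getLogger('iottb.…')` child inherits them. A minimal sketch of that pattern, with an in-memory stream standing in for `sys.stdout` so the output can be inspected:

```python
import io
import logging

# Stand-in for sys.stdout
stream = io.StringIO()
console = logging.StreamHandler(stream)
# CONSOLE_LOG_FORMATS[0] from definitions.py
console.setFormatter(logging.Formatter('%(levelname)s - %(message)s'))

# force=True replaces any handlers already on the root logger
logging.basicConfig(level=logging.INFO, handlers=[console], force=True)

# Child loggers propagate records up to the root handlers
logging.getLogger('iottb.demo').warning('low disk space')
output = stream.getvalue()
```

Since the handler's own level stays at the default, the root level alone decides what gets through, which is the rationale for the `setLevel(logging.DEBUG)` comment in the file.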
40  code/iottb-project/iottb/utils/string_processing.py  Normal file
@@ -0,0 +1,40 @@
import re
from iottb import definitions
import logging

logger = logging.getLogger(__name__)


def normalize_string(s, chars_to_replace=None, replacement=None, allow_unicode=False):
    pass  # TODO: not yet implemented


def make_canonical_name(name):
    """
    Normalize the device name to a canonical form:
    - Replace characters from the replacement set (spaces and most punctuation) with dashes.
    - Remove any remaining non-ASCII characters.
    - Convert to lowercase.
    - Keep only the first two dash-separated parts as the canonical name.
    Returns a (canonical_name, aliases) tuple.
    """
    aliases = [name]
    logger.info(f'Normalizing name {name}')

    # We first normalize
    chars_to_replace = definitions.REPLACEMENT_SET_CANONICAL_DEVICE_NAMES
    pattern = re.compile('|'.join(re.escape(char) for char in chars_to_replace))
    norm_name = pattern.sub('-', name)
    norm_name = re.sub(r'[^\x00-\x7F]+', '', norm_name)  # removes non-ASCII chars

    aliases.append(norm_name)
    # Lower case
    norm_name = norm_name.lower()
    aliases.append(norm_name)

    # The canonical name is only the first two parts of the resulting string
    parts = norm_name.split('-')
    canonical_name = '-'.join(parts[:2])
    aliases.append(canonical_name)

    logger.debug(f'Canonical name: {canonical_name}')
    logger.debug(f'Aliases: {aliases}')
    return canonical_name, list(set(aliases))
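The normalization steps of `make_canonical_name()` can be sketched standalone. This version inlines a reduced replacement set (the real function pulls `REPLACEMENT_SET_CANONICAL_DEVICE_NAMES` from `definitions`) but follows the same pipeline:

```python
import re

# Reduced stand-in for REPLACEMENT_SET_CANONICAL_DEVICE_NAMES
CHARS_TO_REPLACE = {' ', '_', ',', '!', '/'}

def make_canonical_name(name):
    aliases = [name]
    # Replace listed characters with dashes
    pattern = re.compile('|'.join(re.escape(c) for c in CHARS_TO_REPLACE))
    norm = pattern.sub('-', name)
    norm = re.sub(r'[^\x00-\x7F]+', '', norm)  # drop non-ASCII
    aliases.append(norm)
    norm = norm.lower()
    aliases.append(norm)
    # Canonical name keeps only the first two dash-separated parts
    canonical = '-'.join(norm.split('-')[:2])
    aliases.append(canonical)
    return canonical, list(set(aliases))

cn, aliases = make_canonical_name('Device Name With Spaces')
```

So `"Device Name With Spaces"` canonicalizes to `device-name`, while every intermediate form survives in the alias list for later lookup.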
42  code/iottb-project/iottb/utils/user_interaction.py  Normal file
@@ -0,0 +1,42 @@
# iottb/utils/user_interaction.py

import click
from iottb.definitions import TB_ECHO_STYLES
import sys
import os


def tb_echo2(msg: str, lvl='i', log=True):
    style = TB_ECHO_STYLES.get(lvl, {})
    click.secho('[IOTTB]', **style)
    click.secho(f'[IOTTB] \t {msg}', **style)


last_prefix = None


def tb_echo(msg: str, lvl='i', log=True):
    global last_prefix
    prefix = f'Testbed [{lvl.upper()}]\n'

    if last_prefix != prefix:
        click.secho(prefix, nl=False, **TB_ECHO_STYLES['header'])
        last_prefix = prefix

    click.secho(f' {msg}', **TB_ECHO_STYLES[lvl])


def main():
    tb_echo('Info message', 'i')
    tb_echo('Warning message', 'w')
    tb_echo('Error message', 'e')
    tb_echo('Success message', 's')


if __name__ == '__main__':
    # Hacky: make the package importable when this module is run directly
    current_dir = os.path.dirname(os.path.abspath(__file__))
    project_root = os.path.abspath(os.path.join(current_dir, '../../'))
    sys.path.insert(0, project_root)

    main()
103  code/iottb-project/poetry.lock  generated  Normal file
@@ -0,0 +1,103 @@
# This file is automatically @generated by Poetry 1.8.3 and should not be changed by hand.

[[package]]
name = "click"
version = "8.1.7"
description = "Composable command line interface toolkit"
optional = false
python-versions = ">=3.7"
files = [
    {file = "click-8.1.7-py3-none-any.whl", hash = "sha256:ae74fb96c20a0277a1d615f1e4d73c8414f5a98db8b799a7931d1582f3390c28"},
    {file = "click-8.1.7.tar.gz", hash = "sha256:ca9853ad459e787e2192211578cc907e7594e294c7ccc834310722b41b9ca6de"},
]

[package.dependencies]
colorama = {version = "*", markers = "platform_system == \"Windows\""}

[[package]]
name = "colorama"
version = "0.4.6"
description = "Cross-platform colored terminal text."
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,!=3.6.*,>=2.7"
files = [
    {file = "colorama-0.4.6-py2.py3-none-any.whl", hash = "sha256:4f1d9991f5acc0ca119f9d443620b77f9d6b33703e51011c16baf57afb285fc6"},
    {file = "colorama-0.4.6.tar.gz", hash = "sha256:08695f5cb7ed6e0531a20572697297273c47b8cae5a63ffc6d6ed5c201be6e44"},
]

[[package]]
name = "iniconfig"
version = "2.0.0"
description = "brain-dead simple config-ini parsing"
optional = false
python-versions = ">=3.7"
files = [
    {file = "iniconfig-2.0.0-py3-none-any.whl", hash = "sha256:b6a85871a79d2e3b22d2d1b94ac2824226a63c6b741c88f7ae975f18b6778374"},
    {file = "iniconfig-2.0.0.tar.gz", hash = "sha256:2d91e135bf72d31a410b17c16da610a82cb55f6b0477d1a902134b24a455b8b3"},
]

[[package]]
name = "packaging"
version = "24.1"
description = "Core utilities for Python packages"
optional = false
python-versions = ">=3.8"
files = [
    {file = "packaging-24.1-py3-none-any.whl", hash = "sha256:5b8f2217dbdbd2f7f384c41c628544e6d52f2d0f53c6d0c3ea61aa5d1d7ff124"},
    {file = "packaging-24.1.tar.gz", hash = "sha256:026ed72c8ed3fcce5bf8950572258698927fd1dbda10a5e981cdf0ac37f4f002"},
]

[[package]]
name = "pluggy"
version = "1.5.0"
description = "plugin and hook calling mechanisms for python"
optional = false
python-versions = ">=3.8"
files = [
    {file = "pluggy-1.5.0-py3-none-any.whl", hash = "sha256:44e1ad92c8ca002de6377e165f3e0f1be63266ab4d554740532335b9d75ea669"},
    {file = "pluggy-1.5.0.tar.gz", hash = "sha256:2cffa88e94fdc978c4c574f15f9e59b7f4201d439195c3715ca9e2486f1d0cf1"},
]

[package.extras]
dev = ["pre-commit", "tox"]
testing = ["pytest", "pytest-benchmark"]

[[package]]
name = "pytest"
version = "8.2.2"
description = "pytest: simple powerful testing with Python"
optional = false
python-versions = ">=3.8"
files = [
    {file = "pytest-8.2.2-py3-none-any.whl", hash = "sha256:c434598117762e2bd304e526244f67bf66bbd7b5d6cf22138be51ff661980343"},
    {file = "pytest-8.2.2.tar.gz", hash = "sha256:de4bb8104e201939ccdc688b27a89a7be2079b22e2bd2b07f806b6ba71117977"},
]

[package.dependencies]
colorama = {version = "*", markers = "sys_platform == \"win32\""}
iniconfig = "*"
packaging = "*"
pluggy = ">=1.5,<2.0"

[package.extras]
dev = ["argcomplete", "attrs (>=19.2)", "hypothesis (>=3.56)", "mock", "pygments (>=2.7.2)", "requests", "setuptools", "xmlschema"]

[[package]]
name = "scapy"
version = "2.5.0"
description = "Scapy: interactive packet manipulation tool"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, <4"
files = [
    {file = "scapy-2.5.0.tar.gz", hash = "sha256:5b260c2b754fd8d409ba83ee7aee294ecdbb2c235f9f78fe90bc11cb6e5debc2"},
]

[package.extras]
basic = ["ipython"]
complete = ["cryptography (>=2.0)", "ipython", "matplotlib", "pyx"]
docs = ["sphinx (>=3.0.0)", "sphinx_rtd_theme (>=0.4.3)", "tox (>=3.0.0)"]

[metadata]
lock-version = "2.0"
python-versions = "^3.12"
content-hash = "10b2c268b0f10db15eab2cca3d2dc9dc25bc60f4b218ebf786fb780fa85557e0"
22  code/iottb-project/pyproject.toml  Normal file
@@ -0,0 +1,22 @@
[tool.poetry]
name = "iottb"
version = "0.1.0"
description = "IoT Testbed"
authors = ["Sebastian Lenzlinger <sebastian.lenzlinger@unibas.ch>"]
readme = "README.md"
package-mode = false

[tool.poetry.dependencies]
python = "^3.12"
click = "^8.1"
scapy = "^2.5"

[tool.poetry.scripts]
iottb = "iottb.main:cli"

[tool.poetry.group.test.dependencies]
pytest = "^8.2.2"

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
0  code/iottb-project/tests/__init__.py  Normal file
23  code/iottb-project/tests/test_make_canonical_name.py  Normal file
@@ -0,0 +1,23 @@
from iottb.utils.string_processing import make_canonical_name

import pytest


class TestMakeCanonicalName:

    def test_normalizes_name_with_spaces_to_dashes(self):
        name = "Device Name With Spaces"
        expected_canonical_name = "device-name"
        canonical_name, aliases = make_canonical_name(name)
        assert canonical_name == expected_canonical_name
        assert "device-name-with-spaces" in aliases
        assert "device-name" in aliases
        assert "Device Name With Spaces" in aliases

    def test_name_with_no_spaces_or_special_characters(self):
        name = "DeviceName123"
        expected_canonical_name = "devicename123"
        canonical_name, aliases = make_canonical_name(name)
        assert canonical_name == expected_canonical_name
        assert "DeviceName123" in aliases
        assert "devicename123" in aliases
0  code/iottb/__init__.py  Normal file
82  code/iottb/__main__.py  Normal file
@@ -0,0 +1,82 @@
#!/usr/bin/env python3
import argparse
from os import environ
from pathlib import Path

from iottb.logger import logger
from iottb.subcommands.add_device import setup_init_device_root_parser
from iottb.subcommands.capture import setup_capture_parser
from iottb.utils.tcpdump_utils import list_interfaces
from definitions import IOTTB_HOME_ABS, ReturnCodes


######################
# Argparse setup
######################
def setup_argparse():
    # Create top-level parser
    root_parser = argparse.ArgumentParser(prog='iottb')
    subparsers = root_parser.add_subparsers(title='subcommands', required=True, dest='command')

    # Shared options
    root_parser.add_argument('--verbose', '-v', action='count', default=0)
    # Configure subcommands
    setup_capture_parser(subparsers)
    setup_init_device_root_parser(subparsers)

    # Utility to list interfaces directly with iottb instead of relying on external tooling
    interfaces_parser = subparsers.add_parser('list-interfaces', aliases=['li', 'if'],
                                              help='List available network interfaces.')
    interfaces_parser.set_defaults(func=list_interfaces)

    return root_parser


def check_iottb_env():
    # This makes the option '--root-dir' obsolescent  # TODO How to streamline this?
    try:
        iottb_home = environ['IOTTB_HOME']  # TODO WARN implicit declaration of env var name!
    except KeyError:
        logger.error(f"Environment variable 'IOTTB_HOME' is not set. "
                     f"Setting environment variable 'IOTTB_HOME' to '{IOTTB_HOME_ABS}'")
        environ['IOTTB_HOME'] = str(IOTTB_HOME_ABS)  # environ values must be strings
    finally:
        if not Path(IOTTB_HOME_ABS).exists():
            print(f'"{IOTTB_HOME_ABS}" does not exist.')
            response = input('Do you want to create it now? [y/N]')
            logger.debug(f'response: {response}')
            if response.lower() != 'y':
                logger.debug(f'Not creating "{environ["IOTTB_HOME"]}"')
                print("Aborting execution...")
                return ReturnCodes.ABORTED
            else:
                print(f'Creating "{environ["IOTTB_HOME"]}"')
                Path(IOTTB_HOME_ABS).mkdir(parents=True,
                                           exist_ok=False)  # Should always work since in the 'not exist' code path
                return ReturnCodes.SUCCESS
        logger.info(f'"{IOTTB_HOME_ABS}" exists.')
        # TODO: Check that it is a valid iottb dir, or can we say it is valid by definition?
        return ReturnCodes.SUCCESS


def main():
    if check_iottb_env() != ReturnCodes.SUCCESS:
        exit(ReturnCodes.ABORTED.value)
    parser = setup_argparse()
    args = parser.parse_args()
    print(args)
    if args.command:
        try:
            args.func(args)
        except KeyboardInterrupt:
            print('Received keyboard interrupt. Exiting...')
            exit(1)
        except Exception as e:
            print(f'Error: {e}')
    # create_capture_directory(args.device_name)


if __name__ == '__main__':
    main()
41  code/iottb/definitions.py  Normal file
@@ -0,0 +1,41 @@
from datetime import datetime
|
||||
from enum import Flag, unique, global_enum
|
||||
from pathlib import Path
|
||||
|
||||
|
||||
'''
|
||||
Defining IOTTB_HOME_ABS here implies that it be immutable.
|
||||
It is used here so that one could configure it.
|
||||
But after its used in __man__ this cannot be relied upon.
|
||||
'''
|
||||
IOTTB_HOME_ABS = Path().home() / 'IOTTB.db'
|
||||
|
||||
# TODO maybe wrap this into class to make it easier to pass along to different objects
|
||||
# But will need more refactoring
|
||||
DEVICE_METADATA_FILE = 'device_metadata.json'
|
||||
CAPTURE_METADATA_FILE = 'capture_metadata.json'
|
||||
TODAY_DATE_STRING = datetime.now().strftime('%d%b%Y').lower() # TODO convert to function in utils or so
|
||||
|
||||
CAPTURE_FOLDER_BASENAME = 'capture_###'
|
||||
|
||||
AFFIRMATIVE_USER_RESPONSE = {'yes', 'y', 'true', 'Y', 'Yes', 'YES'}
|
||||
NEGATIVE_USER_RESPONSE = {'no', 'n', 'N', 'No'}
|
||||
YES_DEFAULT = AFFIRMATIVE_USER_RESPONSE.union({'', ' '})
|
||||
NO_DEFAULT = NEGATIVE_USER_RESPONSE.union({'', ' '})
|
||||
|
||||
|
||||
@unique
|
||||
@global_enum
|
||||
class ReturnCodes(Flag):
|
||||
SUCCESS = 0
|
||||
ABORTED = 1
|
||||
FAILURE = 2
|
||||
UNKNOWN = 3
|
||||
FILE_NOT_FOUND = 4
|
||||
FILE_ALREADY_EXISTS = 5
|
||||
INVALID_ARGUMENT = 6
|
||||
INVALID_ARGUMENT_VALUE = 7
|
||||
|
||||
|
||||
def iottb_home_abs():
|
||||
return None
|
||||
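Since the return codes above end up passed to `exit()`, a standalone sketch (with a hypothetical `ExitCodes` enum mirroring `ReturnCodes`) of why an `IntEnum` is convenient for process exit codes:

```python
from enum import IntEnum


class ExitCodes(IntEnum):  # hypothetical, mirrors ReturnCodes
    SUCCESS = 0
    ABORTED = 1
    FAILURE = 2


# IntEnum members compare and convert like plain ints, so they can be
# passed straight to sys.exit() or returned from main() without casting.
assert ExitCodes.SUCCESS == 0
assert int(ExitCodes.ABORTED) == 1
print(ExitCodes.FAILURE.name, int(ExitCodes.FAILURE))
```

A `Flag` enum, by contrast, is meant for bitwise-combinable values, which is why plain sequential codes fit `IntEnum` better.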
28
code/iottb/logger.py
Normal file
@@ -0,0 +1,28 @@
import logging
import sys
from logging.handlers import RotatingFileHandler


def setup_logging():
    logger_obj = logging.getLogger('iottbLogger')
    logger_obj.setLevel(logging.DEBUG)

    # Without maxBytes/backupCount the handler never actually rotates
    file_handler = RotatingFileHandler('iottb.log', maxBytes=1_000_000, backupCount=3)
    console_handler = logging.StreamHandler(sys.stdout)

    file_handler.setLevel(logging.INFO)
    console_handler.setLevel(logging.DEBUG)

    file_fmt = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')
    console_fmt = logging.Formatter('%(asctime)s - %(levelname)s - %(filename)s:%(lineno)d - %(funcName)s - %(message)s')

    file_handler.setFormatter(file_fmt)
    console_handler.setFormatter(console_fmt)

    logger_obj.addHandler(file_handler)
    logger_obj.addHandler(console_handler)

    return logger_obj


logger = setup_logging()
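The two-handler pattern above (one level and formatter per destination) can be checked in isolation; a minimal sketch using an in-memory stream in place of the file and console handlers (`demoLogger` is an illustrative name, not part of the tool):

```python
import io
import logging

stream = io.StringIO()
log = logging.getLogger('demoLogger')
log.setLevel(logging.DEBUG)

# Handler-level filtering and formatting are independent of the logger itself
handler = logging.StreamHandler(stream)
handler.setLevel(logging.DEBUG)
handler.setFormatter(logging.Formatter('%(levelname)s - %(message)s'))
log.addHandler(handler)

log.debug('handler formatting works')
print(stream.getvalue().strip())  # DEBUG - handler formatting works
```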
0
code/iottb/models/__init__.py
Normal file
102
code/iottb/models/capture_metadata_model.py
Normal file
@@ -0,0 +1,102 @@
import json
import uuid
from datetime import datetime
from pathlib import Path
from typing import Optional

from iottb.definitions import ReturnCodes, CAPTURE_METADATA_FILE
from iottb.models.device_metadata_model import DeviceMetadata
from iottb.logger import logger


class CaptureMetadata:
    # Required Fields
    device_metadata: DeviceMetadata
    capture_id: str
    device_id: str
    capture_dir: Path
    capture_file: str
    capture_date: str

    # Statistics
    start_time: str
    stop_time: str

    # tcpdump
    packet_count: Optional[int]
    pcap_filter: str = ''
    tcpdump_command: str = ''
    interface: str = ''

    # Optional Fields
    device_ip_address: str = 'No IP Address set'
    device_mac_address: Optional[str] = None

    app: Optional[str] = None
    app_version: Optional[str] = None
    firmware_version: Optional[str] = None

    def __init__(self, device_metadata: DeviceMetadata, capture_dir: Path):
        logger.info(f'Creating CaptureMetadata model from DeviceMetadata: {device_metadata}')
        self.device_metadata = device_metadata
        # These used to be class-level lambda "defaults", which were never
        # called; generate them per instance instead.
        self.capture_id = str(uuid.uuid4())
        self.capture_date = datetime.now().strftime('%d-%m-%YT%H:%M:%S').lower()

        self.capture_dir = capture_dir
        assert capture_dir.is_dir(), f'Capture directory {capture_dir} does not exist'

    def build_capture_file_name(self):
        logger.info('Building capture file name')
        if self.app is None:
            logger.debug('No app specified')
            prefix = self.device_metadata.device_short_name
        else:
            logger.debug(f'App specified: {self.app}')
            assert str(self.app).strip() != '', f'app is not a valid name: {self.app}'
            prefix = self.app.lower().replace(' ', '_')
        # assert self.capture_dir is not None, f'{self.capture_dir} does not exist'
        filename = f'{prefix}_{self.capture_id}.pcap'
        logger.debug(f'Capture file name: {filename}')
        self.capture_file = filename

    def save_capture_metadata_to_json(self, file_path: Path = Path(CAPTURE_METADATA_FILE)):
        assert self.capture_dir.is_dir(), f'capture_dir is not a directory: {self.capture_dir}'
        if file_path.is_file():
            print(f'File {file_path} already exists, update instead.')
            return ReturnCodes.FILE_ALREADY_EXISTS
        metadata = self.to_json(indent=2)
        with file_path.open('w') as file:
            # to_json already returns a JSON string, so write it directly
            # (json.dump would double-encode it).
            file.write(metadata)
        return ReturnCodes.SUCCESS

    def to_json(self, indent=2):
        # TODO: Where to validate data?
        logger.info('Converting CaptureMetadata to JSON')
        data = {}

        # Fields of the CaptureMetadata class; if fields[key] is True, the field is required
        fields = {
            'capture_id': True,
            'device_id': True,
            'capture_dir': True,
            'capture_file': False,
            'capture_date': False,
            'start_time': True,
            'stop_time': True,
            'packet_count': False,
            'pcap_filter': False,
            'tcpdump_command': False,
            'interface': False,
            'device_ip_address': False,
            'device_mac_address': False,
            'app': False,
            'app_version': False,
            'firmware_version': False
        }

        for field, is_mandatory in fields.items():
            value = getattr(self, field, None)
            if value not in [None, ''] or is_mandatory:
                if value in [None, ''] and is_mandatory:
                    raise ValueError(f'Field {field} is required and cannot be empty.')
                data[field] = str(value) if not isinstance(value, str) else value
        logger.debug(f'Capture metadata: {data}')
        return json.dumps(data, indent=indent)
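The required/optional field loop in `to_json` above boils down to a small pattern; a standalone sketch with a hypothetical `Demo` object and field table (names here are illustrative, not the model's real fields):

```python
import json


def to_json(obj, fields, indent=2):
    """Serialize listed attributes; a True value marks the field as required."""
    data = {}
    for name, required in fields.items():
        value = getattr(obj, name, None)
        if value in (None, '') and required:
            raise ValueError(f'Field {name} is required and cannot be empty.')
        if value not in (None, ''):
            # Stringify non-string values so the result is always JSON-safe
            data[name] = value if isinstance(value, str) else str(value)
    return json.dumps(data, indent=indent)


class Demo:
    device_id = 'abc-123'
    app = None  # optional and unset, so it is omitted from the output


serialized = to_json(Demo(), {'device_id': True, 'app': False})
print(serialized)
```

Empty optional fields are simply dropped, while an empty required field raises before anything is written.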
111
code/iottb/models/device_metadata_model.py
Normal file
@@ -0,0 +1,111 @@
import json
import uuid
from datetime import datetime
from pathlib import Path
from typing import Optional, List

# iottb modules
from iottb.definitions import ReturnCodes, DEVICE_METADATA_FILE
from iottb.logger import logger
# 3rd party libs

IMMUTABLE_FIELDS = {'device_name', 'device_short_name', 'device_id', 'date_created'}


class DeviceMetadata:
    # Required fields
    device_name: str
    device_short_name: str
    device_id: str
    date_created: str

    device_root_path: Path
    # Optional Fields
    aliases: Optional[List[str]] = None
    device_type: Optional[str] = None
    device_serial_number: Optional[str] = None
    device_firmware_version: Optional[str] = None
    date_updated: Optional[str] = None

    capture_files: Optional[List[str]] = []

    def __init__(self, device_name: str, device_root_path: Path):
        self.device_name = device_name
        self.device_short_name = device_name.lower().replace(' ', '_')
        self.device_id = str(uuid.uuid4())
        self.date_created = datetime.now().strftime('%d-%m-%YT%H:%M:%S').lower()
        self.device_root_path = device_root_path
        if not self.device_root_path or not self.device_root_path.is_dir():
            logger.error(f'Invalid device root path: {device_root_path}')
            raise ValueError(f'Invalid device root path: {device_root_path}')
        logger.debug(f'Device name: {device_name}')
        logger.debug(f'Device short_name: {self.device_short_name}')
        logger.debug(f'Device root dir: {device_root_path}')
        logger.info(f'Initialized DeviceMetadata model: {device_name}')

    @classmethod
    def load_from_json(cls, device_file_path: Path):
        logger.info(f'Loading DeviceMetadata from JSON file: {device_file_path}')
        assert device_file_path.is_file(), f'{device_file_path} is not a file'
        assert device_file_path.name == DEVICE_METADATA_FILE, f'{device_file_path} is not a {DEVICE_METADATA_FILE}'

        with device_file_path.open('r') as file:
            metadata_json = json.load(file)
            metadata_model_obj = cls.from_json(metadata_json)
        return metadata_model_obj

    def save_to_json(self, file_path: Path):
        logger.info(f'Saving DeviceMetadata to JSON file: {file_path}')
        if file_path.is_file():
            print(f'File {file_path} already exists, update instead.')
            return ReturnCodes.FILE_ALREADY_EXISTS
        metadata = self.to_json(indent=2)
        with file_path.open('w') as file:
            # to_json already returns a JSON string; write it directly instead
            # of double-encoding it with json.dump.
            file.write(metadata)
        return ReturnCodes.SUCCESS

    @classmethod
    def from_json(cls, metadata_json):
        # NOTE: __init__ only accepts device_name and device_root_path, so extra
        # keys in the JSON (and a string root path) will fail here. TODO: fix.
        if isinstance(metadata_json, dict):
            return DeviceMetadata(**metadata_json)
        return None

    def to_json(self, indent=2):
        # TODO: at the moment an almost exact copy of CaptureMetadata.to_json
        data = {}

        fields = {
            'device_name': True,
            'device_short_name': True,
            'device_id': True,
            'date_created': True,
            'device_root_path': True,
            'aliases': False,
            'device_type': False,
            'device_serial_number': False,
            'device_firmware_version': False,
            'date_updated': False,
            'capture_files': False,
        }

        for field, is_mandatory in fields.items():
            value = getattr(self, field, None)
            if value not in [None, ''] or is_mandatory:
                if value in [None, ''] and is_mandatory:
                    logger.debug(f'Mandatory field {field}: {value}')
                    raise ValueError(f'Field {field} is required and cannot be empty.')
                data[field] = str(value) if not isinstance(value, str) else value
        logger.debug(f'Device metadata: {data}')
        return json.dumps(data, indent=indent)


def dir_contains_device_metadata(dir_path: Path):
    if not dir_path.is_dir():
        return False
    meta_file_path = dir_path / DEVICE_METADATA_FILE
    print(f'Device metadata file path {str(meta_file_path)}')
    return meta_file_path.is_file()
0
code/iottb/subcommands/__init__.py
Normal file
76
code/iottb/subcommands/add_device.py
Normal file
@@ -0,0 +1,76 @@
import logging
import pathlib

from iottb import definitions
from iottb.definitions import DEVICE_METADATA_FILE, ReturnCodes
from iottb.logger import logger
from iottb.models.device_metadata_model import DeviceMetadata

logger.setLevel(logging.INFO)  # Since module currently passes all tests


def setup_init_device_root_parser(subparsers):
    parser = subparsers.add_parser('add-device', aliases=['add-device-root', 'add'],
                                   help='Initialize a folder for a device.')
    parser.add_argument('--root_dir', type=pathlib.Path, default=pathlib.Path.cwd())
    group = parser.add_mutually_exclusive_group()
    group.add_argument('--guided', action='store_true', help='Guided setup', default=False)
    group.add_argument('--name', action='store', type=str, help='name of device')
    parser.set_defaults(func=handle_add)


def handle_add(args):
    logger.info(f'Add device handler called with args {args}')

    args.root_dir.mkdir(parents=True,
                        exist_ok=True)  # else metadata.save_to_json will fail TODO: unclear what to assume

    if args.guided:
        logger.debug('begin guided setup')
        metadata = guided_setup(args.root_dir)
        logger.debug('guided setup complete')
    else:
        logger.debug('Setup through passed args')
        if not args.name:
            logger.error('No device name specified with unguided setup.')
            return ReturnCodes.FAILURE
        metadata = DeviceMetadata(args.name, args.root_dir)

    file_path = args.root_dir / DEVICE_METADATA_FILE
    if file_path.exists():
        print('Directory already contains a metadata file. Aborting.')
        return ReturnCodes.ABORTED
    serialized_metadata = metadata.to_json()
    response = input(f'Confirm device metadata: {serialized_metadata} [y/N]')
    logger.debug(f'response: {response}')
    if response not in definitions.AFFIRMATIVE_USER_RESPONSE:
        print('Adding device aborted by user.')
        return ReturnCodes.ABORTED

    logger.debug(f'Device metadata file {file_path}')
    if metadata.save_to_json(file_path) == ReturnCodes.FILE_ALREADY_EXISTS:
        logger.error('File exists after checking, which should not happen.')
        return ReturnCodes.ABORTED

    print('Device metadata successfully created.')
    return ReturnCodes.SUCCESS


def configure_metadata():
    pass


def guided_setup(device_root) -> DeviceMetadata:
    logger.info('Guided setup')
    response = 'N'
    device_name = ''
    while response.upper() == 'N':
        device_name = input('Please enter name of device: ')
        if not device_name:
            print('Name cannot be empty')
            logger.warning('Name cannot be empty')
            continue  # keep asking until a non-empty name is given
        response = input(f'Confirm device name: {device_name} [y/N] ')
        logger.debug(f'Response is {response}')
    logger.debug(f'Device name is {device_name}')

    return DeviceMetadata(device_name, device_root)
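The subparser wiring above follows the standard argparse dispatch pattern: each subcommand registers a handler via `set_defaults(func=...)`, and the entry point calls `args.func(args)`. A minimal standalone sketch (the `add`/`--name` names mirror the subcommand; `handle_add` here is a stub, not the real handler):

```python
import argparse


def handle_add(args):
    # stand-in for the real handler
    print(f'adding device {args.name}')


parser = argparse.ArgumentParser(prog='iottb')
subparsers = parser.add_subparsers(dest='command')
add_parser = subparsers.add_parser('add', help='Add a device')
add_parser.add_argument('--name', required=True)
add_parser.set_defaults(func=handle_add)

# Parsing fills in args.func, so dispatch is a single indirect call
args = parser.parse_args(['add', '--name', 'iPhone 14'])
args.func(args)
```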
174
code/iottb/subcommands/capture.py
Normal file
@@ -0,0 +1,174 @@
import subprocess
from pathlib import Path

from iottb.definitions import *
from iottb.models.capture_metadata_model import CaptureMetadata
from iottb.models.device_metadata_model import DeviceMetadata, dir_contains_device_metadata
from iottb.utils.capture_utils import get_capture_src_folder, make_capture_src_folder
from iottb.utils.tcpdump_utils import check_installed


def setup_capture_parser(subparsers):
    parser = subparsers.add_parser('sniff', help='Sniff packets with tcpdump')
    # metadata args
    parser.add_argument('-a', '--ip-address', help='IP address of the device to sniff', dest='device_ip')
    # tcpdump args
    parser.add_argument('device_root', help='Root folder for device to sniff',
                        type=Path, default=Path.cwd())
    parser.add_argument('-s', '--safe', help='Ensure correct device root folder before sniffing', action='store_true')
    parser.add_argument('--app', help='Application name to sniff', dest='app_name', default=None)

    parser_sniff_tcpdump = parser.add_argument_group('tcpdump arguments')
    parser_sniff_tcpdump.add_argument('-i', '--interface', help='Interface to capture on.', dest='capture_interface',
                                      required=True)
    parser_sniff_tcpdump.add_argument('-I', '--monitor-mode', help='Put interface into monitor mode',
                                      action='store_true')
    parser_sniff_tcpdump.add_argument('-n', help='Deactivate name resolution. True by default.',
                                      action='store_true', dest='no_name_resolution')
    parser_sniff_tcpdump.add_argument('-#', '--number',
                                      help='Print packet number at beginning of line. True by default.',
                                      action='store_true')
    parser_sniff_tcpdump.add_argument('-e', help='Print link layer headers. True by default.',
                                      action='store_true', dest='print_link_layer')
    parser_sniff_tcpdump.add_argument('-t', action='count', default=0,
                                      help='Please see the tcpdump manual for details. Unused by default.')

    cap_size_group = parser.add_mutually_exclusive_group(required=False)
    cap_size_group.add_argument('-c', '--count', type=int, help='Number of packets to capture.', default=1000)
    cap_size_group.add_argument('--mins', type=int, help='Time in minutes to capture.', default=1)

    parser.set_defaults(func=handle_capture)


def cwd_is_device_root_dir() -> bool:
    device_metadata_file = Path.cwd() / DEVICE_METADATA_FILE
    return device_metadata_file.is_file()


def start_guided_device_root_dir_setup():
    assert False, 'Not implemented'


def handle_metadata():
    assert not cwd_is_device_root_dir()
    print(f'Unable to find {DEVICE_METADATA_FILE} in current working directory')
    print('You need to set up a device root directory before using this command')
    response = input('Would you like to be guided through the setup? [y/n]')
    if response.lower() == 'y':
        start_guided_device_root_dir_setup()
    else:
        print('\'iottb init-device-root --help\' for more information.')
        exit(ReturnCodes.ABORTED)
    # device_id = handle_capture_metadata()
    return ReturnCodes.SUCCESS


def get_device_metadata_from_file(device_metadata_filename: Path) -> DeviceMetadata:
    assert device_metadata_filename.is_file(), f'Device metadata file "{device_metadata_filename}" does not exist'
    device_metadata = DeviceMetadata.load_from_json(device_metadata_filename)
    return device_metadata


def run_tcpdump(cmd):
    # TODO: Maybe specify files for stdout and stderr
    try:
        # check=True raises CalledProcessError on a non-zero exit code,
        # so a separate returncode check would be dead code.
        p = subprocess.run(cmd, capture_output=True, text=True, check=True)
        print(f'tcpdump ran successfully:\n{p.stdout}')
    except KeyboardInterrupt:
        pass


def handle_capture(args):
    if not check_installed():
        print('Please install tcpdump first')
        exit(ReturnCodes.ABORTED)
    assert args.device_root is not None, 'Device root directory is required'
    # get device metadata; asserting its existence up front would make the
    # fallback branches below unreachable, so decide per branch instead
    if args.safe and not dir_contains_device_metadata(args.device_root):
        print(f'Supplied folder contains no device metadata. '
              f'Please set up a device root directory before using this command')
        exit(ReturnCodes.ABORTED)
    elif dir_contains_device_metadata(args.device_root):
        device_metadata_filename = args.device_root / DEVICE_METADATA_FILE
        device_data = DeviceMetadata.load_from_json(device_metadata_filename)
    else:
        name = input('Please enter a device name: ')
        args.device_root.mkdir(parents=True, exist_ok=True)
        device_data = DeviceMetadata(name, args.device_root)
    # start constructing environment for capture
    capture_dir = get_capture_src_folder(args.device_root)
    make_capture_src_folder(capture_dir)
    capture_metadata = CaptureMetadata(device_data, capture_dir)
    capture_metadata.device_id = device_data.device_id

    capture_metadata.interface = args.capture_interface
    cmd = ['sudo', 'tcpdump', '-i', args.capture_interface]
    cmd = build_tcpdump_args(args, cmd, capture_metadata)
    capture_metadata.tcpdump_command = ' '.join(cmd)

    print('Executing: ' + ' '.join(cmd))

    # run capture
    try:
        start_time = datetime.now().strftime('%H:%M:%S')
        run_tcpdump(cmd)
        stop_time = datetime.now().strftime('%H:%M:%S')
        capture_metadata.start_time = start_time
        capture_metadata.stop_time = stop_time
    except KeyboardInterrupt:
        print('Received keyboard interrupt.')
        exit(ReturnCodes.ABORTED)
    except subprocess.CalledProcessError as e:
        print(f'Failed to capture packets: {e}')
        exit(ReturnCodes.FAILURE)
    except Exception as e:
        print(f'Failed to capture packets: {e}')
        exit(ReturnCodes.FAILURE)

    return ReturnCodes.SUCCESS


def build_tcpdump_args(args, cmd, capture_metadata: CaptureMetadata):
    if args.monitor_mode:
        cmd.append('-I')
    if args.no_name_resolution:
        cmd.append('-n')
    if args.number:
        cmd.append('-#')
    if args.print_link_layer:
        cmd.append('-e')

    if args.count:
        cmd.append('-c')
        cmd.append(str(args.count))
    elif args.mins:
        assert False, 'Unimplemented option'

    if args.app_name is not None:
        capture_metadata.app = args.app_name

    capture_metadata.build_capture_file_name()
    cmd.append('-w')
    cmd.append(capture_metadata.capture_file)  # TODO: write into capture_dir rather than the cwd

    if args.safe:
        cmd.append(f'host {args.device_ip}')  # if not specified, filter 'any' implied by tcpdump
        capture_metadata.device_ip_address = args.device_ip

    return cmd


# def capture_file_cmd(args, cmd, capture_dir, capture_metadata: CaptureMetadata):
#     capture_file_prefix = capture_metadata.get_device_metadata().get_device_short_name()
#     if args.app_name is not None:
#         capture_file_prefix = args.app_name
#         capture_metadata.set_app(args.app_name)
#     capfile_name = capture_file_prefix + '_' + str(capture_metadata.get_capture_id()) + '.pcap'
#     capture_metadata.set_capture_file(capfile_name)
#     capfile_abs_path = capture_dir / capfile_name
#     capture_metadata.set_capture_file(capfile_name)
#     cmd.append('-w')
#     cmd.append(str(capfile_abs_path))
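The command assembly in `build_tcpdump_args` is plain conditional argv building; a standalone sketch of the pattern (`build_cmd` and its parameters are illustrative names, not the tool's API):

```python
def build_cmd(interface, count=None, monitor=False, host_ip=None):
    """Assemble a tcpdump argv list from optional flags."""
    cmd = ['tcpdump', '-i', interface]
    if monitor:
        cmd.append('-I')           # monitor mode
    if count is not None:
        cmd += ['-c', str(count)]  # stop after N packets
    if host_ip is not None:
        cmd += ['host', host_ip]   # BPF filter limiting capture to one host
    return cmd


print(build_cmd('wlan0', count=10, host_ip='192.168.1.42'))
# ['tcpdump', '-i', 'wlan0', '-c', '10', 'host', '192.168.1.42']
```

Keeping the command as a list (rather than a single string) lets `subprocess.run` execute it without shell quoting pitfalls.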
0
code/iottb/utils/__init__.py
Normal file
44
code/iottb/utils/capture_utils.py
Normal file
@@ -0,0 +1,44 @@
import uuid
from pathlib import Path
from iottb.models.device_metadata_model import dir_contains_device_metadata
from iottb.utils.utils import get_iso_date


def get_capture_uuid():
    return str(uuid.uuid4())


def get_capture_date_folder(device_root: Path):
    today_iso = get_iso_date()
    today_folder = device_root / today_iso
    if dir_contains_device_metadata(device_root):
        if not today_folder.is_dir():
            try:
                today_folder.mkdir()
            except FileExistsError:
                print(f'Folder {today_folder} already exists')
        return today_folder
    raise FileNotFoundError(f'Given path {device_root} is not a device root directory')


def get_capture_src_folder(device_folder: Path):
    assert device_folder.is_dir(), f'Given path {device_folder} is not a folder'
    today_iso = get_iso_date()
    max_sequence_number = 0  # so the first folder of the day gets number 001
    for d in device_folder.iterdir():
        if d.is_dir() and d.name.startswith(f'{today_iso}_capture_'):
            num = int(d.name.split('_')[2])
            max_sequence_number = max(max_sequence_number, num)

    next_sequence_number = max_sequence_number + 1
    return device_folder.joinpath(f'{today_iso}_capture_{next_sequence_number:03}')


def make_capture_src_folder(capture_src_folder: Path):
    try:
        capture_src_folder.mkdir()
    except FileExistsError:
        print(f'Folder {capture_src_folder} already exists')
    return capture_src_folder
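The `<date>_capture_<NNN>` folder naming used above can be exercised standalone; a sketch using a throwaway temp directory (the helper name `next_capture_folder` is illustrative):

```python
import tempfile
from pathlib import Path


def next_capture_folder(root: Path, date_str: str) -> Path:
    """Return the next zero-padded capture folder for the given date."""
    highest = 0
    for d in root.iterdir():
        if d.is_dir() and d.name.startswith(f'{date_str}_capture_'):
            # name layout: <date>_capture_<NNN>, so the number is field 2
            highest = max(highest, int(d.name.split('_')[2]))
    return root / f'{date_str}_capture_{highest + 1:03}'


root = Path(tempfile.mkdtemp())
(root / '2024-05-01_capture_001').mkdir()
print(next_capture_folder(root, '2024-05-01').name)  # 2024-05-01_capture_002
```

Zero-padding via `:03` keeps the folders lexically sortable in directory listings.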
41
code/iottb/utils/tcpdump_utils.py
Normal file
@@ -0,0 +1,41 @@
import ipaddress
import shutil
import subprocess
from typing import Optional, Tuple


def check_installed() -> bool:
    """Check if tcpdump is installed and available on the system path."""
    return shutil.which('tcpdump') is not None


def ensure_installed():
    """Ensure that tcpdump is installed, raise an error if not."""
    if not check_installed():
        raise RuntimeError('tcpdump is not installed. Please install it to continue.')


def list_interfaces() -> str:
    """List available network interfaces using tcpdump."""
    ensure_installed()
    try:
        result = subprocess.run(['tcpdump', '--list-interfaces'], capture_output=True, text=True, check=True)
        return result.stdout
    except subprocess.CalledProcessError as e:
        print(f'Failed to list interfaces: {e}')
        return ''


def is_valid_ipv4(ip: str) -> bool:
    try:
        ipaddress.IPv4Address(ip)
        return True
    except ValueError:
        return False


def str_to_ipv4(ip: str) -> Tuple[bool, Optional[ipaddress.IPv4Address]]:
    try:
        address = ipaddress.IPv4Address(ip)
        return True, address
    except ipaddress.AddressValueError:
        return False, None
18
code/iottb/utils/utils.py
Normal file
@@ -0,0 +1,18 @@
import uuid
from datetime import datetime
from pathlib import Path


def get_iso_date():
    return datetime.now().strftime('%Y-%m-%d')


def subfolder_exists(parent: Path, child: str):
    return parent.joinpath(child).exists()


def generate_unique_string_with_prefix(prefix: str):
    return prefix + '_' + str(uuid.uuid4())
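`generate_unique_string_with_prefix` above relies on the fixed shape of a UUID4 string; a quick standalone sketch of that property:

```python
import uuid


def generate_unique_string_with_prefix(prefix: str) -> str:
    return prefix + '_' + str(uuid.uuid4())


name = generate_unique_string_with_prefix('capture')
# str(uuid.uuid4()) is always 36 characters (32 hex digits + 4 dashes),
# so the total length is fixed while the suffix stays collision-resistant.
print(name)
```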
6
code/misc/dnsmasq.conf
Normal file
@@ -0,0 +1,6 @@
interface=wlp0s20f0u1
dhcp-range=192.168.1.2,192.168.1.250,12h
# Gateway
dhcp-option=3,192.168.1.1
# DNS server address
dhcp-option=6,192.168.1.1
6
code/misc/enable-forwarding.sh
Executable file
@@ -0,0 +1,6 @@
#!/bin/sh
# Run as root
#

sysctl -w net.ipv4.conf.all.forwarding=1
sysctl -w net.ipv6.conf.all.forwarding=1
8
code/misc/hostapd.conf
Normal file
@@ -0,0 +1,8 @@
interface=wlp0s20f0u1
driver=nl80211
ssid=t3u
hw_mode=g
channel=11
macaddr_acl=0
auth_algs=1
ignore_broadcast_ssid=0
12
code/misc/hostapd.conf.bak
Normal file
@@ -0,0 +1,12 @@
interface=wlp0s20f0u1
driver=nl80211
ssid=t3u
hw_mode=g
channel=11
macaddr_acl=0
auth_algs=1
ignore_broadcast_ssid=0
wpa=2
wpa_passphrase=11help22help33
wpa_key_mgmt=WPA-PSK
rsn_pairwise=CCMP
35
code/misc/initSwAP
Executable file
@@ -0,0 +1,35 @@
#!/bin/bash
# DISCLAIMER! THIS CODE HAS BEEN TAKEN FROM:
# https://nims11.wordpress.com/2012/04/27/hostapd-the-linux-way-to-create-virtual-wifi-access-point/
# Usage: ./initSoftAP <wifi-interface> <uplink-interface>
########### Initial wifi interface configuration #############
ip link set $1 down
ip addr flush dev $1
ip link set $1 up
ip addr add 10.0.0.1/24 dev $1

# If you still use ifconfig for some reason, replace the above lines with the following
# ifconfig $1 up 10.0.0.1 netmask 255.255.255.0
sleep 2
###########

########### Start dnsmasq ##########
if [ -z "$(ps -e | grep dnsmasq)" ]
then
    dnsmasq
fi
###########
########### Enable NAT ############
iptables -t nat -A POSTROUTING -o $2 -j MASQUERADE
iptables -A FORWARD -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -i $1 -o $2 -j ACCEPT

#Thanks to lorenzo
#Uncomment the line below if facing problems while sharing PPPoE, see lorenzo's comment for more details
#iptables -I FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu

sysctl -w net.ipv4.ip_forward=1
###########
########## Start hostapd ###########
hostapd $PWD/hostapd.conf ## TODO! either put config in normal place
#killall dnsmasq
36
code/misc/initSwAP_nftables
Executable file
@@ -0,0 +1,36 @@
#!/bin/bash
# DISCLAIMER! THIS CODE HAS BEEN TAKEN FROM:
# https://nims11.wordpress.com/2012/04/27/hostapd-the-linux-way-to-create-virtual-wifi-access-point/
# Usage: ./initSoftAP <wifi-interface> <uplink-interface>
########### Initial wifi interface configuration #############
ip link set $1 down
ip addr flush dev $1
ip link set $1 up
ip addr add 10.0.0.1/24 dev $1

# If you still use ifconfig for some reason, replace the above lines with the following
# ifconfig $1 up 10.0.0.1 netmask 255.255.255.0
sleep 2
###########

########### Start dnsmasq ##########
if [ -z "$(ps -e | grep dnsmasq)" ]
then
    dnsmasq
fi
###########
########### Enable NAT ############
nft add table nat
nft -- add chain nat prerouting { type nat hook prerouting priority -100 \; }
nft add chain nat postrouting { type nat hook postrouting priority 100 \; }
# NOTE: hardcodes the uplink interface, unlike the iptables variant which uses $2
nft add rule nat postrouting oifname wlp44s0 masquerade

#Thanks to lorenzo
#Uncomment the line below if facing problems while sharing PPPoE, see lorenzo's comment for more details
#iptables -I FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu

sysctl -w net.ipv4.ip_forward=1
###########
########## Start hostapd ###########
hostapd $PWD/hostapd.conf ## TODO! either put config in normal place
#killall dnsmasq
22
code/misc/make_ap.sh
Executable file
@@ -0,0 +1,22 @@
#!/bin/env bash

TYPE="wifi"
IFNAME="wlp0s20f0u1"
CONNAME="T3UminiConn"
SSID="T3Umini"
BAND="bg"
CHAN=1
KMGMT="wpa-psk"
PSK=11223344

nmcli con add type wifi ifname wlp0s20f0u1 mode ap con-name WIFI_AP_TEST ssid MY_AP_TEST &&
nmcli con modify WIFI_AP_TEST 802-11-wireless.band bg &&
nmcli con modify WIFI_AP_TEST 802-11-wireless.channel 1 &&
nmcli con modify WIFI_AP_TEST 802-11-wireless-security.key-mgmt wpa-psk &&
nmcli con modify WIFI_AP_TEST 802-11-wireless-security.pairwise ccmp &&
nmcli con modify WIFI_AP_TEST 802-11-wireless-security.psk 11223344 &&
nmcli con modify WIFI_AP_TEST ipv4.method shared && nmcli con up WIFI_AP_TEST

# Not used for Apple devices:
# nmcli con modify WIFI_AP_TEST 802-11-wireless-security.proto rsn &&
# nmcli con modify WIFI_AP_TEST 802-11-wireless-security.group ccmp
0
code/tests/__init__.py
Normal file
0
code/tests/fixtures/__init__.py
vendored
Normal file
15
code/tests/fixtures/shared_fixtures.py
vendored
Normal file
@@ -0,0 +1,15 @@
import pytest
import tempfile
from pathlib import Path


@pytest.fixture(scope='session')
def tmp_dir():
    with tempfile.TemporaryDirectory() as tmp_dir:
        yield Path(tmp_dir)


@pytest.fixture
def mock_device_metadata_json_(tmp_dir):
    with tempfile.TemporaryDirectory() as tmp_dir:
        pass
0
code/tests/models/test_capture_metadata_model.py
Normal file
0
code/tests/models/test_device_metadata_model.py
Normal file
47
code/tests/subcommands/test_add_device.py
Normal file
47
code/tests/subcommands/test_add_device.py
Normal file
@ -0,0 +1,47 @@
import sys
import unittest
from io import StringIO
from unittest.mock import patch, MagicMock
from pathlib import Path
from iottb.definitions import DEVICE_METADATA_FILE
import shutil
from iottb.__main__ import main


class TestDeviceMetadataFileCreation(unittest.TestCase):
    def setUp(self):
        self.test_dir = Path('/tmp/iottbtest/test_add_device')
        self.test_dir.mkdir(parents=True, exist_ok=True)
        # self.captured_output = StringIO()
        # sys.stdout = self.captured_output

    def tearDown(self):
        # shutil.rmtree(str(self.test_dir)) would also handle nested directories
        for item in self.test_dir.iterdir():
            if item.is_dir():
                item.rmdir()  # only works while subdirectories stay empty
            else:
                item.unlink()
        self.test_dir.rmdir()
        # sys.stdout = sys.__stdout__

    @patch('builtins.input', side_effect=['iPhone 14', 'y', 'y'])
    def test_guided_device_setup(self, mock_input):
        sys.argv = ['__main__.py', 'add', '--root_dir', str(self.test_dir), '--guided']
        main()
        expected_file = self.test_dir / DEVICE_METADATA_FILE
        self.assertTrue(expected_file.exists(), f'Expected file not created: {expected_file}')

    @patch('builtins.input', side_effect=['y'])  # mock input needed, else the prompt blocks
    def test_device_setup(self, mock_input):
        sys.argv = ['__main__.py', 'add', '--root_dir', str(self.test_dir), '--name', 'iPhone 14']
        main()
        expected_file = self.test_dir / DEVICE_METADATA_FILE
        self.assertTrue(expected_file.exists(), f'Expected file not created: {expected_file}')

    def test_add_when_file_exists(self):
        # TODO
        pass


if __name__ == '__main__':
    unittest.main()
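The manual `tearDown` loop above only clears a flat directory and fails as soon as a subdirectory contains files. A recursive variant using only `pathlib` might look like this (a sketch, not part of the original tests; `shutil.rmtree` does the same job in one call):

```python
from pathlib import Path
import tempfile


def rmtree_path(path: Path) -> None:
    """Recursively delete a directory tree using only pathlib."""
    for item in path.iterdir():
        if item.is_dir():
            rmtree_path(item)  # empty out subdirectories first
        else:
            item.unlink()
    path.rmdir()


# Usage: build a nested tree in a temp dir, then remove it.
base = Path(tempfile.mkdtemp()) / 'outer'
(base / 'inner').mkdir(parents=True)
(base / 'inner' / 'file.txt').write_text('x')
rmtree_path(base)
print(base.exists())  # → False
```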
2
code/tests/test_capture_metadata_model.py
Normal file
@ -0,0 +1,2 @@
def test_save_to_json():
    assert False  # placeholder: not yet implemented
0
code/tests/test_main.py
Normal file
2
code/tests/utils/test_capture_metadata_utils.py
Normal file
@ -0,0 +1,2 @@
0
code/tests/utils/test_device_metadata_utils.py
Normal file
0
code/tests/utils/test_tcpdump_utils.py
Normal file
0
data/.gitkeep
Normal file
20911
data/csv-16april24-iphone.csv
Normal file
File diff suppressed because it is too large
BIN
data/iphon-16-04-24-pcap-dump.pcap
Normal file
Binary file not shown.
BIN
data/iphone-seb-16-04-24-dump.pcapng
Normal file
Binary file not shown.
1013
data/mi-16april-filtered.csv
Normal file
File diff suppressed because it is too large
BIN
data/mi-26-april-24.pcap
Normal file
Binary file not shown.
BIN
data/mi-26-aprl-24.pcapng
Normal file
Binary file not shown.
0
literature/.gitkeep
Normal file
0
notes/.gitkeep
Normal file
34
notes/2024-04-28-meeting/IoTdb.md
Normal file
@ -0,0 +1,34 @@
IoT.db/
├── Device1/
│   ├── Rawdata/
│   │   ├── measurement#D1#1/
│   │   │   ├── capfile
│   │   │   └── meta
│   │   └── measurement#D1#2/
│   │       └── ...
│   ├── Experiments/
│   │   ├── exp1#D1/
│   │   │   └── files etc
│   │   └── exp2#D1/
│   │       └── ...
│   └── Device 1 (Fixed) metadata
├── Device2/
│   ├── Rawdata/
│   │   ├── measurement#D2#1/
│   │   │   ├── capfile
│   │   │   └── meta
│   │   └── ...
│   ├── Experiments/
│   │   ├── exp1#d2/
│   │   │   └── ...
│   │   └── ...
│   └── Device 2 fixed metadata
└── .../
    ├── .../
    │   ├── ..
    │   └── ..
    ├── .../
    │   ├── .../
    │   │   └── ...
    │   └── ...
    └── ...
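A per-device skeleton of this layout could be scaffolded programmatically, e.g. as follows (a sketch; the directory names mirror the tree above, and `scaffold_device` is a hypothetical helper, not part of the repo):

```python
from pathlib import Path
import tempfile

# Subdirectories each device gets, taken from the tree sketch.
DEVICE_SUBDIRS = ['Rawdata', 'Experiments']


def scaffold_device(root: Path, device: str) -> Path:
    """Create the per-device skeleton of the IoT.db layout."""
    dev_dir = root / device
    for sub in DEVICE_SUBDIRS:
        (dev_dir / sub).mkdir(parents=True, exist_ok=True)
    return dev_dir


root = Path(tempfile.mkdtemp()) / 'IoT.db'
for device in ['Device1', 'Device2']:
    scaffold_device(root, device)
print(sorted(p.name for p in (root / 'Device1').iterdir()))  # → ['Experiments', 'Rawdata']
```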
34
notes/2024-04-28-meeting/IoTdb.txt
Normal file
@ -0,0 +1,34 @@
IoT.db/
├── Device1/
│   ├── Rawdata/
│   │   ├── measurement#D1#1/
│   │   │   ├── capfile
│   │   │   └── meta
│   │   └── measurement#D1#2/
│   │       └── ...
│   ├── Experiments/
│   │   ├── exp1#D1/
│   │   │   └── files etc
│   │   └── exp2#D1/
│   │       └── ...
│   └── Device 1 (Fixed) metadata
├── Device2/
│   ├── Rawdata/
│   │   ├── measurement#D2#1/
│   │   │   ├── capfile
│   │   │   └── meta
│   │   └── ...
│   ├── Experiments/
│   │   ├── exp1#d2/
│   │   │   └── ...
│   │   └── ...
│   └── Device 2 fixed metadata
└── .../
    ├── .../
    │   ├── ..
    │   └── ..
    ├── .../
    │   ├── .../
    │   │   └── ...
    │   └── ...
    └── ...
51
notes/2024-04-28-meeting/IoTdb2_3.txt
Normal file
@ -0,0 +1,51 @@
Reasoning is that experiments might want data from measurements of multiple devices.

IoT.db2/
├── Devices/
│   ├── Dev1/
│   │   ├── devmeta
│   │   └── Measurements/
│   │       ├── m1/
│   │       │   ├── raw
│   │       │   ├── meta
│   │       │   └── spec
│   │       └── m2/
│   │           └── ...
│   ├── Dev2/
│   │   ├── devmeta
│   │   └── Measurements/
│   │       ├── m1/
│   │       │   ├── raw
│   │       │   ├── meta
│   │       │   └── spec
│   │       ├── m2/
│   │       │   └── ...
│   │       ├── m3/
│   │       │   └── ...
│   │       └── ...
│   └── Dev3/
│       └── ....
└── Experiments/ (Or projects? Or cleaned data)
    ├── E1/
    │   ├── involved measurements
    │   ├── filters/ feature extraction algo etc.
    │   └── etcetc...
    ├── E2/
    │   ├── .....
    │   ├── ..
    │   ├── ...
    │   └── ..
    └── ....

IoT.db3/
├── Measurements/
│   ├── m1/ (Specification of device in this substructure)
│   │   ├── follows from above
│   │   └── ...
│   ├── m2
│   └── ....
└── Experiments/
    ├── e1/
    │   ├── follows from above
    │   └── ...
    ├── e2
    └── ...
15
notes/2024-04-28-meeting/IoTdb4.txt
Normal file
@ -0,0 +1,15 @@
Like IoTdb but has no opinion on experiments

IoT.db4/
├── Dev1/
│   ├── Measurements (basically raw data)/
│   │   ├── m1/
│   │   │   └── ....
│   │   └── m2/
│   │       └── ....
│   └── Cleaned?/Features extracted?/Merged?/
│       └── -- Where to put clean data?
├── Dev2/
│   └── Measurements/
│       └── ...
└── Algos/Scripts?/
    └── ..
92
notes/2024-04-28-meeting/further considerations.md
Normal file
@ -0,0 +1,92 @@
# Testbed
- What is a testbed?
	- "[...] wissenschaftliche Plattform für Experimente" ("scientific platform for experiments"), German [Wikipedia](https://de.wikipedia.org/wiki/Testbed)
	- What is a "platform"?
- Example [ORBIT](https://www.orbit-lab.org/): testbed as a wireless network emulator (software, I guess) + computing resources. Essence of the offered service: a predictable environment. What is tested: applications and protocols.
- [APE](https://apetestbed.sourceforge.net/): "APE testbed is short for **Ad hoc Protocol Evaluation testbed**." But also ["What exactly is APE"](https://apetestbed.sourceforge.net/#What_exactly_is_APE): "There is no clear definition of what a testbed is or what it comprises. APE however, can be seen as containing two things:
	- An encapsulated execution environment, or more specifically, a small Linux distribution.
	- Tools for post testrun data analysis."
- [DES-Testbed](https://www.des-testbed.net), Freie Universität Berlin. Random assortment of sometimes empty(?!) posts to a sort of bulletin board.
## IoT Automation Testbed
#### From Abstract:
In this project, the student designs a testbed for the **automated analysis** of the **privacy implications** of IoT devices, paying particular attention to features that support reproducibility.
#### From Project description:
To study the privacy and security aspects of IoT devices **_systematically_** and **_reproducibly_**, we need an easy-to-use testbed that _automates_ the **_process of experimenting_** with **_IoT devices_**.

**Automation recipes**:
Automate important aspects of experiments, in particular:
- Data Collection
- Analysis (= Experiment in most places)

**FAIR data storage**
Making data
- Findable
- Accessible
- Interoperable
- Reusable
### Implications/Open questions
#### FAIR Data Storage
1. Who are the stakeholders? What is the scope of "FAIRness"?
	1. PersonalDB? --> [X], tiny scope, $\lnot$ FAIR almost by definition. Would only be a tool/suggestion on layout.
	2. ProjectDB? --> [X], no, probably a project _uses_ a testbed.
	3. Research Group --> Focuses on **F a IR**. Accessibility _per se_ not an issue. Findability -> by machine AND human. Interoperable --> specs may rely on local/uni/group idiosyncrasies.
	4. AcademicDB --> (Strict) subset of 3. Consider field-specific standards. Must start discerning between public/non-public parts of db/testbed. One may unwittingly leak private information: location, OS of capture host, usernames, absolute file paths etc. See [here](https://www.netresec.com/?page=Blog&month=2013-02&post=Forensics-of-Chinese-MITM-on-GitHub) and [pcapng.com](https://pcapng.com/) under "Metadata Block Types".
	5. Public DB --> (Strict) subset of 4.
2. Seems like something between 3. and 4. Some type of repository. Full-fledged DB? Probably unnecessary. Mix of text + something low-spec like sqlite? Could still be tracked by git, probably.
3. Interoperability $\cap$ Automation recipes --> Recipes built from and depending only on widely available, platform-independent tools.
4. Accessibility $\cap$ Autorec --> Built from and only depending on tools which are 1. widely available and 2. permissively licensed OR have an equivalent with a permissive license. Human side: documentation.
5. Reusable $\cap$ Autorec --> Modular tools, and accessible (license, etc.) dependencies (e.g. experiment-specific scripts).
6. Findable $\cap$ Autorec --> Must assume that a recipe is found and selected manually by a researcher.
7. Interoperable --> Collected data (measurements) across different devices must follow a schema which is meaningful for
#### Usage paths/Workflows:
Data Collection --> Deposit in FAIR repository
Primary Experiment --> Define spec. Write script/code --> Access FAIR repo for data. Possibly access FAIR repo for predefined scripts --> Where do results go? Results "repo"
Replication Experiment --> Choose experiment/benchmark script from testbed --> Execute --> Publish (produces a Replication Result, i.e. same "schema" as the primary experiment)
Replication Experiment Variant --> Choose experiment/benchmark, add additional processing and input --> run --> possibly publish
How to define the static vs dynamic aspects of an experiment?
Haven't even thought about encryption/decryption specifics....

But it could also go like this:
First design analysis/experiment --> Collect data --> data cleaned according to testbed scripts --> #TODO
Get a new device and want to perform some predefined tests --> first need to collect data
For _some_ device (unknown if data already exists) want to perform test _T_ --> run script with device spec as input -> script checks if data is already available; if not, perform data collection first -> run analysis on data --> publish results to the results/benchmark repo of the device; if it was a new device, open a new results branch for that device and publish initial results. _Primary Experiment_ with data collection.

Types of Experiments:
"Full Stack": Data Collection + Analysis
"Model Test": Data Access (+ Sampling) + Model (or complete workflow). Test subject: the model
"Replication Experiment": _secondary_ data collection + testbed model + quality criteria? Test subject: collection scheme + analysis model = result
"Exploratory Collection + Analysis": aka unsupervised #TODO
**Note**:
#TODO What types of metadata are of interest? Are metadata simple, minimal-compute features? Complicated extracted/computed features? Where do we draw the line?
#TODO Say for the same devices: when is data merged, when not? I.e. under what conditions can datasets automatically be enlarged? How is this tracked so as not to tamper with reproducibility?

### Reproducibility:
What are we trying to reproduce?
What are possible results from experiments/tests?
Types of artifacts:
Static:
Raw data.
Labeled data.
Computational/Instructive:
Supervised Training. Input: labeled data + learning algo. Output: model.
Model Applicability Test: Input: unlabeled data + model. Output: prediction/label
Feature Extraction: (raw, labeled?) data + extraction algo. Output: labeled dataset.
New Feature Test: labeled data + feature extraction algo + learning algo. Output: model + model verification -> usability of the new features... ( #todo this case exemplifies why we need modularity: we want to apply/compose a new "feature extraction algo", e.g. to all those devices where applicable, then train new models and verify the "goodness" of the new features per device/dataset etc.... )

### data collection and cleaning (and features):
How uniform is the schema of the data we want to collect across the IoT spectrum? Per device? Say two (possibly unrelated) datasets happen to share the same schema, can we just merge them, say, even if one set is from a VR headset and another from a roomba?
Is the schema always the same, e.g. (timestamp, src ip, dst ip, (mac, ports? or unused features), payload?, protocols?)?
If testbed data is uniform --> only "one" extraction algo and dataset schema = all relevant features
Alternatively, testbed data is heterogeneous --> feature extraction defines the interoperability/mergeability of datasets.

Training algo: flexible schema, output only usable on data with the same schema(?)
Model eval: schema fixed, eval data must have the correct schema

Say a project output is a model which retrieves privacy-relevant information from the network traffic of an IoT device. #TODO how to guarantee applicability to other devices? What are the needs in the aftermath? Apply the same model to other data? What if the raw data schemas match, but the labels are incompatible?

#todo schema <-> applicable privacy metric matching
11
notes/journal/2024-03-11-mon.md
Normal file
@ -0,0 +1,11 @@
### Completed:
- All devices unpacked except [[xiaomi tv stick]].
- [[ledvance led strip]] won't enter pairing mode.
- [[echodot]] is set up and works.
- [[mi 360 home security camera]] needs a microsd card.
## Plan for this week:
- Get microsd card
- MAINLY: Get the AP working or find another way to capture traffic.
## Misc.:
Much time lost resetting the router. [[ledvance led strip]] will only connect to 2.4GHz networks.
If the laptop is connected to the internet via ethernet, then I can make an AP, but the iPhone won't connect to it. But the IoT devices connect.
4
notes/journal/2024-03-12-tue.md
Normal file
@ -0,0 +1,4 @@
- Bought two USB WiFi adapters (completes [[TODO1]]):
	- tp-link AC1300 Archer T3U (Mini Wireless MU-MIMO USB Adapter).
	- tp-link AC1300 Archer T3U Plus (High Gain Wireless Dual Band USB Adapter)
12
notes/journal/2024-03-14-fri.md
Normal file
@ -0,0 +1,12 @@
Plan: Set up the WiFi adapter to capture the Amazon echodot.
Flow for setting up an Access Point:
1. Set up the Access Point
2. Configure routing/bridging or similar so the IoT device can access the internet.

Tried the [linux-wifi-hotspot](https://github.com/lakinduakash/linux-wifi-hotspot) repo. Running it makes the AP visible to the iPhone, but the issue is the IP address. Need to configure a dhcp server or manually assign an address.

Problem: the WiFi adapter in monitor mode sees nothing.
Neither adapter has a driver for modern macos.
The Archer T3U is using the rtw_8822bu driver from the kernel; this supports mac.

Decided to go down the hostapd route.
119
notes/journal/2024-03-19-tue.md
Normal file
@ -0,0 +1,119 @@
Example [hostapd.conf](http://w1.fi/cgit/hostap/plain/hostapd/hostapd.conf)
Simple article for a basic setup [here](https://medium.com/p/3c18760e6f7e)
The AP can be started and an iPhone manages to connect. Now I must 1. ensure WPA2 or WPA3 and 2. enable IP masquerading for the internet connection. Then I should finally be able to set up the devices properly and start sniffing traffic.

# 1st attempt AP setup
### Config files
File: `/etc/dnsmasq.d/dhcp-for-ap.conf`
Content:
```config
interface=wlp0s20f0u1
dhcp-range=10.0.0.3,10.0.0.20,12h
```
**BEWARE**: Must load the above into `/etc/dnsmasq.conf` with a line that goes `conf-file=/etc/dnsmasq.d/dhcp-for-ap.conf` or `conf-dir=/etc/dnsmasq.d/,*.conf`, see [here](https://wiki.archlinux.org/title/Dnsmasq#Configuration)
Other configs are in the `code/` directory.
## Used commands
See the `code/` dir, commit `devel@299912e`.
## Sanity Check
```bash
$ sudo hostapd ./hostapd.conf
# Output upon trying to connect with iPhone
wlp0s20f0u1: interface state UNINITIALIZED->ENABLED
wlp0s20f0u1: AP-ENABLED
wlp0s20f0u1: STA f2:10:60:95:28:05 IEEE 802.11: authenticated
wlp0s20f0u1: STA f2:10:60:95:28:05 IEEE 802.11: authenticated
wlp0s20f0u1: STA f2:10:60:95:28:05 IEEE 802.11: associated (aid 1)
wlp0s20f0u1: AP-STA-CONNECTED f2:10:60:95:28:05
wlp0s20f0u1: STA f2:10:60:95:28:05 RADIUS: starting accounting session 9C7F40AA0385E2B2
wlp0s20f0u1: STA f2:10:60:95:28:05 WPA: pairwise key handshake completed (RSN)
wlp0s20f0u1: EAPOL-4WAY-HS-COMPLETED f2:10:60:95:28:05
```
Connection established, but no internet, as expected.
## Test
*Input*
```bash
sudo ./initSwAP wlp
```
*Output*
```
net.ipv4.ip_forward = 1
wlp0s20f0u1: interface state UNINITIALIZED->ENABLED
wlp0s20f0u1: AP-ENABLED
wlp0s20f0u1: STA f2:10:60:95:28:05 IEEE 802.11: authenticated
wlp0s20f0u1: STA f2:10:60:95:28:05 IEEE 802.11: associated (aid 1)
wlp0s20f0u1: AP-STA-CONNECTED f2:10:60:95:28:05
wlp0s20f0u1: STA f2:10:60:95:28:05 RADIUS: starting accounting session C77A903F5D15F3B3
wlp0s20f0u1: STA f2:10:60:95:28:05 WPA: pairwise key handshake completed (RSN)
wlp0s20f0u1: EAPOL-4WAY-HS-COMPLETED f2:10:60:95:28:05
```
Unfortunately, still no internet connection.

## Analysis
Had forgotten to import the dhcp config file.
**Changes**: Add the dnsmasq dhcp config and change wpa=3 to wpa=2 s.t. only WPA2 is used -> Now the iPhone doesn't warn about security.
Unfortunately, still no internet connection can be established.

## Today's 2nd attempt at establishing an internet connection
__Remarks/Observations:__
- The iPhone connects to the AP. It receives IP address `169.254.196.21` with subnet mask `255.255.0.0`.
- That IP is in the reserved, non-routable link-local range -> thus it seems the iPhone did not get an address from the dhcp server.
- Could the firewall be the problem? TODO -> iptables for dns and dhcp
- Maybe need to set a static IP first etc., as mentioned [here](https://woshub.com/create-wi-fi-access-point-hotspot-linux/)
```bash
# nano /etc/network/interfaces
auto wlp0s20f0u1
iface wlp0s20f0u1 inet static
    address 10.10.0.1
    netmask 255.255.255.0
```
- `/etc/network/interfaces` doesn't exist on my machine...
### Some configs to remember for later
dnsmasq:
```
#interface=wlp0s20f0u1
listen-address=10.0.0.2
dhcp-range=10.0.0.3,10.0.0.20,12h
dhcp-option=3,192.168.1.1
dhcp-option=6,192.168.1.1
domain-needed
bogus-priv
filterwin2k
server=1.1.1.1
no-hosts
```
Maybe need to enable IPv6 forwarding too?
```
net.ipv4.ip_forward = 1
net.ipv4.conf.all.forwarding = 1
net.ipv6.conf.all.forwarding = 1
```
Flushing iptables: `iptables -F` flushes all chains (of the filter table, by default). For more, see [archwiki/iptables/Reset Rules](https://wiki.archlinux.org/title/Iptables#Resetting_rules)
- `sudo systemctl status iptables` says there is no such service unit!? -> Fedora uses [[firewalld]], which _is_ reported as running.........
#### Firewalld exploring
```bash
sudo firewall-cmd --get-active-zones
# Output:
# FedoraWorkstation (default)
#   interfaces: wlp44s0
```
### Steps taken after restarting with [[firewalld]]
1. Followed the steps in chapters 2.3.3 and 2.4 [here](https://wiki.archlinux.org/title/Internet_sharing#Enable_packet_forwarding). This should have enabled masquerading and set the ports to ACCEPT for dns and dhcp.
2. Firewalld is not powerful enough, it seems.
### nftables
* #TODO : What is the source of this info?!

Overview of a common configuration and packet flow:

A host acting as a simple firewall and gateway may define only a small number of nft chains, each matching a kernel hook:
- a prerouting chain, for all newly-arrived IP traffic
- an input chain, for traffic addressed to the local host itself
- an output chain, for traffic originating from the local host itself
- a forward chain, for packets the host is asked to simply pass from one network to another
- a postrouting chain, for all IP traffic leaving the firewall

For configuration convenience and by convention, we group the input, output, and forward chains into a filter table. Most rules in setups like this attach to the forward chain.

If NAT is required, we follow the convention of creating a nat table to hold the prerouting and postrouting chains. Source-NAT rules (where we rewrite the packet source) attach to the postrouting chain, and destination-NAT rules (where we rewrite the packet's destination) attach to the prerouting chain.

Packet flow is straightforward. Only one chain attaches to each hook. The first accept or drop rule a packet matches wins.
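The filter/nat convention described above might be written in nft syntax roughly like this (a sketch; `eth0` as the uplink and `wlp0s20f0u1` as the AP interface are assumptions, not taken from the notes):

```
table inet filter {
    chain forward {
        type filter hook forward priority 0; policy drop;
        # let AP clients out, and established replies back in
        iifname "wlp0s20f0u1" oifname "eth0" accept
        iifname "eth0" oifname "wlp0s20f0u1" ct state established,related accept
    }
}
table ip nat {
    chain postrouting {
        type nat hook postrouting priority srcnat; policy accept;
        # source-NAT (masquerade) everything leaving via the uplink
        oifname "eth0" masquerade
    }
}
```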
0
notes/journal/2024-03-24-tue.md
Normal file
10
notes/journal/2024-03-25-mon.md
Normal file
@ -0,0 +1,10 @@
First success using the Mac mini.
Could record some data from the Amazon echo.
Set up a guest network on the router without any security; this enabled some capture since no keys had to be configured or handshakes captured (which would be an issue without any channel control).
Issue: channel hopping -> missing a lot of traffic
To avoid channel hopping: somehow fix the channel on the router.

By leaving out any authentication/security config in hostapd.conf one can create an unsecured AP (on the USB WiFi card) on my Linux machine too. Having an open-auth AP seems fine for this use case.
In the end this seems to be the way. For experiments we want to record all traffic, and we cannot lose traffic just because we are not connected. This is why we want an access point we fully control; we don't want to rely on some other router. But even then there would still be much manual config (channel, making an open-access vlan or whatever).

Essentially we need to know the channel exactly and don't want to deal with any more cryptography than we must. So, ideally we can create an AP on a laptop or local computer using a low-cost WiFi adapter. (Since we are testing IoT devices we must rely on wireless internet, since this is how virtually all of them work.) We should be able to configure that device to be an AP. Then we need to forward to whatever interface the experiment computer has internet access on.
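An open-auth hostapd.conf of the kind described (no wpa/auth options at all) might look like this (a minimal sketch; the interface name and SSID are placeholders, not from the notes):

```config
interface=wlp0s20f0u1
driver=nl80211
ssid=IOTTB-OPEN
hw_mode=g
channel=6
```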
0
notes/journal/2024-03-26-tue.md
Normal file
7
notes/journal/2024-04-09-tue.md
Normal file
@ -0,0 +1,7 @@
New promising setup:
- Raspberry Pi 5
- Wired connection to the router for internet
- Can very easily create a WiFi network and also connect to it (tested from an iPhone 13)
- Can capture on the WiFi card while still providing internet access to the iPhone
- Sanity test: opening the YouTube app on the iPhone produces a large flow of QUIC packets, likely from the video that starts autoplaying.
177
notes/journal/2024-05-08-sun.md
Normal file
@ -0,0 +1,177 @@
# Commands to remember + sample output
Used commands: [[nmcli]], [[iw]], [[grep]], [[sed]]
Resources: [Capturing Wireless LAN Packets in Monitor Mode with iw](https://sandilands.info/sgordon/capturing-wifi-in-monitor-mode-with-iw)
Foreign BSSIDs have been anonymized by replacing them with `XX:XX:XX:XX:XX:XX`.
## [[nmcli]]
Useful for getting the channel needed to set up monitor mode properly.
### `nmcli dev wifi`
```
IN-USE  BSSID              SSID                        MODE   CHAN  RATE        SIGNAL  BARS  SECURITY
        XX:XX:XX:XX:XX:XX  FRITZ!Box 5490 PB           Infra  6     195 Mbit/s  75      ▂▄▆_  WPA2
*       4C:1B:86:D1:06:7B  LenbrO                      Infra  100   540 Mbit/s  67      ▂▄▆_  WPA2
        4C:1B:86:D1:06:7C  LenbrO                      Infra  6     260 Mbit/s  64      ▂▄▆_  WPA2
        B8:BE:F4:4D:48:17  LenbrO                      Infra  1     130 Mbit/s  62      ▂▄▆_  WPA
        XX:XX:XX:XX:XX:XX  --                          Infra  6     260 Mbit/s  60      ▂▄▆_  WPA2
        XX:XX:XX:XX:XX:XX  FRITZ!Box 5490 PB           Infra  60    405 Mbit/s  37      ▂▄__  WPA2
        XX:XX:XX:XX:XX:XX  FRITZ!Box Fon WLAN 7360 BP  Infra  1     130 Mbit/s  34      ▂▄__  WPA1 WPA2
        XX:XX:XX:XX:XX:XX  FRITZ!Box 5490 PB           Infra  6     195 Mbit/s  34      ▂▄__  WPA2
        XX:XX:XX:XX:XX:XX  Sunrise_Wi-Fi_09FB29        Infra  7     540 Mbit/s  34      ▂▄__  WPA2 WPA3
        XX:XX:XX:XX:XX:XX  Madchenband                 Infra  11    260 Mbit/s  34      ▂▄__  WPA2
        XX:XX:XX:XX:XX:XX  LenbrO                      Infra  36    270 Mbit/s  34      ▂▄__  WPA2
        XX:XX:XX:XX:XX:XX  FibreBox_X6-01EF47          Infra  1     260 Mbit/s  32      ▂▄__  WPA2
        XX:XX:XX:XX:XX:XX  --                          Infra  11    260 Mbit/s  32      ▂▄__  WPA2
        XX:XX:XX:XX:XX:XX  EEG-04666                   Infra  1     405 Mbit/s  30      ▂___  WPA2
        XX:XX:XX:XX:XX:XX  Salt_2GHz_8A9170            Infra  11    260 Mbit/s  29      ▂___  WPA2
        XX:XX:XX:XX:XX:XX  --                          Infra  11    260 Mbit/s  24      ▂___  WPA2
        XX:XX:XX:XX:XX:XX  FRITZ!Box 5490 PB           Infra  60    405 Mbit/s  19      ▂___  WPA2
```
### `nmcli -t dev wifi`
```
XX\:XX\:XX\:XX\:XX\:XX:FRITZ!Box 5490 PB:Infra:6:195 Mbit/s:79:▂▄▆_:WPA2
:XX\:XX\:XX\:XX\:XX\:XX::Infra:6:260 Mbit/s:75:▂▄▆_:WPA2
:4C\:1B\:86\:D1\:06\:7C:LenbrO:Infra:6:260 Mbit/s:74:▂▄▆_:WPA2
*:4C\:1B\:86\:D1\:06\:7B:LenbrO:Infra:100:540 Mbit/s:72:▂▄▆_:WPA2
:B8\:BE\:F4\:4D\:48\:17:LenbrO:Infra:1:130 Mbit/s:65:▂▄▆_:WPA2
:XX\:XX\:XX\:XX\:XX\:XX:Sunrise_Wi-Fi_09FB29:Infra:7:540 Mbit/s:52:▂▄__:WPA2 WPA3
:XX\:XX\:XX\:XX\:XX\:XX:FRITZ!Box 5490 PB:Infra:60:405 Mbit/s:50:▂▄__:WPA2
:XX\:XX\:XX\:XX\:XX\:XX:FRITZ!Box Fon WLAN 7360 BP:Infra:1:130 Mbit/s:47:▂▄__:WPA1 WPA2
:XX\:XX\:XX\:XX\:XX\:XX:FRITZ!Box 5490 PB:Infra:6:195 Mbit/s:45:▂▄__:WPA2
:XX\:XX\:XX\:XX\:XX\:XX:Zentrum der Macht:Infra:1:195 Mbit/s:44:▂▄__:WPA2
:XX\:XX\:XX\:XX\:XX\:XX:FibreBox_X6-01EF47:Infra:1:260 Mbit/s:42:▂▄__:WPA2
:XX\:XX\:XX\:XX\:XX\:XX:Madchenband:Infra:11:260 Mbit/s:40:▂▄__:WPA2
:XX\:XX\:XX\:XX\:XX\:XX:LenbrO:Infra:36:270 Mbit/s:37:▂▄__:WPA2
:XX\:XX\:XX\:XX\:XX\:XX::Infra:11:260 Mbit/s:34:▂▄__:WPA2
:XX\:XX\:XX\:XX\:XX\:XX:EEG-04666:Infra:1:405 Mbit/s:30:▂___:WPA2
:XX\:XX\:XX\:XX\:XX\:XX:Salt_2GHz_8A9170:Infra:11:260 Mbit/s:29:▂___:WPA2
:XX\:XX\:XX\:XX\:XX\:XX:FRITZ!Box 5490 PB:Infra:60:405 Mbit/s:27:▂___:WPA2
:XX\:XX\:XX\:XX\:XX\:XX:Madchenband2.0:Infra:100:540 Mbit/s:25:▂___:WPA2
:XX\:XX\:XX\:XX\:XX\:XX::Infra:11:260 Mbit/s:24:▂___:WPA2
:XX\:XX\:XX\:XX\:XX\:XX:FibreBox_X6-01EF47:Infra:44:540 Mbit/s:20:▂___:WPA2
```
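The terse output is easy to parse programmatically. A small Python sketch that extracts the channel of the in-use AP (sample lines taken from the output above; assumes the default field order `IN-USE:BSSID:SSID:MODE:CHAN:...`, with BSSID colons escaped as `\:` and `*` marking the in-use row):

```python
import re


def in_use_channel(terse_output: str) -> int:
    """Extract the CHAN field of the in-use AP from `nmcli -t dev wifi` output."""
    for line in terse_output.splitlines():
        # split on unescaped ':' only; BSSID colons are escaped as '\:'
        fields = re.split(r'(?<!\\):', line)
        if fields[0] == '*':       # IN-USE marker
            return int(fields[4])  # CHAN field
    raise ValueError('no in-use AP found')


sample = r""":4C\:1B\:86\:D1\:06\:7C:LenbrO:Infra:6:260 Mbit/s:74:▂▄▆_:WPA2
*:4C\:1B\:86\:D1\:06\:7B:LenbrO:Infra:100:540 Mbit/s:72:▂▄▆_:WPA2"""
print(in_use_channel(sample))  # → 100
```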
|
||||
## [[iw]]
|
||||
### `iw dev`
|
||||
Useful to list interfaces and see which hardware they correspond to.
|
||||
Can use that to create a monitor interface with an easier to remember name.
|
||||
```
|
||||
phy#1
|
||||
Unnamed/non-netdev interface
|
||||
wdev 0x100000002
|
||||
addr 3c:21:9c:f2:e4:00
|
||||
type P2P-device
|
||||
Interface wlp44s0
|
||||
ifindex 5
|
||||
wdev 0x100000001
|
||||
addr e6:bf:0c:3c:47:ba
|
||||
ssid LenbrO
|
||||
type managed
|
||||
channel 100 (5500 MHz), width: 80 MHz, center1: 5530 MHz
|
||||
txpower 22.00 dBm
|
||||
multicast TXQ:
|
||||
qsz-byt qsz-pkt flows drops marks overlmt hashcol tx-bytes tx-packets
|
||||
0 0 0 0 0 0 0 0 0
|
||||
phy#0
|
||||
Interface mon0
|
||||
ifindex 7
|
||||
wdev 0x2
|
||||
addr a8:42:a1:8b:f4:e3
|
||||
type monitor
|
||||
channel 6 (2437 MHz), width: 20 MHz (no HT), center1: 2437 MHz
|
||||
txpower 20.00 dBm
|
||||
Interface wlp0s20f0u6
|
||||
ifindex 4
|
||||
wdev 0x1
|
||||
addr a8:42:a1:8b:f4:e3
|
||||
type monitor
|
||||
channel 6 (2437 MHz), width: 20 MHz (no HT), center1: 2437 MHz
|
||||
txpower 20.00 dBm
|
||||
multicast TXQ:
|
||||
qsz-byt qsz-pkt flows drops marks overlmt hashcol tx-bytes tx-packets
|
||||
0 0 0 0 0 0 0 0 0
|
||||
|
||||
```
|
||||
Here, `phy#1` is my laptops built-in WiFi card, and `phy#0` is a WiFi USB adapter.
|
||||
### `iw [phy phy<index> | phy#<index>] info | grep -f monitor -B 10`
|
||||
```
|
||||
➜ iw phy phy0 info | fgrep monitor -B 10
|
||||
* CMAC-256 (00-0f-ac:13)
|
||||
* GMAC-128 (00-0f-ac:11)
|
||||
* GMAC-256 (00-0f-ac:12)
|
||||
Available Antennas: TX 0x3 RX 0x3
|
||||
Configured Antennas: TX 0x3 RX 0x3
|
||||
Supported interface modes:
|
||||
* IBSS
|
||||
* managed
|
||||
* AP
|
||||
* AP/VLAN
|
||||
* monitor
|
||||
--
|
||||
* register_beacons
|
||||
* start_p2p_device
|
||||
* set_mcast_rate
|
||||
* connect
|
||||
* disconnect
|
||||
* set_qos_map
|
||||
* set_multicast_to_unicast
|
||||
* set_sar_specs
|
||||
software interface modes (can always be added):
|
||||
* AP/VLAN
|
||||
* monitor
|
||||
```
|
||||
Can do better
|
||||
### `iw phy#0 info | grep monitor`
|
||||
```
|
||||
* monitor
|
||||
* monitor
|
||||
```
|
||||
Concise but possible need more context to be sure?
|
||||
### `iw phy phy0 info | sed -n '/software interface modes/,/monitor/p'`
|
||||
More concise but with good context. Assuming only sw interfaces need to support monitor mode
|
||||
```
|
||||
software interface modes (can always be added):
|
||||
* AP/VLAN
|
||||
* monitor
|
||||
```
|
||||
### Getting a monitor interface
|
||||
```
|
||||
iw phy#0 interface add mon0 type monitor
|
||||
```
|
||||
Add a easy interface to wifi hw and make it a monitor. Can check again with 'iw dev' to make sure it is really in monitor mode. If there is an other interface it must be taken down or deleted e.g with
|
||||
```
|
||||
iw dev <phy#0 other interface> del # or
|
||||
ip link set <phy#0 other interface> down
|
||||
```
|
||||
Then to enable `mon0` interface,
|
||||
```
|
||||
ip link set mon0 up
|
||||
```
To capture packets effectively, we should set the interface to the correct frequency. For this we need the channel, e.g. via the above-mentioned `nmcli dev wifi`. There we can see, for example, that the BSSID I am connected to (marked with `*`) is on channel 100, and that there is also a BSSID belonging to the same SSID on channel 6. I.e., the AP is running one interface in the 2.4 GHz band (802.11b/g/n/ax/be) and one in the 5 GHz band (802.11a/h/n/ac/ax/be). We choose which channel to tune our `mon0` interface to, then look up its center frequency on [Wikipedia (List of WLAN channels)](https://en.wikipedia.org/wiki/List_of_WLAN_channels). E.g. for channel 6 (i.e. the 2.4 GHz radio) the center frequency is 2437 MHz. We set our interface to that frequency:
```
iw dev mon0 set freq 2437
```
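Instead of looking the mapping up each time: the 2.4 GHz center frequencies follow a simple formula (2407 + 5 × channel MHz, with channel 14 the exception at 2484 MHz). A small helper as a sketch:

```python
def channel_to_freq(channel: int) -> int:
    """Center frequency in MHz for a 2.4 GHz Wi-Fi channel."""
    if channel == 14:  # Japan-only channel, off the 5 MHz grid
        return 2484
    if 1 <= channel <= 13:
        return 2407 + 5 * channel
    raise ValueError(f"not a 2.4 GHz channel: {channel}")

print(channel_to_freq(6))  # → 2437, the value passed to `iw dev mon0 set freq` above
```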
Now double-check that the interface is in monitor mode and tuned to the correct frequency:
```
iw dev mon0 info
```
This should give an output like
```
Interface mon0
	ifindex 7
	wdev 0x2
	addr a8:42:a1:8b:f4:e3
	type monitor
	wiphy 0
	channel 6 (2437 MHz), width: 20 MHz (no HT), center1: 2437 MHz
	txpower 20.00 dBm
```
This concludes preparing the wifi card for packet capture in monitor mode.
### Remarks
- `sudo` is probably required for these commands.
- These network tools are what is available on Fedora 40, with Linux kernel 6.8.8 (`uname -r`). Other operating systems might still be using the older tools that are being phased out. For a table of how old and new commands match up, see [this](https://www.tecmint.com/deprecated-linux-networking-commands-and-their-replacements/) article (July 2023), according to which the old commands are deprecated even in recent Debian and Ubuntu releases.
- If something is not working, run `rfkill list` to check whether the device is blocked. If it is, run `rfkill unblock 0`, where `0` is the same index used above and represents `phy0`/`phy#0`.
- To ensure that [[NetworkManager]] is not managing your card: `nmcli device set wlp0s20f0u6 managed no`, if the interface is called `wlp0s20f0u6`. Check with `nmcli dev`; the STATE should be "unmanaged".
- See resources on how to put the interface/wifi hardware back into managed mode if you need the card for personal use.
# Important
Monitor mode is actually completely useless unless we can observe the EAPOL handshake. That means the wifi AP should be using WPA/WPA2 with a PSK, and we also need to know the SSID and passphrase. So it is still better if we can set up an environment where we can just do port mirroring on the wifi router, or set ourselves up in AP mode; but then we need to be able to bridge to the internet somehow, which I haven't managed reliably. I have done some testing on a Raspberry Pi, which seemed to work. But the Raspberry Pi sometimes goes to sleep, so the AP goes down, which means the IoT device loses its connection.

If we happen to know the MAC address we need, then in Wireshark we can filter with `wlan.addr == [MAC]`. In tcpdump we can use the filter
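As an illustration of the two filter styles, here is a hypothetical helper that normalizes a MAC and builds both filter strings; the tcpdump `wlan host` primitive is my assumption for the 802.11 (monitor-mode) link type, not a filter confirmed in these notes:

```python
def mac_filters(mac: str) -> dict:
    """Build Wireshark display and tcpdump capture filter strings for one MAC.

    Normalizes separators so 'A8-42-A1-8B-F4-E3' and 'a8:42:a1:8b:f4:e3'
    yield the same filters.
    """
    norm = mac.strip().lower().replace("-", ":")
    parts = norm.split(":")
    if len(parts) != 6 or not all(
        len(p) == 2 and all(c in "0123456789abcdef" for c in p) for p in parts
    ):
        raise ValueError(f"not a MAC address: {mac!r}")
    return {
        "wireshark": f"wlan.addr == {norm}",
        "tcpdump": f"wlan host {norm}",  # assumes an 802.11 link type (monitor mode)
    }

print(mac_filters("A8-42-A1-8B-F4-E3")["wireshark"])  # → wlan.addr == a8:42:a1:8b:f4:e3
```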
---
**`notes/journal/2024-05-15-wed.md`**
# `IOTTB_HOME`
I introduced the environment variable `IOTTB_HOME` into the code. It is used to configure where the root of an iottb database is. #TODO this means that some code needs refactoring, but I think it will streamline the code. The path in `IOTTB_HOME` shall be used to define the database root. Then, all the code handling adding devices and running captures can rely on the fact that a canonical home exists. Unfortunately I've hard-coded quite a bit of ad-hoc configuration to use `Path.cwd()`, i.e. the current working directory, by default. So there will be some refactoring involved in switching over to using `IOTTB_HOME`'s value as the default path.
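A minimal sketch of the intended resolution order (the function name is an assumption, not the actual iottb code): use `IOTTB_HOME` if set, else fall back to the current working directory.

```python
import os
from pathlib import Path


def resolve_db_root() -> Path:
    """Database root: $IOTTB_HOME if set, else the current working directory."""
    home = os.environ.get("IOTTB_HOME")
    return Path(home).expanduser() if home else Path.cwd()
```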
# Adding Functionality

## Quick and dirty capture
I want to have a mode which just takes a command and runs it directly with its arguments.
The question is whether to only allow a preconfigured list of commands or to, in principle, allow any command to be passed and write the output. I tend toward providing a subcommand for each utility we want to support. The question is what to do about syntax errors in those commands. Maybe the thing to do is to only write a file into the db if the command runs successfully.
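The "only write on success" idea can be sketched in a few lines (a hypothetical helper, not iottb's actual code):

```python
import subprocess
from pathlib import Path


def capture(cmd: list[str], out_file: Path) -> bool:
    """Run cmd; write its stdout into the db only if it exited successfully."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        return False  # don't pollute the database with failed runs
    out_file.write_text(result.stdout)
    return True
```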
### Refactoring the tcpdump capture
With the above idea it would also be possible to refactor or rewrite how tcpdump is called completely. But the command has a lot of options, and maybe it's better to also offer users some guidance via `-h`, e.g. to only input the needed and correct filters. Choosing the wrong filter could make the capture potentially useless, and one might only notice that after the capture has completed.
## Converting pcap to csv
I want an option such that one can automatically convert a capture's resulting file into a csv. I will probably focus on tcpdump for now, since other tools like [[mitmproxy]] have different output files.
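pcap-to-csv extraction is commonly done with tshark's `-T fields` output mode; a sketch that only builds the command line (the field names are examples, not a fixed schema):

```python
def tshark_csv_cmd(pcap: str, fields: list[str]) -> list[str]:
    """Build a tshark invocation that prints the given fields as CSV."""
    cmd = ["tshark", "-r", pcap, "-T", "fields", "-E", "header=y", "-E", "separator=,"]
    for f in fields:
        cmd += ["-e", f]  # one -e per extracted field
    return cmd


print(" ".join(tshark_csv_cmd("cap.pcap", ["frame.time_epoch", "ip.src", "ip.dst"])))
```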
## Defining Experiments
I want a pair of commands that 1. provide a guided cli interface to define an experiment and 2. run that experiment. Here the [Collective Knowledge Framework](https://github.com/mlcommons/ck) might actually come in handy: they already have tooling for setting up and defining aspects of experiments so that they become reproducible. So maybe one part of `iottb` as a tool would be to write the correct json files into the directory, containing the information on how the command was run. Caveat: not all option values matter for reproduction; basically what matters is only whether a flagging option was used, or that an option was used at all (e.g. an IP address was used in the filter, but the specific value of the IP is of no use for reproducing). Also, the Collective Knowledge tooling relies on very common ML algos/frameworks and static data. So maybe this only comes into play after a capture has been done, and a feature extraction tool (see [[further considerations#Usage paths/ Workflows]]) should create the data and build the database separately.
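The "record which options were used, not their values" idea could look like this (a hypothetical sketch, including the file name `run-meta.json`):

```python
import json
from pathlib import Path


def write_run_metadata(out_dir: Path, cmd: list[str]) -> Path:
    """Record which options were used (not their values) for reproducibility."""
    # keep option names only: "--count=10" -> "--count", drop positional values
    opts = sorted({a.split("=", 1)[0] for a in cmd[1:] if a.startswith("-")})
    meta = {"tool": cmd[0], "options_used": opts}
    path = out_dir / "run-meta.json"
    path.write_text(json.dumps(meta, indent=2))
    return path
```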
---
**`notes/testbed/data analysis/privacy metrics.md`** (empty)

---
**`notes/testbed/data collection/Design document.md`**
# Needed Metadata
- _Must_ contain IP address of *IoT* device
- _Can_ contain IP addr of capture host

# Options
## tcpdump options
see [[tcpdump]]
## kybcap options
| Option | Description |
| ------- | ---------- |
| `--setup` | Go through guided setup process |
| `--meta-config` | Go through guided metadata setup |
| `--mdevice=` | _Metadata_: Specify device name |
| `--mipdev=` | _Metadata_: Specify device IP address |
| `--mmac=` | _Metadata_: Specify device MAC address |
| `--to-csv` | _Post-processing_: extract pcap into csv |
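A hypothetical `argparse` skeleton mirroring the table above (names taken from the table; the behavior is a sketch, not the actual kybcap implementation):

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    p = argparse.ArgumentParser(prog="kybcap")
    p.add_argument("--setup", action="store_true", help="guided setup process")
    p.add_argument("--meta-config", action="store_true", help="guided metadata setup")
    p.add_argument("--mdevice", help="metadata: device name")
    p.add_argument("--mipdev", help="metadata: device IP address")
    p.add_argument("--mmac", help="metadata: device MAC address")
    p.add_argument("--to-csv", action="store_true",
                   help="post-processing: extract pcap into csv")
    return p


args = build_parser().parse_args(["--mdevice=cam0", "--to-csv"])
```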
---
**`notes/testbed/scope.md`**

What is the scope of the testbed as a system?
---
**`notes/testbed/testbed design and architecture.md`**

FAIR data + privacy metric evaluation algos.
A dataset offers some schema.
A privacy metric requires some set of features.

**Case 1**
dataset schema = features required by privacy metric

**Case 2**
dataset schema $\subset$ required features ->
_2.1:_ feature extraction algo(s) exist which can compute the missing features
_2.2:_ missing features CANNOT be computed from the available schema/data

**Case 3**
dataset schema $\supset$ required features -> project schema down into the relevant feature space / leave out unneeded data

**Case 4**
Unknown relationship -> further investigation needed
Is this a realistic case?
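Read as set comparisons, the four cases can be sketched in code (treating Case 4's "unknown relationship" as partial overlap is my assumption):

```python
def classify(schema: set[str], required: set[str]) -> str:
    """Map the schema/required-feature relationship onto the four cases."""
    if schema == required:
        return "case 1: schema matches required features"
    if schema < required:  # proper subset: features missing
        return "case 2: features missing; try feature extraction"
    if schema > required:  # proper superset: project down
        return "case 3: project schema down to the required features"
    return "case 4: partial overlap; further investigation needed"
```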
---
**`notes/todos/todo.md`**

- [ ] !Need microSD card for Mi 360 home camera
- [ ] Cannot get Ledvance LED strip into discovery mode s.t. a connection could be established
- [ ] Have not managed to set up AP/hotspot: Amazon Echo Dot needs the iOS app, but the iPhone will not connect to the AP on the Fedora laptop
- [x] ~~Ask Valentyna/Nima for other approach to capture traffic~~ Preliminary fix: pluggable USB wifi adapters.
- [ ] Look into how to route to the internet!
IEEE 802.11: www.ieee802.org/11/
FCC 2.4 GHz: https://transition.fcc.gov/Bureaus/Engineering_Technology/Orders/2000/fcc00312.pdf
WPA3 Specification: www.wi-fi.org/download.php?file=/sites/default/files/private/WPA3_Specification_v3.0.pdf
Wireless LAN Display Filters: www.wireshark.org/docs/dfref/w/wlan.html
WPA-PSK Key Generator Tool: www.wireshark.org/tools/wpa-psk.html
---
**`notes/wiki/EnvironmentSetup.md`**
Here I try to document the setup needed to perform reliable captures of IoT device traffic. Setting up the environment properly is a precondition for capture tools like [[Wireshark]] et al. to capture ALL needed traffic reliably (while also avoiding noise).

Since most IoT devices use the internet, it is vital that any capturing mechanism/setup does not interfere with their ability to phone home.

At this point I can discern the following steps. Essentially, all this is to enable reliable [[monitoring]] of IoT network traffic.
# Overview/Big Picture
Assumption: The machine used to capture traffic has internet access, either wired (ethernet) or wireless (wifi, maybe bluetooth?).
Since IoT devices work wirelessly, the testing/experiment environment needs at least one wifi card which supports AP mode (see [[iw]]). It will act as the AP for the device under test.
Since many IoT devices are internet enabled, we need a way to bridge the IoT<->AP network to the internet.

Problem: How do we get internet access to an IoT device?
1. It connects to a router. The router must then be able to mirror ports or run the required capturing software itself.
2. It connects to an AP on some other machine. The other machine is connected to the internet via some other interface.
	1. Wired internet: Either using a (software) bridge or NAT, make sure traffic IoT<->internet can be established and that the machine can capture all needed packets.
	2. Wifi internet: Same as wired, but special care must be taken on an "unclean" system. Desktop systems tend to come with network management utilities and daemons running. To keep them from interfering with the AP card, special care must be taken, see e.g. [[nmcli]].

So what must a toolkit which sets up the experiment environment be able to do?
1. __AP Service__: Through config or detection, set up a properly configured AP, possibly on an external adapter.
2. __IP networking dependencies__: Since the experiment machine replaces some functionality the router usually offers to connecting hosts, some router functionality must be offered. In particular [[dhcp]] (the IoT device needs an IP) and [[dns]] (the IoT device needs some way to get the IPs of hosts it wants to connect to).
3. __Internet Gateway__: Enable any IoT device to connect to the internet. That is, the test machine must at least be a [[gateway]], and the IoT device should ideally be able to use it without any configuration.
4. Any firewall must allow [[dhcp]] and [[dns]] traffic to be accepted by the experiment host.
# AP Configuration
## Using NetworkManager
See [here](https://variwiki.com/index.php?title=Wifi_NetworkManager#Configuring_WiFi_Access_Point_with_NetworkManager). We can use the command line tool [[nmcli]].

## Using [[hostapd]]
First make sure that the interface is not managed by NetworkManager, see [[nmcli]].
It turns out that, for an open AP, we simply _**leave out**_ those parts of the config file which have to do with security and auth:
```
# hostapd.conf
# Do not include in config if we wish to have an open auth AP!
wpa=2
wpa_passphrase=11help22help33
wpa_key_mgmt=WPA-PSK
rsn_pairwise=CCMP
```
Furthermore, we set the config option `auth_algs` appropriately so that open auth is allowed:
```
auth_algs=1
```
See [[hostapd]] for a description of the option.
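Putting the pieces together, a minimal `hostapd.conf` for an open test AP might look like this (interface name, SSID, and channel are placeholder assumptions, not values from this setup):

```
# hostapd.conf -- minimal open AP (assumed values, adjust to your hardware)
interface=wlan0
ssid=iottb-test
hw_mode=g
channel=6
auth_algs=1
# no wpa= lines: open authentication
```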
# DNS and DHCP
#TODO
Tools: [[dnsmasq]]
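As a starting point, a minimal dnsmasq config for serving DHCP and DNS on the AP interface might look like this (interface name and address range are assumptions):

```
# dnsmasq.conf -- DHCP + DNS on the AP interface (assumed values)
interface=wlan0
dhcp-range=192.168.50.10,192.168.50.100,12h
dhcp-option=option:router,192.168.50.1
```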
# Internet
#TODO
Possible tooling: [[iw]], [[firewalld]], [[iptables]], [[nftables]]
---
**`notes/wiki/Tools.md`**

# Wifi Tools
- [[aircrack-ng]] can easily enable monitor mode
- [[nmcli]] NetworkManager cli
- [[hostapd]]
- [[iw]]

# Wifi Adapter not found anymore
- __Issue__: After using `airmon-ng` to put my wifi adapter into monitor mode and then supposedly back into normal mode, NetworkManager couldn't find the wifi adapter anymore.
- `sudo nmcli dev` showed that the `wlp44s0` interface was "unmanaged".
- __Fix__: `sudo nmcli device set wlp44s0 managed yes`
---
**`notes/wiki/aircrack-ng.md`**

#tldr : #TODO
---
**`notes/wiki/dnsmasq.md`**

#tldr : #TODO
**Resources**:
- https://variwiki.com/index.php?title=Wifi_NetworkManager#Configuring_WiFi_Access_Point
- https://wiki.archlinux.org/title/Dnsmasq
- https://thekelleys.org.uk/dnsmasq/doc.html
- https://thekelleys.org.uk/dnsmasq/docs/dnsmasq-man.html
- https://thekelleys.org.uk/dnsmasq/docs/FAQ

## Configuring WiFi Access Point with NetworkManager
NetworkManager can also be used to turn a WiFi interface into an access point. The benefit of using NetworkManager in this scenario is the complete automation of the WiFi, DHCP server, and NAT configuration.

### Disabling the standalone dnsmasq service
Dnsmasq is a lightweight DNS forwarder and DHCP server. By default dnsmasq runs as a standalone service and will conflict with the dnsmasq instance launched by NetworkManager. To prevent the conflict, disable the dnsmasq service by running the following commands:

```
systemctl disable dnsmasq
```
```
systemctl stop dnsmasq
```

For NetworkManager to run dnsmasq as a local caching DNS server, edit/create /etc/NetworkManager/NetworkManager.conf and add the following:
```
[main]
dns=dnsmasq
```
#note: Maybe we must disable the #NetworkManager #dnsmasq instance and enable the system service dnsmasq instead.
---
**`notes/wiki/firewalld.md`**

Resources: [Firewalld](https://wiki.archlinux.org/title/Firewalld), [Internet Sharing](https://wiki.archlinux.org/title/Internet_sharing#With_firewalld)

Verdict: Not really viable, since it does not offer enough fine-grained control.
---
**`notes/wiki/hostapd.md`**

#tldr : #TODO

```bash
# For nl80211, this parameter can be used to request the AP interface to be
# added to the bridge automatically (brctl may refuse to do this before hostapd
# has been started to change the interface mode). If needed, the bridge
# interface is also created.
bridge=br0
```

# Operation mode
```bash
# (a = IEEE 802.11a (5 GHz), b = IEEE 802.11b (2.4 GHz),
# g = IEEE 802.11g (2.4 GHz), ad = IEEE 802.11ad (60 GHz); a/g options are used
# with IEEE 802.11n (HT), too, to specify band). For IEEE 802.11ac (VHT), this
# needs to be set to hw_mode=a. For IEEE 802.11ax (HE) on 6 GHz this needs
# to be set to hw_mode=a. When using ACS (see channel parameter), a
# special value "any" can be used to indicate that any support band can be used.
# This special case is currently supported only with drivers with which
# offloaded ACS is used.
# Default: IEEE 802.11b
hw_mode=g
```

```bash
# IEEE 802.11 specifies two authentication algorithms. hostapd can be
# configured to allow both of these or only one. Open system authentication
# should be used with IEEE 802.1X.
# Bit fields of allowed authentication algorithms:
# bit 0 = Open System Authentication
# bit 1 = Shared Key Authentication (requires WEP)
auth_algs=3
```
---
**`notes/wiki/ip-forwarding.md`**

Resources:
[archwiki-internet-sharing](https://wiki.archlinux.org/title/Internet_sharing#Configuration)
[archwiki-sysctl](https://wiki.archlinux.org/title/Sysctl#Configuration)
[kernel-sysctl](https://www.kernel.org/doc/html/latest//networking/ip-sysctl.html)

Remark: Many resources mention that all #firewall config should be executed in one go from a script. They also mention to make sure to flush all previous rules/tables/chains before beginning the setup. The order of rules matters.

*Check current settings*
```bash
sudo sysctl -a | grep forward
```

# Config
```
net.ipv4.conf.all.bc_forwarding = 0   # broadcast?
net.ipv4.conf.all.forwarding = 1      # Enable IP forwarding on this interface.
```
The latter controls whether packets received _on_ this (in this case on _all_) interface can be forwarded.

```
net.ipv4.conf.all.mc_forwarding = 0   # Multicast routing
```
## Locations
### Preloaded

# Tags
#firewall #nat
#sysctl
#ip-forwarding
#masquerading
---
**`notes/wiki/iw.md`**

#tldr: show / manipulate wireless devices and their configs.

# Commands used
- `iw list` shows extensive info about all wireless devices.
- To check if any device is AP ready:
```bash
iw list | grep -i ap -A 5 -B 5
```