Compare commits


No commits in common. "what" and "main" have entirely different histories.
what ... main

128 changed files with 9766 additions and 431 deletions

44
.gitignore vendored

@ -1,10 +1,38 @@
.obsidian
venv
__pycache__
*.log
.idea/*
*/.idea
*.idea
/.idea
.venv
iottb.egg-info
.idea/
2024-bsc-sebastian-lenzlinger.iml
*.log
logs/
*.pyc
.obsidian
dist/
build/
# Covers JetBrains IDEs: IntelliJ, RubyMine, PhpStorm, AppCode, PyCharm, CLion, Android Studio, WebStorm and Rider
# Reference: https://intellij-support.jetbrains.com/hc/en-us/articles/206544839
# User-specific stuff
.idea/**/workspace.xml
.idea/**/tasks.xml
.idea/**/usage.statistics.xml
.idea/**/dictionaries
.idea/**/shelf
# AWS User-specific
.idea/**/aws.xml
# Generated files
.idea/**/contentModel.xml
# Sensitive or high-churn files
.idea/**/dataSources/
.idea/**/dataSources.ids
.idea/**/dataSources.local.xml
.idea/**/sqlDataSources.xml
.idea/**/dynamic.xml
.idea/**/uiDesigner.xml
.idea/**/dbnavigator.xml
.private/
*.pcap


@ -1,4 +1,4 @@
# Your Project Name
# IOTTB
Hello! This is the README file that accompanies the Gitlab repository for your Bachelor or Master thesis. You'll need to update this README as you work on your thesis to reflect relevant information about your thesis.
@ -6,29 +6,26 @@ Hello! This is the README file that accompanies the Gitlab repository for your B
## Organization of the repository
- **code** folder: holds source code
- **data** folder: holds (input) data required for the project. If your input data files are larger than 100MB, create a sample data file smaller than 100MB and commit the sample instead of the full data file. Include a note explaining how the full data can be retrieved.
- **results** folder: holds results files generated as part of the project
- **thesis** folder: contains the latex sources + PDF of the final thesis. You can use the [basilea-latex template](https://github.com/ivangiangreco/basilea-latex) as a starting point.
- **presentation** folder: contains the sources of the presentation (e.g., latex or PPT)
- **literature** folder: contains any research paper that the student needs to read or finds interesting
- **notes** folder: holds minutes of meetings
- **data** folder: Holds no relevant data for this thesis. Files in here were used for debugging and testing.
- **thesis** folder: contains the latex sources + PDF of the final thesis.
- **presentation** folder: contains PDF and sources of the presentation.
- **literature** used can be found in the **thesis** folder's .bib file or in the **presentation** folder's .bib file.
- **notes** folder: Various notes and the beginnings of a wiki.
- `iottb` is the Python testbed packaged as a single executable (including the Python interpreter) which should be able to run on Linux machines.
## Useful resources
- [Efficient Reading of Papers in Science and Technology](https://www.cs.columbia.edu/~hgs/netbib/efficientReading.pdf)
- [Heilmeier's catechism](https://en.wikipedia.org/wiki/George_H._Heilmeier#Heilmeier%27s_Catechism)
## Description
Let people know what your project can do specifically. Provide context and add a link to any reference visitors might be unfamiliar with. A list of Features or a Background subsection can also be added here. If there are alternatives to your project, this is a good place to list differentiating factors.
## Visuals
Depending on what you are making, it can be a good idea to include screenshots or even a video (you'll frequently see GIFs rather than actual videos). Tools like ttygif can help, but check out Asciinema for a more sophisticated method.
## Installation
Within a particular ecosystem, there may be a common way of installing things, such as using Yarn, NuGet, or Homebrew. However, consider the possibility that whoever is reading your README is a novice and would like more guidance. Listing specific steps helps remove ambiguity and gets people to using your project as quickly as possible. If it only runs in a specific context like a particular programming language version or operating system or has dependencies that have to be installed manually, also add a Requirements subsection.
In this thesis I design an automation testbed for IoT devices.
The main result is the software `iottb`, which automates some aspects of experimenting with IoT devices.
Currently, it implements a database guided by the FAIR principles of open data and wraps tcpdump so that capture metadata is stored (sketched below).
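The core capture flow — run tcpdump, then persist metadata about the run next to the resulting pcap — can be pictured with the following minimal sketch. It is illustrative only: the function name, metadata fields, and file layout here are assumptions, not the actual `iottb` implementation found in `code/iottb-project`.
```python
import json
import subprocess
import uuid
from datetime import datetime
from pathlib import Path

def sniff_once(interface: str, address: str, out_dir: Path, count: int = 1000) -> None:
    """Illustrative sketch: capture packets with tcpdump and store metadata about the run."""
    out_dir.mkdir(parents=True, exist_ok=True)
    capture_id = str(uuid.uuid4())
    pcap_file = out_dir / f'capture_{capture_id}.pcap'
    # tcpdump joins trailing arguments into a filter expression, so 'host <addr>' works as one argument
    cmd = ['sudo', 'tcpdump', '-i', interface, '-c', str(count), '-w', str(pcap_file), f'host {address}']
    subprocess.run(cmd, check=True)
    metadata = {
        'capture_id': capture_id,
        'capture_date_iso': datetime.now().isoformat(),
        'invoked_command': ' '.join(cmd),
        'pcap_file': str(pcap_file),
    }
    (out_dir / 'capture_metadata.json').write_text(json.dumps(metadata, indent=4))
```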
## Usage
Use examples liberally, and show the expected output if you can. It's helpful to have inline the smallest example of usage that you can demonstrate, while providing links to more sophisticated examples if they are too long to reasonably include in the README.
For more info see `code/iottb-project/README.md`,
as well as the examples in the thesis write-up at `thesis/BScThesisUnibas_main-5.pdf`. <br>
In general:
```bash
iottb --help # Most general overview
iottb <subcommand> --help
```
## License
To allow further development and use during public events of the implemented system through the University of Basel, the system is expected to be well documented and provided to the university under a license that allows such reuse, e.g., the [BSD 3-clause license](https://opensource.org/license/bsd-3-clause/). The student agrees that all code produced during the project may be released open-source in the context of the PET group's projects.
The code is licensed under a BSD 3-clause license, a copy of which is provided in the file `code/iottb-project/LICENSE`.


@ -30,3 +30,30 @@ def setup_sniff_parser(subparsers):
def setup_pcap_filter_parser(parser_sniff):
parser_pcap_filter = parser_sniff.add_argument_parser('pcap-filter expression')
pass
def check_iottb_env():
# This makes the option '--root-dir' obsolescent # TODO How to streamline this?\
try:
iottb_home = environ['IOTTB_HOME'] # TODO WARN implicit declaration of env var name!
except KeyError:
logger.error(f"Environment variable 'IOTTB_HOME' is not set."
f"Setting environment variable 'IOTTB_HOME' to '~/{IOTTB_HOME_ABS}'")
environ['IOTTB_HOME'] = IOTTB_HOME_ABS
finally:
if not Path(IOTTB_HOME_ABS).exists():
print(f'"{IOTTB_HOME_ABS}" does not exist.')
response = input('Do you want to create it now? [y/N]')
logger.debug(f'response: {response}')
if response.lower() != 'y':
logger.debug(f'Not setting "IOTTB_HOME"')
print('TODO')
print("Aborting execution...")
return ReturnCodes.ABORTED
else:
print('Setting environment variable "IOTTB_HOME"')
Path(IOTTB_HOME_ABS).mkdir(parents=True,
exist_ok=False) # Should always work since in 'not exist' code path
return ReturnCodes.SUCCESS
logger.info(f'"{IOTTB_HOME_ABS}" exists.')
# TODO: Check that it is a valid iottb dir or can we say it is valid by definition if?
return ReturnCodes.SUCCESS

107
archive/iottb/__main__.py Normal file

@ -0,0 +1,107 @@
#!/usr/bin/env python3
import argparse
from os import environ
from pathlib import Path
import logging
from archive.iottb.subcommands.add_device import setup_init_device_root_parser
# from iottb.subcommands.capture import setup_capture_parser
from iottb.subcommands.sniff import setup_sniff_parser
from iottb.utils.tcpdump_utils import list_interfaces
from iottb.logger import setup_logging
logger = logging.getLogger('iottbLogger.__main__')
logger.setLevel(logging.DEBUG)
######################
# Argparse setup
######################
def setup_argparse():
# create top level parser
root_parser = argparse.ArgumentParser(prog='iottb')
# shared options
root_parser.add_argument('--verbose', '-v', action='count', default=0)
root_parser.add_argument('--script-mode', action='store_true', help='Run in script mode (non-interactive)')
# Group of args w.r.t iottb.db creation
group = root_parser.add_argument_group('database options')
group.add_argument('--db-home', default=Path.home() / 'IoTtb.db')
group.add_argument('--config-home', default=Path.home() / '.config' / 'iottb.conf', type=Path, )
group.add_argument('--user', default=Path.home().stem, type=Path, )
# configure subcommands
subparsers = root_parser.add_subparsers(title='subcommands', required=True, dest='command')
# setup_capture_parser(subparsers)
setup_init_device_root_parser(subparsers)
setup_sniff_parser(subparsers)
# Utility to list interfaces directly with iottb instead of relying on external tooling
interfaces_parser = subparsers.add_parser('list-interfaces', aliases=['li', 'if'],
help='List available network interfaces.')
interfaces_parser.set_defaults(func=list_interfaces)
return root_parser
###
# Where put ?!
###
class IoTdb:
def __init__(self, db_home=Path.home() / 'IoTtb.db', iottb_config=Path.home() / '.conf' / 'iottb.conf',
user=Path.home().stem):
self.db_home = db_home
self.config_home = iottb_config
self.default_filters_home = db_home / 'default_filters'
self.user = user
def create_db(self, mode=0o777, parents=False, exist_ok=False):
logger.info(f'Creating db at {self.db_home}')
try:
self.db_home.mkdir(mode=mode, parents=parents, exist_ok=exist_ok)
except FileExistsError:
logger.error(f'Database path {self.db_home} already exists and is not a directory')
finally:
logger.debug(f'Leaving finally clause in create_db')
def create_device_tree(self, mode=0o777, parents=False, exist_ok=False):
logger.info(f'Creating device tree at {self.db_home / "devices"}')
#TODO
def parse_db_config(self):
pass
def parse_iottb_config(self):
pass
def get_known_devices(self):
pass
def iottb_db_exists(db_home=Path.home() / 'IoTtb.db'):
return db_home.is_dir()
def main():
logger.debug(f'Pre setup_argparse()')
parser = setup_argparse()
logger.debug('Post setup_argparse().')
args = parser.parse_args()
logger.debug(f'Args parsed: {args}')
if args.command:
try:
args.func(args)
except KeyboardInterrupt:
print('Received keyboard interrupt. Exiting...')
exit(1)
except Exception as e:
logger.debug(f'Error in main: {e}')
print(f'Error: {e}')
# create_capture_directory(args.device_name)
if __name__ == '__main__':
setup_logging()
logger.debug("Debug level is working")
logger.info("Info level is working")
logger.warning("Warning level is working")
main()

35
archive/iottb/logger.py Normal file

@ -0,0 +1,35 @@
import logging
import sys
import os
from logging.handlers import RotatingFileHandler
def setup_logging():
# Ensure the logs directory exists
log_directory = 'logs'
if not os.path.exists(log_directory):
os.makedirs(log_directory)
# Create handlers
file_handler = RotatingFileHandler(os.path.join(log_directory, 'iottb.log'), maxBytes=1048576, backupCount=5)
console_handler = logging.StreamHandler(sys.stdout)
# Create formatters and add it to handlers
file_fmt = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')
console_fmt = logging.Formatter(
'%(asctime)s - %(levelname)s - %(filename)s:%(lineno)d - %(funcName)s - %(message)s')
file_handler.setFormatter(file_fmt)
console_handler.setFormatter(console_fmt)
# Get the root logger and add handlers
root_logger = logging.getLogger()
root_logger.setLevel(logging.DEBUG)
root_logger.addHandler(file_handler)
root_logger.addHandler(console_handler)
# Prevent propagation to the root logger to avoid duplicate logs
root_logger.propagate = False
setup_logging()


@ -6,17 +6,20 @@ from typing import Optional
from iottb.definitions import ReturnCodes, CAPTURE_METADATA_FILE
from iottb.models.device_metadata_model import DeviceMetadata
from iottb.logger import logger
import logging
logger = logging.getLogger('iottbLogger.capture_metadata_model')
logger.setLevel(logging.DEBUG)
class CaptureMetadata:
# Required Fields
device_metadata: DeviceMetadata
capture_id: str = lambda: str(uuid.uuid4())
device_id: str
capture_dir: Path
capture_file: str
capture_date: str = lambda: datetime.now().strftime('%d-%m-%YT%H:%M:%S').lower()
# Statistics
start_time: str
@ -39,7 +42,8 @@ class CaptureMetadata:
def __init__(self, device_metadata: DeviceMetadata, capture_dir: Path):
logger.info(f'Creating CaptureMetadata model from DeviceMetadata: {device_metadata}')
self.device_metadata = device_metadata
self.capture_id = str(uuid.uuid4())
self.capture_date = datetime.now().strftime('%d-%m-%YT%H:%M:%S').lower()
self.capture_dir = capture_dir
assert capture_dir.is_dir(), f'Capture directory {capture_dir} does not exist'
@ -47,7 +51,7 @@ class CaptureMetadata:
logger.info(f'Building capture file name')
if self.app is None:
logger.debug(f'No app specified')
prefix = self.device_metadata.device_short_name
prefix = "iphone-14" #self.device_metadata.device_short_name
else:
logger.debug(f'App specified: {self.app}')
assert str(self.app).strip() not in {'', ' '}, f'app is not a valid name: {self.app}'


@ -6,7 +6,10 @@ from typing import Optional, List
# iottb modules
from iottb.definitions import ReturnCodes, DEVICE_METADATA_FILE
from iottb.logger import logger
import logging
logger = logging.getLogger('iottbLogger.device_metadata_model')
logger.setLevel(logging.DEBUG)
# 3rd party libs
IMMUTABLE_FIELDS = {'device_name', 'device_short_name', 'device_id', 'date_created'}


@ -1,18 +1,21 @@
import logging
import os
import pathlib
from iottb import definitions
from iottb.definitions import DEVICE_METADATA_FILE, ReturnCodes
from iottb.logger import logger
from iottb.models.device_metadata_model import DeviceMetadata
logger.setLevel(logging.INFO) # Since module currently passes all tests
# logger.setLevel(logging.INFO) # Since module currently passes all tests
logger = logging.getLogger('iottbLogger.add_device')
logger.setLevel(logging.INFO)
def setup_init_device_root_parser(subparsers):
#assert os.environ['IOTTB_HOME'] is not None, "IOTTB_HOME environment variable is not set"
parser = subparsers.add_parser('add-device', aliases=['add-device-root', 'add'],
help='Initialize a folder for a device.')
parser.add_argument('--root_dir', type=pathlib.Path, default=pathlib.Path.cwd())
parser.add_argument('--root_dir', type=pathlib.Path,
default=definitions.IOTTB_HOME_ABS) # TODO: Refactor code to not use this or handle iottb here
group = parser.add_mutually_exclusive_group()
group.add_argument('--guided', action='store_true', help='Guided setup', default=False)
group.add_argument('--name', action='store', type=str, help='name of device')
@ -20,14 +23,12 @@ def setup_init_device_root_parser(subparsers):
def handle_add(args):
# TODO: This whole function should be refactored into using the fact that IOTTB_HOME is set, and the dir exists
logger.info(f'Add device handler called with args {args}')
args.root_dir.mkdir(parents=True,
exist_ok=True) # else metadata.save_to_file will fail TODO: unclear what to assume
if args.guided:
logger.debug('begin guided setup')
metadata = guided_setup(args.root_dir)
metadata = guided_setup(args.root_dir) # TODO refactor to use IOTTB_HOME
logger.debug('guided setup complete')
else:
logger.debug('Setup through passed args: setup')
@ -36,7 +37,7 @@ def handle_add(args):
return ReturnCodes.ERROR
metadata = DeviceMetadata(args.name, args.root_dir)
file_path = args.root_dir / DEVICE_METADATA_FILE
file_path = args.root_dir / DEVICE_METADATA_FILE # TODO IOTTB_HOME REFACTOR
if file_path.exists():
print('Directory already contains a metadata file. Aborting.')
return ReturnCodes.ABORTED


@ -2,11 +2,15 @@ import subprocess
from pathlib import Path
from iottb.definitions import *
import logging
from iottb.models.capture_metadata_model import CaptureMetadata
from iottb.models.device_metadata_model import DeviceMetadata, dir_contains_device_metadata
from iottb.utils.capture_utils import get_capture_src_folder, make_capture_src_folder
from iottb.utils.tcpdump_utils import check_installed
logger = logging.getLogger('iottbLogger.capture')
logger.setLevel(logging.DEBUG)
def setup_capture_parser(subparsers):
parser = subparsers.add_parser('sniff', help='Sniff packets with tcpdump')
# metadata args
@ -33,7 +37,7 @@ def setup_capture_parser(subparsers):
help='Please see tcpdump manual for details. Unused by default.')
cap_size_group = parser.add_mutually_exclusive_group(required=False)
cap_size_group.add_argument('-c', '--count', type=int, help='Number of packets to capture.', default=1000)
cap_size_group.add_argument('-c', '--count', type=int, help='Number of packets to capture.', default=10)
cap_size_group.add_argument('--mins', type=int, help='Time in minutes to capture.', default=1)
parser.set_defaults(func=handle_capture)
@ -88,6 +92,7 @@ def handle_capture(args):
assert args.device_root is not None, f'Device root directory is required'
assert dir_contains_device_metadata(args.device_root), f'Device metadata file \'{args.device_root}\' does not exist'
# get device metadata
logger.info(f'Device root directory: {args.device_root}')
if args.safe and not dir_contains_device_metadata(args.device_root):
print(f'Supplied folder contains no device metadata. '
f'Please setup a device root directory before using this command')
@ -98,6 +103,7 @@ def handle_capture(args):
else:
name = input('Please enter a device name: ')
args.device_root.mkdir(parents=True, exist_ok=True)
device_data = DeviceMetadata(name, args.device_root)
# start constructing environment for capture
capture_dir = get_capture_src_folder(args.device_root)
@ -152,7 +158,7 @@ def build_tcpdump_args(args, cmd, capture_metadata: CaptureMetadata):
capture_metadata.build_capture_file_name()
cmd.append('-w')
cmd.append(capture_metadata.capture_file)
cmd.append(str(capture_metadata.capture_dir) + "/" + capture_metadata.capture_file)
if args.safe:
cmd.append(f'host {args.device_ip}') # if not specified, filter 'any' implied by tcpdump
@ -160,7 +166,6 @@ def build_tcpdump_args(args, cmd, capture_metadata: CaptureMetadata):
return cmd
# def capture_file_cmd(args, cmd, capture_dir, capture_metadata: CaptureMetadata):
# capture_file_prefix = capture_metadata.get_device_metadata().get_device_short_name()
# if args.app_name is not None:


@ -0,0 +1,63 @@
import subprocess
import logging
logger = logging.getLogger('iottbLogger.capture')
logger.setLevel(logging.DEBUG)
class Sniffer:
def __init__(self):
pass
def setup_sniff_parser(subparsers):
parser = subparsers.add_parser('sniff', help='Sniff packets with tcpdump')
# metadata args
parser.add_argument('-a', '--addr', help='IP or MAC address of IoT device')
# tcpdump args
parser.add_argument('--app', help='Application name to sniff', default=None)
parser_sniff_tcpdump = parser.add_argument_group('tcpdump arguments')
parser_sniff_tcpdump.add_argument('-i', '--interface', help='Interface to capture on.', dest='capture_interface',
required=True)
parser_sniff_tcpdump.add_argument('-I', '--monitor-mode', help='Put interface into monitor mode',
action='store_true')
parser_sniff_tcpdump.add_argument('-n', help='Deactivate name resolution. True by default.',
action='store_true', dest='no_name_resolution')
parser_sniff_tcpdump.add_argument('-#', '--number',
help='Print packet number at beginning of line. True by default.',
action='store_true')
parser_sniff_tcpdump.add_argument('-e', help='Print link layer headers. True by default.',
action='store_true', dest='print_link_layer')
parser_sniff_tcpdump.add_argument('-t', action='count', default=0,
help='Please see tcpdump manual for details. Unused by default.')
cap_size_group = parser.add_mutually_exclusive_group(required=False)
cap_size_group.add_argument('-c', '--count', type=int, help='Number of packets to capture.', default=10)
cap_size_group.add_argument('--mins', type=int, help='Time in minutes to capture.', default=1)
parser.set_defaults(func=sniff)
def parse_addr(addr):
#TODO Implement
pass
def sniff(args):
if args.addr is None:
print('You must supply either a MAC or IP(v4) address to use this tool!')
logger.info("Exiting on account of missing MAC/IP.")
exit(1)
else:
(type, value) = parse_addr(args.addr)
#TODO Get this party started
def sniff_tcpdump(args, filter):
pass
def sniff_mitmproxy(args, filter):
pass
def sniff_raw(cmd,args):
pass


@ -15,12 +15,12 @@ def ensure_installed():
raise RuntimeError('tcpdump is not installed. Please install it to continue.')
def list_interfaces() -> str:
def list_interfaces(args) -> str:
"""List available network interfaces using tcpdump."""
ensure_installed()
try:
result = subprocess.run(['tcpdump', '--list-interfaces'], capture_output=True, text=True, check=True)
return result.stdout
print(result.stdout)
except subprocess.CalledProcessError as e:
print(f'Failed to list interfaces: {e}')
return ''

16
archive/pyproject.toml Normal file

@ -0,0 +1,16 @@
[build-system]
requires = ["setuptools>=42", "wheel"]
build-backend = "setuptools.build_meta"
[project]
name = 'iottb'
version = '0.1.0'
authors = [{name = "Sebastian Lenzlinger", email = "sebastian.lenzlinger@unibas.ch"}]
description = "Automation Tool for Capturing Network packets of IoT devices."
requires-python = ">=3.8"
[tool.setuptools]
packages = ["iottb"]
[project.scripts]
iottb = "iottb.__main__:main"


@ -1,35 +0,0 @@
__pycache__
.venv
iottb.egg-info
.idea
*.log
logs/
*.pyc
.obsidian
# Covers JetBrains IDEs: IntelliJ, RubyMine, PhpStorm, AppCode, PyCharm, CLion, Android Studio, WebStorm and Rider
# Reference: https://intellij-support.jetbrains.com/hc/en-us/articles/206544839
# User-specific stuff
.idea/**/workspace.xml
.idea/**/tasks.xml
.idea/**/usage.statistics.xml
.idea/**/dictionaries
.idea/**/shelf
# AWS User-specific
.idea/**/aws.xml
# Generated files
.idea/**/contentModel.xml
# Sensitive or high-churn files
.idea/**/dataSources/
.idea/**/dataSources.ids
.idea/**/dataSources.local.xml
.idea/**/sqlDataSources.xml
.idea/**/dynamic.xml
.idea/**/uiDesigner.xml
.idea/**/dbnavigator.xml
.private/


@ -1,9 +1,82 @@
# Iottb
## Basic Invocation
## Installation
There are a few different ways to install `iottb`.
On Linux, to install into a user's local bin directory using poetry or pip:
- Move into the project root `cd path/to/iottb-project`, so that you are in the directory which contains the `pyproject.toml` file.
```bash
poetry install --editable
# or with pip
pip install -e .
```
Currently, this is the recommended method.
Alternatively install with pip into any activated environment:
```bash
pip install -r requirements.txt
```
It is possible to make a single executable for your machine, which you can just put on your PATH, using PyInstaller.
1. Install pyinstaller
```bash
pip install pyinstaller
```
2. Make the executable
```bash
pyinstaller --onefile --name iottb --distpath ~/opt iottb/main.py
```
to be able to run it as `iottb` if `~/opt` is a directory on your PATH.
An executable which should be able to run on Linux is included in the repo.
## Basic Invocation
```bash
Usage: iottb [OPTIONS] COMMAND [ARGS]...
Options:
-v, --verbosity Set verbosity [default: 0; 0<=x<=3]
-d, --debug Enable debug mode
--cfg-file PATH Path to iottb config file [default:
/home/seb/.config/iottb/iottb.cfg]
--help Show this message and exit.
--dry-run BOOLEAN currently NOT USED! [default: True]
Commands:
add-device Add a device to a database
init-db
sniff Sniff packets with tcpdump
Debugging Commands:
show-all Show everything: configuration, databases, and...
show-cfg Show the current configuration context
```
## Usage Examples
### Initializing a database
Before devices can be added and packet captures performed, there must be a database.
Initialize a database with default values at the default location:
```bash
iottb init-db
```
### Adding a device
Typically, captures are performed for a specific device. To add a device (to the current default database):
```bash
iottb add-device 'Echo Dot 2'
```
This assumes the device is named 'Echo Dot 2'. It will receive the canonical name 'echo-dot'. This name should be used when performing
captures with `iottb`; a sketch of the normalization is given below.
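The normalization itself is implemented in `iottb/utils/string_processing.py` (`make_canonical_name`), which this diff does not show in full. As a rough, purely illustrative sketch of the idea — lowercasing and collapsing spaces and special characters into hyphens; the real function may behave differently (the diff suggests it also returns aliases):
```python
import re

def make_canonical_name_sketch(device_name: str) -> str:
    """Illustrative only: derive a filesystem-friendly canonical name from a device name."""
    name = device_name.strip().lower()
    name = re.sub(r'[^a-z0-9]+', '-', name)  # collapse spaces and special characters into hyphens
    return name.strip('-')
```
Under these assumed rules 'Echo Dot 2' would become 'echo-dot-2'; the canonical name 'echo-dot' reported above comes from the actual implementation, whose exact rules may differ.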
### Performing captures/sniffing traffic
```bash
iottb sniff -a <ipv4-addr or mac-addr> 'echo-dot'
```
to sniff traffic from the previously added device 'Echo Dot 2', which received the canonical name 'echo-dot'.
You can get subcommand-specific help text by adding the `--help` option.
## Configuration
### Env Vars
- `IOTTB_CONF_HOME`:
By setting this variable you control where the basic iottb application
configuration is looked for (a sketch of such a lookup follows).
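A minimal sketch of how such a lookup could work (illustrative; the actual default path and lookup logic live in `iottb/definitions.py`, and the `~/.config/iottb/iottb.cfg` default is taken from the help output shown elsewhere in this diff):
```python
import os
from pathlib import Path

def resolve_cfg_file() -> Path:
    """Illustrative only: honour IOTTB_CONF_HOME if set, otherwise fall back to the default location."""
    conf_home = os.environ.get('IOTTB_CONF_HOME')
    if conf_home:
        return Path(conf_home) / 'iottb.cfg'
    return Path.home() / '.config' / 'iottb' / 'iottb.cfg'
```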
## License
This project is licensed under a BSD 3-clause License, a copy of which is provided in the file `code/iottb-project/LICENSE`.


@ -0,0 +1,110 @@
Usage: iottb [OPTIONS] COMMAND [ARGS]...
Options:
-v, --verbosity Set verbosity [default: 0; 0<=x<=3]
-d, --debug Enable debug mode
--dry-run [default: True]
--cfg-file PATH Path to iottb config file [default:
/home/seb/.config/iottb/iottb.cfg]
--help Show this message and exit.
Commands:
add-device Add a device to a database
init-db
rm-cfg Removes the cfg file from the filesystem.
rm-dbs Removes ALL(!) databases from the filesystem if...
set-key-in-table-to Edit config or metadata files.
show-all Show everything: configuration, databases, and...
show-cfg Show the current configuration context
sniff Sniff packets with tcpdump
Usage: iottb init-db [OPTIONS]
Options:
-d, --dest PATH Location to put (new) iottb database
-n, --name TEXT Name of new database. [default: iottb.db]
--update-default / --no-update-default
If new db should be set as the new default
[default: update-default]
--help Show this message and exit.
Usage: iottb add-device [OPTIONS]
Add a device to a database
Options:
--dev, --device-name TEXT The name of the device to be added. If this
string contains spaces or other special
characters normalization is
performed to derive a canonical name [required]
--db, --database DIRECTORY Database in which to add this device. If not
specified use default from config. [env var:
IOTTB_DB]
--guided Add device interactively [env var:
IOTTB_GUIDED_ADD]
--help Show this message and exit.
Usage: iottb sniff [OPTIONS] [TCPDUMP-ARGS] [DEVICE]
Sniff packets with tcpdump
Options:
Testbed sources:
--db, --database TEXT Database of device. Only needed if not current
default. [env var: IOTTB_DB]
--app TEXT Companion app being used during capture
Runtime behaviour:
--unsafe Disable checks for otherwise required options.
[env var: IOTTB_UNSAFE]
--guided [env var: IOTTB_GUIDED]
--pre TEXT Script to be executed before main command is
started.
--post TEXT Script to be executed upon completion of main
command.
Tcpdump options:
-i, --interface TEXT Network interface to capture on. If not specified
tcpdump tries to find an appropriate one. [env
var: IOTTB_CAPTURE_INTERFACE]
-a, --address TEXT IP or MAC address to filter packets by. [env var:
IOTTB_CAPTURE_ADDRESS]
-I, --monitor-mode Put interface into monitor mode.
--ff TEXT tcpdump filter as string or file path. [env var:
IOTTB_CAPTURE_FILTER]
-#, --print-pacno Print packet number at beginning of line. True by
default. [default: True]
-e, --print-ll Print link layer headers. True by default.
-c, --count INTEGER Number of packets to capture. [default: 1000]
--help Show this message and exit.
Utility Commands mostly for development
Usage: iottb rm-cfg [OPTIONS]
Removes the cfg file from the filesystem.
This is mostly a utility during development. Once non-standard database
locations are implemented, deleting this would lead to iottb not being able
to find them anymore.
Options:
--yes Confirm the action without prompting.
--help Show this message and exit.
Usage: iottb rm-dbs [OPTIONS]
Removes ALL(!) databases from the filesystem if they're empty.
Development utility currently unfit for use.
Options:
--yes Confirm the action without prompting.
--help Show this message and exit.
Usage: iottb show-cfg [OPTIONS]
Show the current configuration context
Options:
--cfg-file PATH Path to the config file [default:
/home/seb/.config/iottb/iottb.cfg]
-pp Pretty Print
--help Show this message and exit.
Usage: iottb show-all [OPTIONS]
Show everything: configuration, databases, and device metadata
Options:
--help Show this message and exit.


@ -0,0 +1,38 @@
Usage: iottb [OPTIONS] COMMAND [ARGS]...
Options:
-v, --verbosity Set verbosity [default: 0; 0<=x<=3]
-d, --debug Enable debug mode
--dry-run [default: True]
--cfg-file PATH Path to iottb config file [default:
/home/seb/.config/iottb/iottb.cfg]
--help Show this message and exit.
Commands:
add-device Add a device to a database
init-db
rm-cfg Removes the cfg file from the filesystem.
rm-dbs Removes ALL(!) databases from the filesystem if...
set-key-in-table-to Edit config or metadata files.
show-all Show everything: configuration, databases, and...
show-cfg Show the current configuration context
sniff Sniff packets with tcpdump
Usage: iottb [OPTIONS] COMMAND [ARGS]...
Options:
-v, --verbosity Set verbosity [default: 0; 0<=x<=3]
-d, --debug Enable debug mode
--dry-run [default: True]
--cfg-file PATH Path to iottb config file [default:
/home/seb/.config/iottb/iottb.cfg]
--help Show this message and exit.
Commands:
add-device Add a device to a database
init-db
rm-cfg Removes the cfg file from the filesystem.
rm-dbs Removes ALL(!) databases from the filesystem if...
set-key-in-table-to Edit config or metadata files.
show-all Show everything: configuration, databases, and...
show-cfg Show the current configuration context
sniff Sniff packets with tcpdump


@ -0,0 +1,142 @@
# Main Command: `iottb`
Usage: `iottb [OPTIONS] COMMAND [ARGS]...`
Options:
-v, --verbosity Set verbosity [0<=x<=3] \n
-d, --debug Enable debug mode
--dry-run
--cfg-file PATH Path to iottb config file
--help Show this message and exit.
Commands:
add-device Add a device to a database
init-db
rm-cfg Removes the cfg file from the filesystem.
rm-dbs Removes ALL(!) databases from the filesystem if...
set-key-in-table-to Edit config or metadata files.
show-all Show everything: configuration, databases, and...
show-cfg Show the current configuration context
sniff Sniff packets with tcpdump
Command: init-db
Usage: [OPTIONS]
Options:
-d, --dest PATH Location to put (new) iottb database
-n, --name TEXT Name of new database.
--update-default / --no-update-default
If new db should be set as the new default
--help Show this message and exit.
Command: rm-cfg
Usage: [OPTIONS]
Removes the cfg file from the filesystem.
This is mostly a utility during development. Once non-standard database
locations are implemented, deleting this would lead to iottb not being able
to find them anymore.
Options:
--yes Confirm the action without prompting.
--help Show this message and exit.
Command: set-key-in-table-to
Usage: [OPTIONS]
Edit config or metadata files. TODO: Implement
Options:
--file TEXT
--table TEXT
--key TEXT
--value TEXT
--help Show this message and exit.
Command: rm-dbs
Usage: [OPTIONS]
Removes ALL(!) databases from the filesystem if they're empty.
Development utility currently unfit for use.
Options:
--yes Confirm the action without prompting.
--help Show this message and exit.
Command: add-device
Usage: [OPTIONS]
Add a device to a database
Options:
--dev, --device-name TEXT The name of the device to be added. If this
string contains spaces or other special
characters normalization is
performed to derive a canonical name [required]
--db, --database DIRECTORY Database in which to add this device. If not
specified use default from config. [env var:
IOTTB_DB]
--guided Add device interactively [env var:
IOTTB_GUIDED_ADD]
--help Show this message and exit.
Command: show-cfg
Usage: [OPTIONS]
Show the current configuration context
Options:
--cfg-file PATH Path to the config file
-pp Pretty Print
--help Show this message and exit.
Command: sniff
Usage: [OPTIONS] [TCPDUMP-ARGS] [DEVICE]
Sniff packets with tcpdump
Options:
Testbed sources:
--db, --database TEXT Database of device. Only needed if not current
default. [env var: IOTTB_DB]
--app TEXT Companion app being used during capture
Runtime behaviour:
--unsafe Disable checks for otherwise required options.
[env var: IOTTB_UNSAFE]
--guided [env var: IOTTB_GUIDED]
--pre PATH Script to be executed before main command is
started.
Tcpdump options:
-i, --interface TEXT Network interface to capture on. If not specified
tcpdump tries to find an appropriate one. [env
var: IOTTB_CAPTURE_INTERFACE]
-a, --address TEXT IP or MAC address to filter packets by. [env var:
IOTTB_CAPTURE_ADDRESS]
-I, --monitor-mode Put interface into monitor mode.
--ff TEXT tcpdump filter as string or file path. [env var:
IOTTB_CAPTURE_FILTER]
-#, --print-pacno Print packet number at beginning of line. True by
default.
-e, --print-ll Print link layer headers. True by default.
-c, --count INTEGER Number of packets to capture.
--help Show this message and exit.
Command: show-all
Usage: [OPTIONS]
Show everything: configuration, databases, and device metadata
Options:
--help Show this message and exit.


@ -1,3 +1,5 @@
from pathlib import Path
from iottb import definitions
import logging
from iottb.utils.user_interaction import tb_echo
@ -9,3 +11,6 @@ log_dir = definitions.LOGDIR
# Ensure logs dir exists before new handlers are registered in main.py
if not log_dir.is_dir():
log_dir.mkdir()
DOCS_FOLDER = Path.cwd() / 'docs'


@ -1,4 +1,5 @@
import json
import sys
import click
from pathlib import Path
@ -13,25 +14,124 @@ from iottb.definitions import CFG_FILE_PATH, TB_ECHO_STYLES
logger = logging.getLogger(__name__)
def add_device_guided(ctx, cn, db):
click.echo('TODO: Implement')
def prompt_for_device_details():
device_details = {}
aliases = []
while True:
click.echo("\nEnter the details for the new device:")
click.echo("1. Device Name")
click.echo("2. Description")
click.echo("3. Model")
click.echo("4. Manufacturer")
click.echo("5. Current Firmware Version")
click.echo("6. Device Type")
click.echo("7. Supported Interfaces")
click.echo("8. Companion Applications")
click.echo("9. Add Alias")
click.echo("10. Finish and Save")
choice = click.prompt("Choose an option", type=int)
if choice == 1:
device_details['device_name'] = click.prompt("Enter the device name")
elif choice == 2:
device_details['description'] = click.prompt("Enter the description")
elif choice == 3:
device_details['model'] = click.prompt("Enter the model")
elif choice == 4:
device_details['manufacturer'] = click.prompt("Enter the manufacturer")
elif choice == 5:
device_details['firmware_version'] = click.prompt("Enter the current firmware version")
elif choice == 6:
device_details['device_type'] = click.prompt("Enter the device type")
elif choice == 7:
device_details['supported_interfaces'] = click.prompt("Enter the supported interfaces")
elif choice == 8:
device_details['companion_applications'] = click.prompt("Enter the companion applications")
elif choice == 9:
alias = click.prompt("Enter an alias")
aliases.append(alias)
elif choice == 10:
break
else:
click.echo("Invalid choice. Please try again.")
device_details['aliases'] = aliases
return device_details
def confirm_and_add_device(device_details, db_path):
click.echo("\nDevice metadata:")
for key, value in device_details.items():
click.echo(f"{key.replace('_', ' ').title()}: {value}")
confirm = click.confirm("Do you want to add this device with above metadata?")
if confirm:
device_name = device_details.get('device_name')
if not device_name:
click.echo("Device name is required. Exiting...")
return
device_metadata = DeviceMetadata(**device_details)
device_dir = db_path / device_metadata.canonical_name
if device_dir.exists():
click.echo(f"Device {device_name} already exists in the database.")
click.echo("Exiting...")
return
try:
device_dir.mkdir(parents=True, exist_ok=True)
metadata_path = device_dir / definitions.DEVICE_METADATA_FILE_NAME
device_metadata.save_metadata_to_file(metadata_path)
click.echo(f"Successfully added device {device_name} to database.")
except OSError as e:
click.echo(f"Error trying to create device directory: {e}")
click.echo("Exiting...")
else:
click.echo("Operation cancelled. Exiting...")
def add_device_guided(cfg, db):
logger.info('Adding device interactively')
#logger.debug(f'Parameters: {params}. value: {value}')
# logger.debug(f'Parameters: {params}. value: {value}')
databases = cfg.db_path_dict
if not databases:
click.echo('No databases found in config file.')
return
click.echo('Available Databases:')
last = 0
for i, db_name in enumerate(databases.keys(), start=1):
click.echo(f'[{i}] {db_name}')
last = i if last < i else last
db_choice = click.prompt(f'Select the database to add the new device to (1 - {last}, 0 to quit)',
type=int, default=1)
if 1 <= db_choice <= last:
selected_db = list(databases.keys())[db_choice - 1]
click.confirm(f'Use {selected_db}?', abort=True)
db_path = Path(databases[selected_db]) / selected_db
logger.debug(f'DB Path {str(db_path)}')
device_details = prompt_for_device_details()
confirm_and_add_device(device_details, db_path)
elif db_choice == 0:
click.echo(f'Quitting...')
else:
click.echo(f'{db_choice} is not a valid choice. Please rerun command and select a valid database.')
@click.command('add-device', help='Add a device to a database')
@click.option('--dev', '--device-name', type=str, required=True,
help='The name of the device to be added. If this string contains spaces or other special characters \
normalization is performed to derive a canonical name')
@click.option('--db', '--database', type=click.Path(exists=True, file_okay=False, dir_okay=True, path_type=Path),
envvar='IOTTB_DB', show_envvar=True,
@click.argument('device', type=str, default="")
@click.option('--db', '--database', type=str,
envvar='IOTTB_DB', show_envvar=True, default="",
help='Database in which to add this device. If not specified use default from config.')
@click.option('--guided', is_flag=True, default=False, show_default=True, envvar='IOTTB_GUIDED_ADD', show_envvar=True,
@click.option('--guided', is_flag=True,
help='Add device interactively')
def add_device(dev, db, guided):
def add_device(device, db, guided):
"""Add a new device to a database
Device name must be supplied unless in an interactive setup. Database is taken from config by default.
Device name must be supplied unless in an interactive setup.
Database is taken from config by default.
If this device name contains spaces or other special characters normalization is performed to derive a canonical name.
"""
logger.info('add-device invoked')
@ -39,12 +139,18 @@ def add_device(dev, db, guided):
# Dependency: Config file must exist
config = IottbConfig(Path(CFG_FILE_PATH))
logger.debug(f'Config loaded: {config}')
# If guided flag set, continue with guided add and leave
if guided:
click.echo('Guided option set. Continuing with guided add.')
add_device_guided(config, device, db)
logger.info('Finished guided device add.')
return
# Step 2: Load database
# dependency: Database folder must exist
if db:
if db != "":
database = db
path = config.db_path_dict
path = config.db_path_dict[database]
logger.debug(f'Resolved (path, db) {path}, {database}')
else:
path = config.default_db_location
@ -54,36 +160,40 @@ def add_device(dev, db, guided):
full_db_path = Path(path) / database
if not full_db_path.is_dir():
logger.warning(f'No database at {database}')
click.echo(f'Could not find a database.')
click.echo(f'You need to initialize the testbed before you add devices!')
click.echo(f'To initialize the testbed in the default location run "iottb init-db"')
click.echo(f'No database found at {full_db_path}', lvl='w')
click.echo(
f'You need to initialize the testbed before you add devices!')
click.echo(
f'To initialize the testbed in the default location run "iottb init-db"')
click.echo('Exiting...')
exit()
sys.exit()
# Ensure a device name was passed as argument
if device == "":
click.echo("Device name cannot be an empty string. Exiting...", lvl='w')
return
# Step 3: Check if device already exists in database
# dependency: DeviceMetadata object
device_metadata = DeviceMetadata(device_name=dev)
device_metadata = DeviceMetadata(device_name=device)
device_dir = full_db_path / device_metadata.canonical_name
# Check if device is already registered
if device_dir.exists():
logger.warning(f'Device directory {device_dir} already exists.')
click.echo(f'Device {dev} already exists in the database.')
click.echo(f'Device {device} already exists in the database.')
click.echo('Exiting...')
exit()
sys.exit()
try:
device_dir.mkdir()
except OSError as e:
logger.error(f'Error trying to create device {e}')
click.echo('Exiting...')
exit()
sys.exit()
# Step 4: Save metadata into device_dir
metadata_path = device_dir / definitions.DEVICE_METADATA_FILE_NAME
with metadata_path.open('w') as metadata_file:
json.dump(device_metadata.__dict__, metadata_file, indent=4)
click.echo(f'Successfully added device {dev} to database')
logger.debug(f'Added device {dev} to database {database}. Full path of metadata {metadata_path}')
logger.info(f'Metadata for {dev} {device_metadata.print_attributes()}')
click.echo(f'Successfully added device {device} to database')
logger.debug(f'Added device {device} to database {database}. Full path of metadata {metadata_path}')
logger.info(f'Metadata for {device} {device_metadata.print_attributes()}')


@ -2,6 +2,7 @@ from pathlib import Path
import logging
import click
from iottb import tb_echo
from iottb.definitions import DB_NAME, CFG_FILE_PATH
from iottb.models.iottb_config import IottbConfig
@ -94,12 +95,17 @@ def show_everything(ctx):
click.echo(f"Default Database: {config.default_database}")
click.echo(f"Default Database Path: {config.default_db_location}")
click.echo("Database Locations:")
everything_dict = {}
for db_name, db_path in config.db_path_dict.items():
click.echo(f" - {db_name}: {db_path}")
for db_name, db_path in config.db_path_dict.items():
full_db_path = Path(db_path) / db_name
click.echo(f" - {db_name}: {full_db_path}")
if full_db_path.is_dir():
click.echo(f"Contents of {db_name} at {full_db_path}:")
click.echo(f"\nContents of {full_db_path}:")
flag = True
for item in full_db_path.iterdir():
flag = False
if item.is_file():
click.echo(f" - {item.name}")
try:
@ -115,9 +121,10 @@ def show_everything(ctx):
click.echo(f" - {subitem.name}")
elif subitem.is_dir():
click.echo(f" - {subitem.name}/")
if flag:
tb_echo(f'\t EMPTY')
else:
click.echo(f" {full_db_path} is not a directory")
click.echo(f"{full_db_path} is not a directory")
warnstyle = {'fg': 'red', 'bold': True}
click.secho('Developer command used', **warnstyle)


@ -1,100 +0,0 @@
import click
from pathlib import Path
import logging
from logging.handlers import RotatingFileHandler
import sys
from iottb.models.iottb_config import IottbConfig
from iottb.definitions import DB_NAME
logger = logging.getLogger(__name__)
@click.command()
@click.option('-d', '--dest', type=click.Path(), help='Location to put (new) iottb database')
@click.option('-n', '--name', default=DB_NAME, type=str, help='Name of new database.')
@click.option('--update-default/--no-update-default', default=True, help='If new db should be set as the new default')
@click.pass_context
def init_db(ctx, dest, name, update_default):
logger.info('init-db invoked')
config = ctx.obj['CONFIG']
logger.debug(f'str(config)')
# Use the default path from config if dest is not provided
known_dbs = config.get_known_databases()
logger.debug(f'Known databases: {known_dbs}')
if name in known_dbs:
dest = config.get_database_location(name)
if Path(dest).joinpath(name).is_dir():
click.echo(f'A database {name} already exists.')
logger.debug(f'DB {name} exists in {dest}')
click.echo(f'Exiting...')
exit()
logger.debug(f'DB name {name} registered but does not exist.')
if not dest:
logger.info('No dest set, choosing default destination.')
dest = Path(config.default_db_location).parent
db_path = Path(dest).joinpath(name)
logger.debug(f'Full path for db {str(db_path)}')
# Create the directory if it doesn't exist
db_path.mkdir(parents=True, exist_ok=True)
logger.info(f"mkdir {db_path} successful")
click.echo(f'Created {db_path}')
# Update configuration
config.set_database_location(name, str(dest))
if update_default:
config.set_default_database(name, str(dest))
config.save_config()
logger.info(f"Updated configuration with database {name} at {db_path}")
@click.command()
@click.option('-d', '--dest', type=click.Path(), help='Location to put (new) iottb database')
@click.option('-n', '--name', default=DB_NAME, type=str, help='Name of new database.')
@click.option('--update-default/--no-update-default', default=True, help='If new db should be set as the new default')
@click.pass_context
def init_db_inactive(ctx, dest, name, update_default):
logger.info('init-db invoked')
config = ctx.obj['CONFIG']
logger.debug(f'str(config)')
# Retrieve known databases
known_dbs = config.get_known_databases()
# Determine destination path
if name in known_dbs:
dest = Path(config.get_database_location(name))
if dest.joinpath(name).is_dir():
click.echo(f'A database {name} already exists.')
logger.debug(f'DB {name} exists in {dest}')
click.echo(f'Exiting...')
exit()
logger.debug(f'DB name {name} registered but does not exist.')
elif not dest:
logger.info('No destination set, using default path from config.')
dest = Path(config.default_db_location).parent
# Ensure destination path is absolute
dest = dest.resolve()
# Combine destination path with database name
db_path = dest / name
logger.debug(f'Full path for database: {str(db_path)}')
# Create the directory if it doesn't exist
try:
db_path.mkdir(parents=True, exist_ok=True)
logger.info(f'Directory {db_path} created successfully.')
click.echo(f'Created {db_path}')
except Exception as e:
logger.error(f'Failed to create directory {db_path}: {e}')
click.echo(f'Failed to create directory {db_path}: {e}', err=True)
exit(1)
# Update configuration
config.set_database_location(name, str(db_path))
if update_default:
config.set_default_database(name, str(db_path))
config.save_config()
logger.info(f'Updated configuration with database {name} at {db_path}')
click.echo(f'Updated configuration with database {name} at {db_path}')


@ -1,13 +1,19 @@
import click
import subprocess
import json
from pathlib import Path
import logging
import os
import re
import subprocess
import sys
import uuid
from datetime import datetime
from iottb.definitions import APP_NAME, CFG_FILE_PATH
from iottb.models.iottb_config import IottbConfig
from pathlib import Path
from time import time
import click
from click_option_group import optgroup
from iottb.utils.string_processing import make_canonical_name
# Setup logger
logger = logging.getLogger('iottb.sniff')
@ -34,100 +40,308 @@ def validate_sniff(ctx, param, value):
return None
if not ctx.params.get('unsafe') and not value:
raise click.BadParameter('Address is required unless --unsafe is set.')
if not is_ip_address(value) and not is_mac_address(value):
raise click.BadParameter('Address must be a valid IP address or MAC address.')
return value
def run_pre(pre):
subprocess.run(pre, shell=True)
logger.debug(f'finished {pre}')
def run_post(post):
subprocess.run(post, shell=True)
logger.debug(f'finished {post}')
@click.command('sniff', help='Sniff packets with tcpdump')
@click.argument('device')
@click.option('-i', '--interface', callback=validate_sniff, help='Network interface to capture on',
envvar='IOTTB_CAPTURE_INTERFACE')
@click.option('-a', '--address', callback=validate_sniff, help='IP or MAC address to filter packets by',
envvar='IOTTB_CAPTURE_ADDRESS')
@click.option('--db', '--database', type=click.Path(exists=True, file_okay=False), envvar='IOTTB_DB',
@optgroup.group('Testbed sources')
@optgroup.option('--db', '--database', type=str, envvar='IOTTB_DB', show_envvar=True,
help='Database of device. Only needed if not current default.')
@click.option('--unsafe', is_flag=True, default=False, envvar='IOTTB_UNSAFE', is_eager=True,
help='Disable checks for otherwise required options')
@click.option('--guided', is_flag=True, default=False)
def sniff(device, interface, address, db, unsafe, guided):
@optgroup.option('--app', type=str, help='Companion app being used during capture', required=False)
@optgroup.group('Runtime behaviour')
@optgroup.option('--unsafe', is_flag=True, default=False, envvar='IOTTB_UNSAFE', is_eager=True,
help='Disable checks for otherwise required options.\n', show_envvar=True)
@optgroup.option('--guided', is_flag=True, default=False, envvar='IOTTB_GUIDED', show_envvar=True)
@optgroup.option('--pre', help='Script to be executed before main command is started.')
@optgroup.option('--post', help='Script to be executed upon completion of main command.')
@optgroup.group('Tcpdump options')
@optgroup.option('-i', '--interface',
help='Network interface to capture on. ' +
'If not specified tcpdump tries to find an appropriate one.\n', show_envvar=True,
envvar='IOTTB_CAPTURE_INTERFACE')
@optgroup.option('-a', '--address', callback=validate_sniff,
help='IP or MAC address to filter packets by.\n', show_envvar=True,
envvar='IOTTB_CAPTURE_ADDRESS')
@optgroup.option('-I', '--monitor-mode', help='Put interface into monitor mode.\n', is_flag=True)
@optgroup.option('--ff', type=str, envvar='IOTTB_CAPTURE_FILTER', show_envvar=True,
help='tcpdump filter as string or file path.')
@optgroup.option('-#', '--print-pacno', is_flag=True, default=True,
help='Print packet number at beginning of line. True by default.\n')
@optgroup.option('-e', '--print-ll', is_flag=True, default=False,
help='Print link layer headers. True by default.')
@optgroup.option('-c', '--count', type=int, help='Number of packets to capture.', default=1000)
# @optgroup.option('--mins', type=int, help='Time in minutes to capture.', default=1)
@click.argument('tcpdump-args', nargs=-1, required=False, metavar='[TCPDUMP-ARGS]')
@click.argument('device', required=False)
@click.pass_context
def sniff(ctx, device, interface, print_pacno, ff, count, monitor_mode, print_ll, address, db, unsafe, guided,
app, tcpdump_args, pre, post, **params):
""" Sniff packets from a device """
logger.info('sniff command invoked')
# Step 0: run pre script:
if pre:
click.echo(f'Running pre command {pre}')
run_pre(pre)
# Step1: Load Config
config = IottbConfig(Path(CFG_FILE_PATH))
config = ctx.obj['CONFIG']
logger.debug(f'Config loaded: {config}')
# Step2: determine relevant database
database = db if db else config.default_database
path = config.default_db_location[database]
path = config.db_path_dict[database]
full_db_path = Path(path) / database
logger.debug(f'Full db path is {str(path)}')
logger.debug(f'Full db path is {str(full_db_path)}')
# Check if it exists
assert full_db_path.is_dir(), "DB unexpectedly missing"
canonical_name = make_canonical_name(device)
click.echo(f'Using canonical device name {canonical_name}')
if not database_path:
logger.error('No default database path found in configuration')
click.echo('No default database path found in configuration')
# 2.2: Check if it exists
if not full_db_path.is_dir():
logger.error('DB unexpectedly missing')
click.echo('DB unexpectedly missing')
return
# Verify device directory
device_path = Path(database_path) / device
canonical_name, aliases = make_canonical_name(device)
click.echo(f'Using canonical device name {canonical_name}')
device_path = full_db_path / canonical_name
# Step 3: now the device
if not device_path.exists():
if not unsafe:
logger.error(f'Device path {device_path} does not exist')
click.echo(f'Device path {device_path} does not exist')
return
# Generate filter
if not unsafe:
if is_ip_address(address):
packet_filter = f"host {address}"
elif is_mac_address(address):
packet_filter = f"ether host {address}"
else:
device_path.mkdir(parents=True, exist_ok=True)
logger.info(f'Device path {device_path} created')
click.echo(f'Found device at path {device_path}')
# Step 4: Generate filter
generic_filter = None
cap_filter = None
if ff:
logger.debug(f'ff: {ff}')
if Path(ff).is_file():
logger.info('Given filter option is a file')
with open(ff, 'r') as f:
cap_filter = f.read().strip()
else:
logger.info('Given filter option is an expression')
cap_filter = ff
else:
if address is not None:
if is_ip_address(address):
generic_filter = 'net'
cap_filter = f'{generic_filter} {address}'
elif is_mac_address(address):
generic_filter = 'ether net'
cap_filter = f'{generic_filter} {address}'
elif not unsafe:
logger.error('Invalid address format')
click.echo('Invalid address format')
return
else:
packet_filter = None
# Prepare capture directory
capture_dir = device_path / 'captures' / datetime.now().strftime('%Y%m%d_%H%M%S')
capture_dir.mkdir(parents=True, exist_ok=True)
logger.info(f'Generic filter {generic_filter}')
click.echo(f'Using filter {cap_filter}')
# Prepare capture file
pcap_file = capture_dir / f"{device}_{datetime.now().strftime('%Y%m%d_%H%M%S')}.pcap"
# Step 5: prep capture directory
capture_date = datetime.now().strftime('%Y-%m-%d')
capture_base_dir = device_path / f'sniffs/{capture_date}'
capture_base_dir.mkdir(parents=True, exist_ok=True)
# Build tcpdump command
cmd = ['sudo', 'tcpdump', '-i', interface, '-w', str(pcap_file)]
if packet_filter:
cmd.append(packet_filter)
logger.info(f'Executing: {" ".join(cmd)}')
logger.debug(f'Previous captures {capture_base_dir.glob("cap*")}')
capture_count = sum(1 for _ in capture_base_dir.glob('cap*'))
logger.debug(f'Capture count is {capture_count}')
# Execute tcpdump
capture_dir = f'cap{capture_count:04d}-{datetime.now().strftime("%H%M")}'
logger.debug(f'capture_dir: {capture_dir}')
# Full path
capture_dir_full_path = capture_base_dir / capture_dir
capture_dir_full_path.mkdir(parents=True, exist_ok=True)
click.echo(f'Files will be placed in {str(capture_dir_full_path)}')
logger.debug(f'successfully created capture directory')
# Step 6: Prepare capture file names
# Generate UUID for filenames
capture_uuid = str(uuid.uuid4())
click.echo(f'Capture has id {capture_uuid}')
pcap_file = f"{canonical_name}_{capture_uuid}.pcap"
pcap_file_full_path = capture_dir_full_path / pcap_file
stdout_log_file = f'stdout_{capture_uuid}.log'
stderr_log_file = f'stderr_{capture_uuid}.log'
logger.debug(f'Full pcap file path is {pcap_file_full_path}')
logger.info(f'pcap file name is {pcap_file}')
logger.info(f'stdout log file is {stdout_log_file}')
logger.info(f'stderr log file is {stderr_log_file}')
# Step 7: Build tcpdump command
logger.debug(f'pgid {os.getpgrp()}')
logger.debug(f'ppid {os.getppid()}')
logger.debug(f'(real, effective, saved) user id: {os.getresuid()}')
logger.debug(f'(real, effective, saved) group id: {os.getresgid()}')
cmd = ['sudo', 'tcpdump']
# 7.1 process flags
flags = []
if print_pacno:
flags.append('-#')
if print_ll:
flags.append('-e')
if monitor_mode:
flags.append('-I')
flags.append('-n') # TODO: Integrate, in case name resolution is wanted!
cmd.extend(flags)
flags_string = " ".join(flags)
logger.debug(f'Flags: {flags_string}')
# debug interlude
verbosity = ctx.obj['VERBOSITY']
if verbosity > 0:
verbosity_flag = '-'
for i in range(0, verbosity):
verbosity_flag = verbosity_flag + 'v'
logger.debug(f'verbosity string to pass to tcpdump: {verbosity_flag}')
cmd.append(verbosity_flag)
# 7.2 generic (i.e. reusable) kw args
generic_kw_args = []
if count:
generic_kw_args.extend(['-c', str(count)])
# if mins:
# generic_kw_args.extend(['-G', str(mins * 60)]) TODO: this currently leads to errors with sudo
cmd.extend(generic_kw_args)
generic_kw_args_string = " ".join(generic_kw_args)
logger.debug(f'KW args: {generic_kw_args_string}')
# 7.3 special kw args (not a priori reusable)
non_generic_kw_args = []
if interface:
non_generic_kw_args.extend(['-i', interface])
non_generic_kw_args.extend(['-w', str(pcap_file_full_path)])
cmd.extend(non_generic_kw_args)
non_generic_kw_args_string = " ".join(non_generic_kw_args)
logger.debug(f'Non transferable (special) kw args: {non_generic_kw_args_string}')
# 7.4 add filter expression
if cap_filter:
logger.debug(f'cap_filter (not generic): {cap_filter}')
cmd.append(cap_filter)
full_cmd_string = " ".join(cmd)
logger.info(f'tcpdump command: {full_cmd_string}')
click.echo('Capture setup complete!')
# Step 8: Execute tcpdump command
start_time = datetime.now().strftime("%H:%M:%S")
start = time()
try:
subprocess.run(cmd, check=True)
if guided:
click.confirm(f'Execute following command: {full_cmd_string}')
stdout_log_file_abs_path = capture_dir_full_path / stdout_log_file
stderr_log_file_abs_path = capture_dir_full_path / stderr_log_file
stdout_log_file_abs_path.touch(mode=0o777)
stderr_log_file_abs_path.touch(mode=0o777)
with open(stdout_log_file_abs_path, 'w') as out, open(stderr_log_file_abs_path, 'w') as err:
logger.debug(f'\nstdout: {out}.\nstderr: {err}.\n')
tcp_complete = subprocess.run(cmd, check=True, capture_output=True, text=True)
out.write(tcp_complete.stdout)
err.write(tcp_complete.stderr)
# click.echo(f'Mock sniff execution')
click.echo(f"Capture complete. Saved to {pcap_file}")
except subprocess.CalledProcessError as e:
logger.error(f'Failed to capture packets: {e}')
click.echo(f'Failed to capture packets: {e}')
click.echo(f'Check {stderr_log_file} for more info.')
if ctx.obj['DEBUG']:
msg = [f'STDERR log {stderr_log_file} contents:\n']
with open(capture_dir_full_path / stderr_log_file) as log:
for line in log:
msg.append(line)
click.echo("\t".join(msg), lvl='e')
# print('DEBUG ACTIVE')
if not guided or not click.confirm('Create metadata anyway?'):
click.echo('Aborting capture...')
sys.exit()
end_time = datetime.now().strftime("%H:%M:%S")
end = time()
delta = end - start
click.echo(f'tcpdump took {delta:.2f} seconds.')
# Step 9: Register metadata
metadata = {
'device': canonical_name,
'device_id': device,
'capture_id': capture_uuid,
'capture_date_iso': datetime.now().isoformat(),
'invoked_command': " ".join(map(str, cmd)),
'capture_duration': delta,
'generic_parameters': {
'flags': flags_string,
'kwargs': generic_kw_args_string,
'filter': generic_filter
},
'non_generic_parameters': {
'kwargs': non_generic_kw_args_string,
'filter': cap_filter
},
'features': {
'interface': interface,
'address': address
},
'resources': {
'pcap_file': str(pcap_file),
'stdout_log': str(stdout_log_file),
'stderr_log': str(stderr_log_file),
'pre': str(pre),
'post': str(post)
},
'environment': {
'capture_dir': capture_dir,
'database': database,
'capture_base_dir': str(capture_base_dir),
'capture_dir_abs_path': str(capture_dir_full_path)
}
}
click.echo('Ensuring correct ownership of created files.')
username = os.getlogin()
# The capture files were created by tcpdump running under sudo, so hand them back to the invoking user:
try:
subprocess.run(['sudo', 'chown', '-R', f'{username}:{username}', str(device_path)], check=True)
except subprocess.CalledProcessError as e:
click.echo(f'Could not change ownership: {e}')
click.echo(f'Saving metadata.')
metadata_abs_path = capture_dir_full_path / 'capture_metadata.json'
with open(metadata_abs_path, 'w') as f:
json.dump(metadata, f, indent=4)
click.echo(f'END SNIFF SUBCOMMAND')
if post:
click.echo(f'Running post script {post}')
run_post(post)
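# Illustrative sketch only (placeholder values, not project defaults): with the flags
# from 7.1, verbosity 2, a count of 100 and an example interface/address, the argument
# list assembled in Step 7 above has roughly this shape.
_example_cmd = [
'sudo', 'tcpdump',
'-#', '-e', '-n', '-vv',  # flags from 7.1 plus the verbosity flag
'-c', '100',  # generic kw args from 7.2
'-i', 'wlan0', '-w', 'philips-hue_<uuid>.pcap',  # special kw args from 7.3
'host 192.168.0.42',  # capture filter from 7.4
]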
@click.command('sniff', help='Sniff packets with tcpdump')
@click.argument('device')
@click.option('-i', '--interface', required=False, help='Network interface to capture on', envvar='IOTTB_CAPTURE_INTERFACE')
@click.option('-a', '--address', required=True, help='IP or MAC address to filter packets by', envvar='IOTTB_CAPTURE_ADDRESS')
@click.option('--db', '--database', type=click.Path(exists=True, file_okay=False), envvar='IOTTB_DB',
help='Database of device. Only needed if not current default.')
@click.option('--unsafe', is_flag=True, default=False, envvar='IOTTB_UNSAFE',
help='Disable checks for otherwise required options')
@click.option('--guided', is_flag=True)
def sniff2(device, interface, address, db, unsafe, guided):
""" Sniff packets from a device """
logger.info('sniff command invoked')
# Step 1: Load Config
# Dependency: Config file must exist
config = IottbConfig(Path(CFG_FILE_PATH))
logger.debug(f'Config loaded: {config}')

View File

@ -0,0 +1,70 @@
import click
from pathlib import Path
import logging
from logging.handlers import RotatingFileHandler
import sys
from iottb.models.iottb_config import IottbConfig
from iottb.definitions import DB_NAME, CFG_FILE_PATH
logger = logging.getLogger(__name__)
@click.command()
@click.option('-d', '--dest', type=click.Path(exists=True, file_okay=False, dir_okay=True),
help='Location to put (new) iottb database')
@click.option('-n', '--name', default=DB_NAME, type=str,
help='Name of new database.')
@click.option('--update-default/--no-update-default', default=True,
help='If new db should be set as the new default')
@click.pass_context
def init_db(ctx, dest, name, update_default):
logger.info('init-db invoked')
config = ctx.obj['CONFIG']
logger.debug(str(config))
# Use the default path from config if dest is not provided
known_dbs = config.get_known_databases()
logger.debug(f'Known databases: {known_dbs}')
if name in known_dbs:
dest = config.get_database_location(name)
if Path(dest).joinpath(name).is_dir():
click.echo(f'A database {name} already exists.')
logger.debug(f'DB {name} exists in {dest}')
click.echo(f'Exiting...')
sys.exit()
logger.debug(f'DB name {name} registered but does not exist.')
if not dest:
logger.info('No dest set, choosing default destination.')
dest = Path(config.default_db_location)
db_path = Path(dest).joinpath(name)
logger.debug(f'Full path for db {str(db_path)}')
# Create the directory if it doesn't exist
db_path.mkdir(parents=True, exist_ok=True)
logger.info(f"mkdir {db_path} successful")
click.echo(f'Created {db_path}')
# Update configuration
config.set_database_location(name, str(dest))
if update_default:
config.set_default_database(name, str(dest))
config.save_config()
logger.info(f"Updated configuration with database {name} at {db_path}")
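# Minimal illustration (invocation and paths here are assumptions, not fixed defaults):
# `iottb init-db --dest ~/captures --name iottb.db` boils down to
#     db_path = Path('~/captures').expanduser() / 'iottb.db'
#     db_path.mkdir(parents=True, exist_ok=True)
# followed by recording the location (and, with --update-default, the new default
# database) in the iottb config file.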
# @click.group('config')
# @click.pass_context
# def cfg(ctx):
# pass
#
# @click.command('set', help='Set the location of a database.')
# @click.argument('database', help='Name of database')
# @click.argument('location', help='Where the database is located (i.e. its parent directory)')
# @click.pass_context
# def set(ctx, key, value):
# click.echo(f'Setting {key} to {value} in config')
# config = ctx.obj['CONFIG']
# logger.warning('No checks performed!')
# config.set_database_location(key, value)
# config.save_config()

View File

@ -42,3 +42,7 @@ TB_ECHO_STYLES = {
'e': {'fg': 'red', 'bold': True},
'header': {'fg': 'bright_cyan', 'bold': True, 'italic': True}
}
NAME_OF_CAPTURE_DIR = 'sniffs'

View File

@ -1,16 +1,19 @@
import sys
import click
from pathlib import Path
import logging
from iottb.commands.sniff import sniff
from iottb.commands.developer import set_key_in_table_to, rm_cfg, rm_dbs, show_cfg, show_everything
##################################################
# Import package modules
#################################################
from iottb.utils.logger_config import setup_logging
from iottb import definitions
from iottb.models.iottb_config import IottbConfig
from iottb.commands.initialize_testbed import init_db
from iottb.commands.testbed import init_db
from iottb.commands.add_device import add_device
############################################################################
@ -28,26 +31,34 @@ loglevel = definitions.LOGLEVEL
logger = logging.getLogger(__name__)
@click.group()
@click.option('-v', '--verbosity', count=True, type=click.IntRange(0, 3), default=0,
@click.group(context_settings=dict(auto_envvar_prefix='IOTTB', show_default=True))
@click.option('-v', '--verbosity', count=True, type=click.IntRange(0, 3), default=0, is_eager=True,
help='Set verbosity')
@click.option('-d', '--debug', is_flag=True, default=False,
@click.option('-d', '--debug', is_flag=True, default=False, is_eager=True,
help='Enable debug mode')
@click.option('--dry-run', is_flag=False, default=True, is_eager=True, help='NOT USED!')
@click.option('--cfg-file', type=click.Path(),
default=Path(click.get_app_dir(APP_NAME)).joinpath('iottb.cfg'),
envvar='IOTTB_CONF_HOME', help='Path to iottb config file')
@click.pass_context
def cli(ctx, verbosity, debug, cfg_file):
setup_logging(verbosity, debug) # Setup logging based on the loaded configuration and other options
def cli(ctx, verbosity, debug, dry_run, cfg_file):
# Setup logging based on the loaded configuration and other options
setup_logging(verbosity, debug)
ctx.ensure_object(dict) # Make sure context is ready for use
logger.info("Starting execution.")
ctx.obj['CONFIG'] = IottbConfig(cfg_file) # Load configuration directly
ctx.meta['FULL_PATH_CONFIG_FILE'] = str(cfg_file)
ctx.meta['DRY_RUN'] = dry_run
logger.debug(f'Verbosity: {verbosity}')
ctx.obj['VERBOSITY'] = verbosity
logger.debug(f'Debug: {debug}')
ctx.obj['DEBUG'] = debug
##################################################################################
# Add all subcommands to group here
#################################################################################
# TODO: Is there a way to do this without pylint freaking out?
# noinspection PyTypeChecker
cli.add_command(init_db)
cli.add_command(rm_cfg)
@ -58,5 +69,9 @@ cli.add_command(add_device)
cli.add_command(show_cfg)
cli.add_command(sniff)
cli.add_command(show_everything)
if __name__ == '__main__':
cli(auto_envvar_prefix='IOTTB', show_default=True, show_envvars=True)
cli()
for log in Path.cwd().iterdir():
log.chmod(0o777)

View File

@ -1,3 +1,4 @@
import json
import logging
import uuid
from datetime import datetime
@ -11,12 +12,12 @@ logger = logging.getLogger(__name__)
class DeviceMetadata:
def __init__(self, device_name, description="", model="", manufacturer="", firmware_version="", device_type="",
supported_interfaces="", companion_applications="", save_to_file=None):
supported_interfaces="", companion_applications="", save_to_file=None, aliases=None):
self.device_id = str(uuid.uuid4())
self.device_name = device_name
cn, aliases = make_canonical_name(device_name)
logger.debug(f'cn, aliases = {cn}, {str(aliases)}')
self.aliases = aliases
cn, default_aliases = make_canonical_name(device_name)
logger.debug(f'cn, default aliases = {cn}, {str(default_aliases)}')
self.aliases = default_aliases if aliases is None else default_aliases + aliases
self.canonical_name = cn
self.date_added = datetime.now().isoformat()
self.description = description
@ -42,3 +43,8 @@ class DeviceMetadata:
print(f'Printing attribute value pairs in {__name__}')
for attr, value in self.__dict__.items():
print(f'{attr}: {value}')
def save_metadata_to_file(self, metadata_path):
with open(metadata_path, 'w') as metadata_file:
json.dump(self.__dict__, metadata_file, indent=4)
click.echo(f'Metadata saved to {metadata_path}')
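# Usage sketch (device name, kwargs and file name are illustrative assumptions):
# meta = DeviceMetadata('Philips Hue Bridge', manufacturer='Signify', aliases=['hue'])
# meta.save_metadata_to_file(device_dir / 'device_metadata.json')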

View File

@ -1,4 +1,39 @@
import json
import logging
import uuid
from datetime import datetime
from pathlib import Path
logger = logging.getLogger('iottb.sniff')  # log under the sniff subcommand's logger
class CaptureMetadata:
def __init__(self, device_id, capture_dir, interface, address, capture_file, tcpdump_command, tcpdump_stdout, tcpdump_stderr, packet_filter, alias):
self.base_data = {
'device_id': device_id,
'capture_id': str(uuid.uuid4()),
'capture_date': datetime.now().isoformat(),
'capture_dir': str(capture_dir),
'capture_file': capture_file,
'start_time': "",
'stop_time': "",
'alias': alias
}
self.features = {
'interface': interface,
'device_ip_address': address if address else "No IP Address set",
'tcpdump_stdout': str(tcpdump_stdout),
'tcpdump_stderr': str(tcpdump_stderr),
'packet_filter': packet_filter
}
self.command = tcpdump_command
def save_to_file(self):
metadata = {
'base_data': self.base_data,
'features': self.features,
'command': self.command
}
metadata_file_path = Path(self.base_data['capture_dir']) / 'metadata.json'
with open(metadata_file_path, 'w') as f:
json.dump(metadata, f, indent=4)
logger.info(f'Metadata saved to {metadata_file_path}')
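# Usage sketch (all values are placeholders): collect the capture artefacts and then
# persist <capture_dir>/metadata.json.
# cap = CaptureMetadata(device_id='<device-uuid>', capture_dir=Path('sniffs/<capture-uuid>'),
#                       interface='wlan0', address='192.168.0.42',
#                       capture_file='philips-hue_<capture-uuid>.pcap',
#                       tcpdump_command='sudo tcpdump -n -i wlan0 ...',
#                       tcpdump_stdout='stdout_<capture-uuid>.log',
#                       tcpdump_stderr='stderr_<capture-uuid>.log',
#                       packet_filter='host 192.168.0.42', alias='philips-hue')
# cap.save_to_file()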

View File

@ -0,0 +1,74 @@
from pathlib import Path
import click
from io import StringIO
import sys
from iottb import DOCS_FOLDER
# Import your CLI app here
from iottb.main import cli
"""Script to generate the help text and write to file.
The output formatting is still basic and the script is not very flexible.
"""
def get_help_text(command):
"""Get the help text for a given command."""
help_text = StringIO()
with click.Context(command) as ctx:
# Temporarily redirect stdout so the help text emitted by click.echo ends up in the StringIO buffer
sys_stdout = sys.stdout
sys.stdout = help_text
try:
click.echo(command.get_help(ctx))
finally:
sys.stdout = sys_stdout
return help_text.getvalue()
def write_help_to_file(cli, filename):
"""Write help messages of all commands and subcommands to a file."""
with open(filename, 'w+') as f:
# main
f.write(f"Main Command: iottb\n")
f.write(get_help_text(cli))
f.write("\n\n")
# go through subcommands
for cmd_name, cmd in cli.commands.items():
f.write(f"Command: {cmd_name}\n")
f.write(get_help_text(cmd))
f.write("\n\n")
# subcommands of subcommands
if isinstance(cmd, click.Group):
for sub_cmd_name, sub_cmd in cmd.commands.items():
f.write(f"Subcommand: {cmd_name} {sub_cmd_name}\n")
f.write(get_help_text(sub_cmd))
f.write("\n\n")
def manual():
commands = [
'init-db',
'add-device',
'sniff'
]
dev_commands = [
'show-all',
'rm-dbs',
'show-cfg'
]
if __name__ == "__main__":
from iottb import DOCS_FOLDER
print('Must be in project root for this to work properly!')
print(f'CWD is {str(Path.cwd())}')
DOCS_FOLDER.mkdir(exist_ok=True)
write_help_to_file(cli, str(DOCS_FOLDER / "help_messages.md"))
print(f'Wrote help_messages.md to {str(DOCS_FOLDER / "help_messages.md")}')

View File

@ -0,0 +1,4 @@
#!/bin/sh
echo 'Running iottb as sudo'
sudo "$(which python)" iottb "$@"
echo 'Finished executing iottb with sudo'

View File

@ -34,7 +34,7 @@ def make_canonical_name(name):
parts = norm_name.split('-')
canonical_name = '-'.join(parts[:2])
aliases.append(canonical_name)
aliases = list(set(aliases))
logger.debug(f'Canonical name: {canonical_name}')
logger.debug(f'Aliases: {aliases}')
return canonical_name, list(set(aliases))
return canonical_name, aliases
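# Worked example based on the lines above: a name normalised to 'philips-hue-bridge'
# splits into ['philips', 'hue', 'bridge'], so the canonical name becomes 'philips-hue';
# it is appended to the aliases, which are de-duplicated before being returned.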

View File

@ -14,6 +14,25 @@ files = [
[package.dependencies]
colorama = {version = "*", markers = "platform_system == \"Windows\""}
[[package]]
name = "click-option-group"
version = "0.5.6"
description = "Option groups missing in Click"
optional = false
python-versions = ">=3.6,<4"
files = [
{file = "click-option-group-0.5.6.tar.gz", hash = "sha256:97d06703873518cc5038509443742b25069a3c7562d1ea72ff08bfadde1ce777"},
{file = "click_option_group-0.5.6-py3-none-any.whl", hash = "sha256:38a26d963ee3ad93332ddf782f9259c5bdfe405e73408d943ef5e7d0c3767ec7"},
]
[package.dependencies]
Click = ">=7.0,<9"
[package.extras]
docs = ["Pallets-Sphinx-Themes", "m2r2", "sphinx"]
tests = ["pytest"]
tests-cov = ["coverage", "coveralls", "pytest", "pytest-cov"]
[[package]]
name = "colorama"
version = "0.4.6"
@ -82,22 +101,7 @@ pluggy = ">=1.5,<2.0"
[package.extras]
dev = ["argcomplete", "attrs (>=19.2)", "hypothesis (>=3.56)", "mock", "pygments (>=2.7.2)", "requests", "setuptools", "xmlschema"]
[[package]]
name = "scapy"
version = "2.5.0"
description = "Scapy: interactive packet manipulation tool"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, <4"
files = [
{file = "scapy-2.5.0.tar.gz", hash = "sha256:5b260c2b754fd8d409ba83ee7aee294ecdbb2c235f9f78fe90bc11cb6e5debc2"},
]
[package.extras]
basic = ["ipython"]
complete = ["cryptography (>=2.0)", "ipython", "matplotlib", "pyx"]
docs = ["sphinx (>=3.0.0)", "sphinx_rtd_theme (>=0.4.3)", "tox (>=3.0.0)"]
[metadata]
lock-version = "2.0"
python-versions = "^3.12"
content-hash = "10b2c268b0f10db15eab2cca3d2dc9dc25bc60f4b218ebf786fb780fa85557e0"
content-hash = "05aa11a74b8a6411a4413684f1a4cb0e5bcd271e16b4de9ae5205d52232c91a3"

View File

@ -4,12 +4,13 @@ version = "0.1.0"
description = "IoT Testbed"
authors = ["Sebastian Lenzlinger <sebastian.lenzlinger@unibas.ch>"]
readme = "README.md"
package-mode = false
license = "LICENSE"
[tool.poetry.dependencies]
python = "^3.12"
click = "^8.1"
scapy = "^2.5"
# scapy = "^2.5"
click-option-group = "^0.5.6"
[tool.poetry.scripts]
iottb = "iottb.main:cli"

View File

@ -0,0 +1,9 @@
click-option-group==0.5.6 ; python_version >= "3.12" and python_version < "4" \
--hash=sha256:38a26d963ee3ad93332ddf782f9259c5bdfe405e73408d943ef5e7d0c3767ec7 \
--hash=sha256:97d06703873518cc5038509443742b25069a3c7562d1ea72ff08bfadde1ce777
click==8.1.7 ; python_version >= "3.12" and python_version < "4.0" \
--hash=sha256:ae74fb96c20a0277a1d615f1e4d73c8414f5a98db8b799a7931d1582f3390c28 \
--hash=sha256:ca9853ad459e787e2192211578cc907e7594e294c7ccc834310722b41b9ca6de
colorama==0.4.6 ; python_version >= "3.12" and python_version < "4.0" and platform_system == "Windows" \
--hash=sha256:08695f5cb7ed6e0531a20572697297273c47b8cae5a63ffc6d6ed5c201be6e44 \
--hash=sha256:4f1d9991f5acc0ca119f9d443620b77f9d6b33703e51011c16baf57afb285fc6

View File

@ -1,82 +0,0 @@
#!/usr/bin/env python3
import argparse
from os import environ
from pathlib import Path
from iottb.logger import logger
from iottb.subcommands.add_device import setup_init_device_root_parser
from iottb.subcommands.capture import setup_capture_parser
from iottb.utils.tcpdump_utils import list_interfaces
from definitions import IOTTB_HOME_ABS, ReturnCodes
######################
# Argparse setup
######################
def setup_argparse():
# create top level parser
root_parser = argparse.ArgumentParser(prog='iottb')
subparsers = root_parser.add_subparsers(title='subcommands', required=True, dest='command')
# shared options
root_parser.add_argument('--verbose', '-v', action='count', default=0)
# configure subcommands
setup_capture_parser(subparsers)
setup_init_device_root_parser(subparsers)
# Utility to list interfaces directly with iottb instead of relying on external tooling
interfaces_parser = subparsers.add_parser('list-interfaces', aliases=['li', 'if'],
help='List available network interfaces.')
interfaces_parser.set_defaults(func=list_interfaces)
return root_parser
def check_iottb_env():
# This makes the option '--root-dir' obsolescent # TODO How to streamline this?\
try:
iottb_home = environ['IOTTB_HOME'] # TODO WARN implicit declaration of env var name!
except KeyError:
logger.error(f"Environment variable 'IOTTB_HOME' is not set."
f"Setting environment variable 'IOTTB_HOME' to '~/{IOTTB_HOME_ABS}'")
environ['IOTTB_HOME'] = IOTTB_HOME_ABS
finally:
if not Path(IOTTB_HOME_ABS).exists():
print(f'"{IOTTB_HOME_ABS}" does not exist.')
response = input('Do you want to create it now? [y/N]')
logger.debug(f'response: {response}')
if response.lower() != 'y':
logger.debug(f'Not creating "{environ['IOTTB_HOME']}"')
print('TODO')
print("Aborting execution...")
return ReturnCodes.ABORTED
else:
print(f'Creating "{environ['IOTTB_HOME']}"')
Path(IOTTB_HOME_ABS).mkdir(parents=True,
exist_ok=False) # Should always work since in 'not exist' code path
return ReturnCodes.OK
logger.info(f'"{IOTTB_HOME_ABS}" exists.')
# TODO: Check that it is a valid iottb dir or can we say it is valid by definition if?
return ReturnCodes.OK
def main():
if check_iottb_env() != ReturnCodes.OK:
exit(ReturnCodes.ABORTED)
parser = setup_argparse()
args = parser.parse_args()
print(args)
if args.command:
try:
args.func(args)
except KeyboardInterrupt:
print('Received keyboard interrupt. Exiting...')
exit(1)
except Exception as e:
print(f'Error: {e}')
# create_capture_directory(args.device_name)
if __name__ == '__main__':
main()

View File

@ -1,28 +0,0 @@
import logging
import sys
from logging.handlers import RotatingFileHandler
def setup_logging():
logger_obj = logging.getLogger('iottbLogger')
logger_obj.setLevel(logging.DEBUG)
file_handler = RotatingFileHandler('iottb.log')
console_handler = logging.StreamHandler(sys.stdout)
file_handler.setLevel(logging.INFO)
console_handler.setLevel(logging.DEBUG)
file_fmt = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')
console_fmt = logging.Formatter('%(asctime)s - %(levelname)s - %(filename)s:%(lineno)d - %(funcName)s - %(message)s')
file_handler.setFormatter(file_fmt)
console_handler.setFormatter(console_fmt)
logger_obj.addHandler(file_handler)
logger_obj.addHandler(console_handler)
return logger_obj
logger = setup_logging()

BIN
iottb Executable file

Binary file not shown.

View File

@ -12,3 +12,4 @@ I want an option such that one can automatically convert a captures resulting fi
## Defining Experiment
I want a pair of commands that 1. provide a guided CLI interface to define an experiment and 2. run that experiment -> Here the [Collective Knowledge Framework](https://github.com/mlcommons/ck) might actually come in handy. They already have tooling for setting up and defining aspects of experiments so that they become reproducible. So maybe one part of `iottb` as a tool would be to write the correct JSON files into the directory, containing the information on how the command was run. Caveat: not all option values matter for reproduction; basically it only matters whether an option was used or not (flag options) or that it was used at all (e.g. an IP address was used in the filter, but the specific value of the IP is of no use for reproducing). Also, Collective Mind's tooling relies on very common ML algorithms/frameworks and static data, so maybe this only comes into play after a capture has been done. So maybe a feature extraction tool (see [[further considerations#Usage paths/ Workflows]]) should create the data and build the database separately.
#remark The tcpdump filter could also be exported into an environment variable? But then again, what would be the use of defining a conformance; one could just use the raw capture idea for tcpdump, too.
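A minimal sketch of what such a per-capture "recipe" file could record (field names and paths here are just an assumption, not a fixed schema):

```python
# Sketch: dump a machine-readable record of how a capture command was invoked,
# so the capture can be re-run later. Field names and paths are assumptions.
import json
from pathlib import Path

recipe = {
    'command': 'tcpdump',
    'flags': ['-#', '-e', '-n'],
    'parameters': {'-c': 100, '-i': '<interface>'},
    'filter_template': 'host <device-address>',  # the concrete address is not needed to reproduce
}
Path('capture_recipe.json').write_text(json.dumps(recipe, indent=4))
```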

View File

13
notes/scrible Normal file
View File

@ -0,0 +1,13 @@
`iottb sniff`:
min: nothing
min meaningful: interface
min useful: ip/mac addr of dev
good: ip/mac, device type
better:
`iottb device`
`add`: add new device config
`iottb db`
`init` initialize device database
`add` add device
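Example invocations sketched from the notes above (the command structure and option names are assumptions, mixing the proposed layout with the current sniff options):
iottb db init
iottb device add "Philips Hue Bridge"
iottb sniff philips-hue -i wlan0 -a 192.168.0.42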

7
notes/scrible.py Normal file
View File

@ -0,0 +1,7 @@
from pathlib import Path
class Config:
db_dir = Path.home()
app_config_dir = Path.home() / '.config'
db_name = 'IoTtb.db'
app_config_name = 'iottb.conf'

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

View File

@ -0,0 +1,447 @@
#import "/globals.typ": *
//#outline-slide()
= Introduction
== Why are we here?
#slide[
#set align(center)
#grid(align: auto, rows: (13fr, 1fr), gutter: 1pt, inset: 1pt,
[#image("resources/iot-diagram-1.jpg")
#set text(size: 13pt)
#link("https://tse3.mm.bing.net/th?id=OIP.o3AVQNkQCCG_2cmhQzD1zQHaEW&pid=Api"),
#v(5pt)]
)
]
#slide[
#set align(left)
== Project Description
To study the privacy and security aspects of IoT devices
- _systematically_ and
- _reproducibly_,
we need an easy-to-use
- _testbed_
that
- _automates_
#text(size: 0.7em, [(some aspects of)]) the process of experimenting with IoT devices.
#v(5pt)
*In this presentation I describe an implementation of such a testbed:* `IOTTB`
#speaker-note[
- _systematically_: standardization,
- _reproducible_: a systematic approach promises more reproducible experiments, and thus better verifiable results.
- _testbed_: an environment which fixes certain parameters
- _automates_: beyond reproducibility, the level of manual involvement influences feasibility w.r.t. reproduction
]
]
== Principal Objectives
#slide[
#v(5pt)
== Objectives
Key objectives:
+ _Automation recipes_ @fursinckorg2021 for repeated execution of experiments, including data collection and analysis.
+ _FAIR_ data storage (Findable, Accessible, Interoperable, Reusable) (see @faircsartefacts2022, @go-fair and @wilkinson_fair_2016).
]
= Motivation
== Problem(s)
#slide(composer: utils.side-by-side)[
1 Manual setup and configuration of tools
- e.g. `tcpdump`, `Wireshark`, `Frida`
- configurations not interoperable between tools
#pause
2 Ad-hoc decisions
- file/artefact naming
- measured/extracted data features
- metadata recorded
#pause
3 Tailored utilities
- lack interoperability
- require adaptation depending on project
][
#pause
4 Scattered data and lack of standardization
- Inconsistent data collection and storage
- Difficult to maintain compatibility across projects
#pause
5 Onboarding challenges
- New members create ad-hoc solutions
- Perpetuates inefficiency and inconsistency
]
== Challenges Faced
#slide[
- Problems with current approach:
+ Inconsistent data collection
+ Lack of standardized tools and methods
+ Issues with file naming and data structuring
- Resulting difficulties:
+ Compatibility across projects
+ Onboarding new members
+ Ad-hoc solutions perpetuating inefficiency
]
= Background
== IoT Devices
#slide[
#set text(size: 14pt)
#grid(
rows: (4fr, 7fr),
gutter: 3pt,
grid(columns: 4,
[#figure(image("resources/philips-hue.jpg"),caption: [Smart Lighting])<fig:philips-hue>],
[#figure(image("resources/echo-dot.jpeg"), caption: [Smart Speakers])<fig:echo-dot>],
[#figure(image("resources/mi-camera.png", height: 80%), caption: [Home Surveillance Camera])<fig:mi-camera>],
[#figure(image("resources/meta-quest-2.png"), caption: [VR Headset])<fig:meta-quest-2>]),
grid(columns: (2.5fr, 3fr,2.5fr, 3fr),
[#figure(image("resources/dall-e-home-topo-1.jpeg", height: 80%), caption: [Dall-E Diagram of a Smart Home Network])],
grid.cell(colspan: 1, align: top+left, inset: 0.5em, breakable: true, [
#set text(size: 15pt)
#h(12pt)
#v(12pt)
IoT devices offer #alert[benefits]:
- Home lighting control
- Remote video monitoring
- Automated cleaning
#v(-5pt)
and more! But, because they are
+ Used in Homes
+ Connected
- LAN only
- Internet
- #text(size:0.8em, [May lead to information leakage])
]),
grid.cell(colspan: 1, align: top+left, inset: 1em, breakable: true, [
#set text(size: 15pt)
#h(12pt)
#v(12pt)
#math.arrow.r.double Security and privacy *risks*
- Surveillance potential
- Unauthorized data sharing
- Vulnerable to bugs and security failures]),
[#figure(image("resources/dall-e-home-topo-2.jpeg", height: 80%), caption: [Dall-E Schematic Smart Home Network])]
)
)
]
#slide[
#set align(left)
- *IoT Devices Overview:*
- Devices connected to the internet (voice assistants, smart watches, smart home gadgets)
- Embedded with microprocessors and software
- *Examples of IoT Devices:*
- Security cameras
- Home lighting systems
- Children's toys
- *Importance of IoT:*
- Physical dimension (sensors, controllers)
- Internet connectivity
]
== Testbeds
#slide[
#set align(left)
- *What is a Testbed?*
- Controlled environment for experiments
- Ensures reproducibility and standardization
- *Examples of Testbeds:*
- Industry and Engineering: Platforms for product development
- Natural Sciences: Laboratories (e.g., climate chambers, wind tunnels, see @vaughan2005use)
- Computing: Software testing environments (unit tests, IDEs)
- Interdisciplinary: Complex systems (e.g., smart electric grid testbeds, see @tbsmartgrid2013)
]
== FAIR Data Principles
#slide[
#set align(left)
- *FAIR Data Principles:* @wilkinson_fair_2016, @go-fair
- *Findability:* Data should be easy to find
- *Accessibility:* Data should be accessible under well-defined conditions
- *Interoperability:* Data should be integrated with other data
- *Reusability:* Data should be reusable for future research
- *Purpose:*
- Improve reusability of scientific data
- Guide for designing _data storage_ systems
#speaker-note[
#set text(size: 0.5em)
#grid(columns: 2,[
*Findability:*
- Ensuring data is easily locatable and identifiable.
- Use of persistent identifiers like DOIs.
- Metadata should be richly described to enable precise searching.
- *Positive Example:* A dataset with a DOI and comprehensive metadata that is indexed in major search engines.
- *Negative Example:* A dataset stored on a personal computer with no metadata and no persistent identifier.
*Accessibility:*
- Data should be retrievable by authorized users.
- Use of standardized protocols for data access.
- Clear access conditions and usage licenses.
- *Positive Example:* A dataset available through a well-documented API with clear access guidelines and permissions.
- *Negative Example:* A dataset stored in a proprietary format that requires special software to access, with unclear or restrictive access conditions.
],[
*Interoperability:*
- Data should integrate with other datasets.
- Use of standardized formats and vocabularies.
- Ensure compatibility with existing data and tools.
- *Positive Example:* A dataset in CSV format using standardized column headers that align with other datasets in the field.
- *Negative Example:* A dataset in a non-standard format with custom jargon that is difficult to merge with other data sources.
*Reusability:*
- Data should be well-documented to allow future use.
- Include clear licensing for reuse.
- Ensure data quality and provenance are maintained.
- *Positive Example:* A dataset with a clear Creative Commons license, detailed documentation, and a version history.
- *Negative Example:* A dataset with no documentation, unclear provenance, and no stated reuse policy.
])
]
]
== Network Traffic
#slide[
#set align(left)
- *Importance of Network Traffic in IoT:*
+ Captures communication patterns (device-to-server (internet), device-to-device (LAN, e.g., companion apps))
+ Essential for evaluating performance and identifying unauthorized communications
- *Protocol Analysis:*
+ Understand device operation and communication protocols
+ Identify compatibility, efficiency, and security issues
- *Flow Monitoring:*
+ Detect potential security threats (data breaches, unauthorized access, malware)
+ Monitor for anomalies indicating security incidents or vulnerabilities
- *Information Leakage:*
+ Adversaries can passively observe traffic and extract sensitive information
+ Even encrypted traffic can leak information about the smart environment and users
see @infoexpiot, @iothome2019, @friesssniffing2018 and @peekaboo2020
#speaker-note[
- Network traffic is important for us for various reasons
- due to data being encrypted in many cases nowadays
- most methods boil down to some type of network traffic analysis
]
]
== Findings from Key Studies
#slide[
#set align(left)
*Examples:*\
- *Leakage:* Personal data and device usage patterns. @infoexpiot
- *Details:* The study found that IoT devices often leak personal data and detailed usage patterns to third-party servers.
- *Leakage:* Home device interactions and usage. @iothome2019
- *Details:* This research revealed that interactions with home devices can be intercepted, providing insights into daily routines and activities.
- *Leakage:* Device/Network communication _patterns_.@friesssniffing2018
- *Details:* Sniffing tools can capture communications between IoT devices. WiFi packets expose usage patterns regardless of encryption@peekaboo2020. Those patterns contain features which can be extracted (i.e. leaked) and fed into machine learning models which are capable of exposing more meaningful information (e.g., identifying devices and their functionality) @alyamiwifi2022.
In the end these are all some aspect of the same issue: even encrypted traffic leaks information which can be valuable to adversaries.
#speaker-note[
Examples:
- how many people live in a household
- how many devices are in the household
- when which devices are online
- when and who is home
]
]
== Packet Capture
#slide[
#set align(left)
- *Network Packet Capture:*
+ Intercepting and storing data packets on a network
+ Principal technique for studying device behavior and communication patterns
- *Importance in IoT Security Research:*
+ Main data collection mechanism
+ Essential for analyzing network traffic
//#math.arrow.r.double Wireshark Example
#speaker-note[
- data collection for network traffic
]
]
== Automation Recipes
#slide[
#set align(left)
- *Automation Recipes:*
- Platform agnostic automation
- e.g., install tool y, retrieve dataset x
- Integrate with existing scripts/tools
- Examples in ML
- _Collective Mind Framework:_ @CommonLanguageFacilitate2023, @fursinckorg2021
- Provides reusable recipes for building, running, benchmarking, and optimizing applications
- Platform-independent or supplemented with user-specific scripts
#speaker-note[
- *Importance of Automation:*
- Automates workflows irrespective of underlying tools
- the agnostic part is just the goal
- these recipes must be able to integrate well with existing tools, personal scripts
- Enhances reproducibility and efficiency in experiments
- Underlying data has a standardized (w.r.t. tooling) format, if tool is available
]
]
== Summary of Key Points
#slide[
#set align(left)
- *Key Issues Identified:*
+ Manual setup and configuration of tools
+ Ad-hoc decisions in file naming, data features, and metadata
+ Tailored utilities lacking interoperability
+ Scattered data and lack of standardization
+ Onboarding challenges for new members
- *Importance of Addressing These Issues:*
+ Improve reproducibility and reliability of experiments
+ Enhance data quality and interoperability
+ Facilitate easier onboarding and collaboration
]
== Return to ...
#slide[
#set align(left)
- *How IOTTB Addresses These Issues:*
+ *Automation Recipes:*
- Standardize the setup and configuration of tools
- Ensure consistent data collection and analysis processes
+ *FAIR Data Storage:*
- Enhance findability, accessibility, interoperability, and reusability of data
- Improve data management and sharing practices
+ *Testbed Design:*
- Provide a controlled environment for reproducible experiments
- Simplify onboarding and collaboration through standardized procedures
]
= #smallcaps[IoTdb]
== Model Environment
#slide(composer: (1fr, 1fr))[
#figure(
image("resources/network-setup1.png"),
caption: [Common capture setup. Separate AP, switch and capturing device.]
)<fig:setup1>
][
#figure(
image("resources/setup2.png"),
caption: [Setup with AP and "Capture Device" on same machine.]
)
]
== The testbed
#slide[
#align(top + center)[_[...] testbed for IoT devices which automates aspects of running experiments._]
#pause
How is this realized?\
#pause
*`iottb`*:
- Python Package
- Defines Data Storage (implicit in behaviour)
- Database is a directory hierarchy in a file system
- DB is a collection of "device"-folders
- Devices in turn hold some metadata and can have subfolders containing capture data #pause
- Defines a metadata schema for devices, as well as captures
- Automates collecting of metadata + data
]
#focus-slide[#align(center+horizon,[DEMO])]
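// Illustrative sketch of the storage layout described above (names are examples taken
// from this change set, not guaranteed conventions):
//   <database>/
//     <canonical-device-name>/
//       device metadata (JSON)
//       sniffs/<capture-dir>/
//         <canonical-device-name>_<capture-uuid>.pcap
//         capture_metadata.json
//         stdout_<capture-uuid>.log / stderr_<capture-uuid>.log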
= Outlook
== Evaluation
#slide[
*FAIR*-ness?\
#pause
_Findability_:\
- supported through use of UUIDs, while maintaining human readability.
#speaker-note[Findable
F1. (Meta)data are assigned a globally unique and persistent identifier
F2. Data are described with rich metadata (defined by R1 below)
F3. Metadata clearly and explicitly include the identifier of the data they describe
F4. (Meta)data are registered or indexed in a searchable resource]
]
#slide[
*FAIR*-ness?\
_Findability_:\
- supported through use of UUIDs, while maintaining human readability.
_Accessibility_:\
- to a degree, up to the user of the testbed
- UUID precondition for data met
- metadata makes sense also without data
#speaker-note[
A1. (Meta)data are retrievable by their identifier using a standardised communications protocol
A1.1 The protocol is open, free, and universally implementable
A1.2 The protocol allows for an authentication and authorisation procedure, where necessary
A2. Metadata are accessible, even when the data are no longer available
]
]
#slide[
*FAIR*-ness?\
_Findability_:\
- supported through use of UUIDs, while maintaining human readability.
_Accessibility_:\
- to a degree, up to the user of the testbed
- UUID precondition for data met
- metadata makes sense also without data
_Interoperability_:\
- Used data formats are common and well known (json, pcap)
- Metadata schema understandable given example
#speaker-note[
1. (Meta)data use a formal, accessible, shared, and broadly applicable language for knowledge representation.
I2. (Meta)data use vocabularies that follow FAIR principles
I3. (Meta)data include qualified references to other (meta)data
]
]
#slide[
*FAIR*-ness?\
_Findability_:\
- supported through use of UUIDs, while maintaining human readability.
_Accessibility_:\
- to a degree, up to the user of the testbed
- UUID precondition for data met
- metadata makes sense also without data
_Interoperability_:\
- Used data formats are common and well known (json, pcap)
- Metadata schema understandable given example
_Reusability_:\
- Used formats support this.
- Data capture tool (`iottb`) can be made available
- + rerun with the same configuration
#speaker-note[
R1. (Meta)data are richly described with a plurality of accurate and relevant attributes
R1.1. (Meta)data are released with a clear and accessible data usage license
R1.2. (Meta)data are associated with detailed provenance
R1.3. (Meta)data meet domain-relevant community standard
]
]
#slide[
*Automation Recipes*?\
- `iottb` automates capture
- Metadata should allow repeating experiments
- want: configure capture based on metadata
]
= Questions
= Appendix
#bibliography("presentation-bsc.bib", style: "ieee")
== Images
#slide[
#set text(size: 13pt)
//#show link: underline
#show link: set text(stroke: blue)
*Introduction*#footnote([Images licenced for free share and use to the best of my knowledge.])\
- IoT Network Diagram: #link("https://tse3.mm.bing.net/th?id=OIP.o3AVQNkQCCG_2cmhQzD1zQHaEW&pid=Api")
- @fig:echo-dot: #link("https://i0.wp.com/thegroyne.com/wp-content/uploads/2018/04/Amazon-Echo-Dot-Altavoces-inteligentes-04.jpeg")
- @fig:philips-hue: #link("https://www.multimediaplayer.it/wp-content/uploads/kit-philips-hue.jpg")
- @fig:mi-camera: #link("https://d.otto.de/files/bd42f6e9-ac45-5e1c-8d5f-ac3affcee9d6.pdf")#footnote("Unclear licence")
]

View File

@ -0,0 +1,23 @@
#import "@preview/touying:0.4.2": *
#import "university.typ"
#let s = university.register(aspect-ratio: "16-9")
#let s = (s.methods.info)(
self: s,
title: [IOTTB],
subtitle: [An Automation Testbed for IoT Devices],
author: [Sebastian Lenzlinger],
date: datetime.today(),
institution: [University of Basel \ Department of Mathematics and Computer Science
\ Privacy-Enhancing Technologies Group],
logo: image("logo-en.svg")
)
#let s = (s.methods.numbering)(self: s, section: "1.", "1.1.1")
//#let s = (s.methods.show-notes-on-second-screen)(self: s, right)
#show figure.caption: set text(size: 8pt)
#let (init, slides, touying-outline, alert, speaker-note) = utils.methods(s)
#let (slide, empty-slide, title-slide, outline-slide, focus-slide, matrix-slide) = utils.slides(s)

File diff suppressed because one or more lines are too long

After

Width:  |  Height:  |  Size: 20 KiB

View File

@ -0,0 +1,12 @@
// main.typ
#import "/globals.typ": *
#show link: underline
#show link: set text(stroke: blue)
#show: init
#show strong: alert
#show: slides
#include "content.typ"

View File

@ -0,0 +1,577 @@
@article{abuwaragaTestbed2020,
title = {Design and Implementation of Automated {{IoT}} Security Testbed},
author = {Abu Waraga, Omnia and Bettayeb, Meriem and Nasir, Qassim and Abu Talib, Manar},
date = {2020-01-01},
journaltitle = {Computers \& Security},
shortjournal = {Computers \& Security},
volume = {88},
pages = {101648},
issn = {0167-4048},
doi = {10.1016/j.cose.2019.101648},
abstract = {The emergence of technology associated with the Internet of Things (IoT) is reshaping our lives, while simultaneously raising many issues due to their low level of security, which attackers can exploit for malicious purposes. This research paper conducts a comprehensive analysis of previous studies on IoT device security with a focus on the various tools used to test IoT devices and the vulnerabilities that were found. Additionally, the paper contains a survey of IoT-based security testbeds in the research literature. In this research study, we introduce an open source platform for identifying weaknesses in IoT networks and communications. The platform is easily modifiable and extendible to enable the addition of new security assessment tests and functionalities. It automates security evaluation, allowing for testing without human intervention. The testbed reports the security problems of the tested devices and can detect all attacks made against the devices. It is also designed to monitor communications within the testbed and with connected devices, enabling the system to abort if malicious activity is detected. To demonstrate the capabilities of the proposed IoT security testbed, it is used to examine the vulnerabilities of two IoT devices: a wireless camera and a smart bulb.},
keywords = {Automated testbed architecture,Internet of Things,IoT testbed,Vulnerability assessment},
file = {/home/seb/Zotero/storage/U3D2SCU4/S0167404819301920.html}
}
@article{al-hawawrehDevelopingSecurityTestbed2021,
title = {Developing a {{Security Testbed}} for {{Industrial Internet}} of {{Things}}},
author = {Al-Hawawreh, Muna and Sitnikova, Elena},
date = {2021-04},
journaltitle = {IEEE Internet of Things Journal},
shortjournal = {IEEE Internet Things J.},
volume = {8},
number = {7},
pages = {5558--5573},
issn = {2327-4662},
doi = {10.1109/JIOT.2020.3032093},
abstract = {While achieving security for Industrial Internet of Things (IIoT) is a critical and nontrivial task, more attention is required for brownfield IIoT systems. This is a consequence of long life cycles of their legacy devices which were initially designed without considering security and IoT connectivity, but they are now becoming more connected and integrated with emerging IoT technologies and messaging communication protocols. Deploying today's methodologies and solutions in brownfield IIoT systems is not viable, as security solutions must co-exist and fit these systems' requirements. This necessitates a realistic standardized IIoT testbed that can be used as an optimal format to measure the credibility of security solutions of IIoT networks, analyze IIoT attack landscapes and extract threat intelligence. Developing a testbed for brownfield IIoT systems is considered a significant challenge as these systems are comprised of legacy, heterogeneous devices, communication layers and applications that need to be implemented holistically to achieve high fidelity. In this article, we propose a new generic end-to-end IIoT security testbed, with a particular focus on the brownfield system and provide details of the testbed's architectural design and the implementation process. The proposed testbed can be easily reproduced and reconfigured to support the testing activities of new processes and various security scenarios. The proposed testbed operation is demonstrated on different connected devices, communication protocols and applications. The experiments demonstrate that this testbed is effective in terms of its operation and security testing. A comparison with existing testbeds, including a table of features is provided.},
eventtitle = {{{IEEE Internet}} of {{Things Journal}}},
keywords = {Brownfield,ieee,Industrial Internet of Things (IIoT),iot,Protocols,Resilience,Security,security testing,Sensors,testbed,Testing},
file = {/home/seb/Zotero/storage/7JFQCP4C/Al-Hawawreh and Sitnikova - 2021 - Developing a Security Testbed for Industrial Inter.pdf;/home/seb/Zotero/storage/U9SM7UYK/9233425.html}
}
@inproceedings{alyamiwifi2022,
title = {{{WiFi-based IoT Devices Profiling Attack}} Based on {{Eavesdropping}} of {{Encrypted WiFi Traffic}}},
booktitle = {2022 {{IEEE}} 19th {{Annual Consumer Communications}} \& {{Networking Conference}} ({{CCNC}})},
author = {Alyami, Mnassar and Alharbi, Ibrahim and Zou, Cliff and Solihin, Yan and Ackerman, Karl},
date = {2022-01-08},
pages = {385--392},
publisher = {IEEE},
location = {Las Vegas, NV, USA},
doi = {10.1109/CCNC49033.2022.9700674},
abstract = {Recent research has shown that in-network observers of WiFi communication (i.e., observers who have joined the WiFi network) can obtain much information regarding the types, user identities, and activities of Internet-of-Things (IoT) devices in the network. What has not been explored is the question of how much information can be inferred by an out-ofnetwork observer who does not have access to the WiFi network. This attack scenario is more realistic and much harder to defend against, thus imposes a real threat to user privacy. In this paper, we investigate privacy leakage derived from an out-of-network traffic eavesdropper on the encrypted WiFi traffic of popular IoT devices. We instrumented a testbed of 12 popular IoT devices and evaluated multiple machine learning methods for fingerprinting and inferring what IoT devices exist in a WiFi network. By only exploiting the WiFi frame header information, we have achieved 95\% accuracy in identifying the devices and often their working status. This study demonstrates that information leakage and privacy attack is a real threat for WiFi networks and IoT applications.},
eventtitle = {2022 {{IEEE}} 19th {{Annual Consumer Communications}} \& {{Networking Conference}} ({{CCNC}})},
isbn = {978-1-66543-161-3},
langid = {english},
file = {/home/seb/Zotero/storage/7A9CFI4D/Alyami et al. - 2022 - WiFi-based IoT Devices Profiling Attack based on E.pdf}
}
@inproceedings{aysom23,
title = {Are {{You Spying}} on {{Me}}? {{Large-Scale}} {{Analysis}} on {{IoT}} {{Data Exposure}} through {{Companion Apps}}},
shorttitle = {Are {{You Spying}} on {{Me}}?},
author = {Nan, Yuhong and Wang, Xueqiang and Xing, Luyi and Liao, Xiaojing and Wu, Ruoyu and Wu, Jianliang and Zhang, Yifan and Wang, XiaoFeng},
date = {2023},
pages = {6665--6682},
url = {https://www.usenix.org/conference/usenixsecurity23/presentation/nan},
urldate = {2024-02-25},
eventtitle = {32nd {{USENIX Security Symposium}} ({{USENIX Security}} 23)},
isbn = {978-1-939133-37-3},
langid = {english},
file = {/home/seb/Zotero/storage/M5HNUNW8/Nan et al. - 2023 - Are You Spying on Me Large-Scale Analysis on I.pdf}
}
@article{bashir2017internet,
title = {The {{Internet}} of {{Things}} Testbed: A Survey and Evaluation},
author = {Bashir, Abid H and Gill, Khurram},
date = {2017},
journaltitle = {Future Generation Computer Systems},
shortjournal = {Future Gener. Comput. Syst.},
volume = {78},
pages = {409--421},
publisher = {Elsevier}
}
@online{click,
title = {Welcome to {{Click}} — {{Click Documentation}} (8.1.x)},
url = {https://click.palletsprojects.com/en/8.1.x/},
urldate = {2024-06-30},
file = {/home/seb/Zotero/storage/88MW53XH/8.1.x.html}
}
@unpublished{CommonLanguageFacilitate2023,
title = {Toward a Common Language to Facilitate Reproducible Research and Technology Transfer: Challenges and Solutions},
shorttitle = {Toward a Common Language to Facilitate Reproducible Research and Technology Transfer},
date = {2023-06-28},
doi = {10.5281/zenodo.8105339},
abstract = {The keynote presentation from the 1st ACM conference on reproducibility and replicability (ACM REP'23).The video of this presentation is available at the ACM YouTube channel.Please don't hesitate to provide your feedback via the public Discord server~from the MLCommons Task Force on Automation and Reproducibility and GitHub issues.[ GitHub project~] [ Public Collective Knowledge repository ][ Related reproducibility initiatives ] [ cTuning.org ] [ cKnowledge.org ]During the past 10 years, we have considerably improved the reproducibility of experimental results from published papers by introducing the artifact evaluation process with a unified artifact appendix and reproducibility checklists, Jupyter notebooks, containers, and Git repositories. On the other hand, our experience reproducing more than 200 papers shows that it can take weeks and months of painful and repetitive interactions between teams to reproduce artifacts. This effort includes decrypting numerous README files, examining ad-hoc artifacts and containers, and figuring out how to reproduce computational results. Furthermore, snapshot containers pose a challenge to optimize algorithms' performance, accuracy, power consumption and operational costs across diverse and rapidly evolving software, hardware, and data used in the real world.In this talk, I~explain how our practical artifact evaluation experience and the feedback from researchers and evaluators motivated us to develop a simple, intuitive, technology agnostic, and English-like scripting language called Collective Mind (CM). It helps to automatically adapt any given experiment to any software, hardware, and data while automatically generating unified README files and synthesizing modular containers with a unified API. It is being developed by MLCommons to facilitate reproducible AI/ML Systems research and minimizing manual and repetitive benchmarking and optimization efforts, reduce time and costs for reproducible research, and simplify technology transfer to production. I also present several recent use cases of how CM helps MLCommons, the Student Cluster Competition, and artifact evaluation at ACM/IEEE conferences. I conclude with our development plans, new challenges, possible solutions, and upcoming reproducibility and optimization challenges powered by the MLCommons Collective Knowledge platform and CM:~access.cKnowledge.org.},
keywords = {artifact evaluation,artificial intelligence,automation,chatgpt,cknowledge,collective knowledge,collective mind,competitions,cTuning,llm,llm automation,machine learning,mlcommons,mlperf,optimization challenges,performance,replicability,reproducibility,reusability,systems},
file = {/home/seb/Zotero/storage/AGZTALNV/Fursin - 2023 - Toward a common language to facilitate reproducibl.pdf}
}
@online{coryefelleCorrectingIoTHistory2016,
title = {Correcting the {{IoT History}}},
author = {CoryEfelle},
date = {2016-03-14T22:28:21+00:00},
url = {http://www.chetansharma.com/correcting-the-iot-history/},
urldate = {2024-06-20},
abstract = {In the last 5 years, IoT has entered the industry consciousness. There are varying forecasts calling for tremendous growth and … Continued},
langid = {american},
organization = {Chetan Sharma},
file = {/home/seb/Zotero/storage/LJX88N74/correcting-the-iot-history.html}
}
@inproceedings{dasilvaComRoad2021,
title = {A {{Community Roadmap}} for {{Scientific Workflows Research}} and {{Development}}},
booktitle = {2021 {{IEEE Workshop}} on {{Workflows}} in {{Support}} of {{Large-Scale Science}} ({{WORKS}})},
author = {family=Silva, given=Rafael Ferreira, prefix=da, useprefix=true and Casanova, Henri and Chard, Kyle and Altintas, Ilkay and Badia, Rosa M and Balis, Bartosz and Coleman, Tainã and Coppens, Frederik and Di Natale, Frank and Enders, Bjoern and Fahringer, Thomas and Filgueira, Rosa and Fursin, Grigori and Garijo, Daniel and Goble, Carole and Howell, Dorran and Jha, Shantenu and Katz, Daniel S. and Laney, Daniel and Leser, Ulf and Malawski, Maciej and Mehta, Kshitij and Pottier, Loïc and Ozik, Jonathan and Peterson, J. Luc and Ramakrishnan, Lavanya and Soiland-Reyes, Stian and Thain, Douglas and Wolf, Matthew},
date = {2021-11},
pages = {81--90},
doi = {10.1109/WORKS54523.2021.00016},
abstract = {The landscape of workflow systems for scientific applications is notoriously convoluted with hundreds of seemingly equivalent workflow systems, many isolated research claims, and a steep learning curve. To address some of these challenges and lay the groundwork for transforming workflows research and development, the WorkflowsRI and ExaWorks projects partnered to bring the international workflows community together. This paper reports on discussions and findings from two virtual “Workflows Community Summits” (January and April, 2021). The overarching goals of these workshops were to develop a view of the state of the art, identify crucial research challenges in the workflows community, articulate a vision for potential community efforts, and discuss technical approaches for realizing this vision. To this end, participants identified six broad themes: FAIR computational workflows; AI workflows; exascale challenges; APIs, interoperability, reuse, and standards; training and education; and building a workflows community. We summarize discussions and recommendations for each of these themes.},
eventtitle = {2021 {{IEEE Workshop}} on {{Workflows}} in {{Support}} of {{Large-Scale Science}} ({{WORKS}})},
keywords = {AI workflows,Artificial intelligence,Buildings,community roadmap,Conferences,data management,exascale computing,interoperability,Research and development,Scientific workflows,Stakeholders,Standards,Training},
file = {/home/seb/Zotero/storage/856IVVCZ/da Silva et al. - 2021 - A Community Roadmap for Scientific Workflows Resea.pdf;/home/seb/Zotero/storage/7QR6LPZV/authors.html}
}
@report{dasilvaworkflow2021,
title = {Workflows {{Community Summit}}: {{Bringing}} the {{Scientific Workflows Community Together}}},
shorttitle = {Workflows {{Community Summit}}},
author = {family=Silva, given=Rafael Ferreira, prefix=da, useprefix=true and Casanova, Henri and Chard, Kyle and Laney, Dan and Ahn, Dong and Jha, Shantenu and Goble, Carole and Ramakrishnan, Lavanya and Peterson, Luc and Enders, Bjoern and Thain, Douglas and Altintas, Ilkay and Babuji, Yadu and Badia, Rosa M. and Bonazzi, Vivien and Coleman, Taina and Crusoe, Michael and Deelman, Ewa and Di Natale, Frank and Di Tommaso, Paolo and Fahringer, Thomas and Filgueira, Rosa and Fursin, Grigori and Ganose, Alex and Gruning, Bjorn and Katz, Daniel S. and Kuchar, Olga and Kupresanin, Ana and Ludascher, Bertram and Maheshwari, Ketan and Mattoso, Marta and Mehta, Kshitij and Munson, Todd and Ozik, Jonathan and Peterka, Tom and Pottier, Loic and Randles, Tim and Soiland-Reyes, Stian and Tovar, Benjamin and Turilli, Matteo and Uram, Thomas and Vahi, Karan and Wilde, Michael and Wolf, Matthew and Wozniak, Justin},
date = {2021-03-16},
eprint = {2103.09181},
eprinttype = {arXiv},
eprintclass = {cs},
doi = {10.5281/zenodo.4606958},
abstract = {Scientific workflows have been used almost universally across scientific domains, and have underpinned some of the most significant discoveries of the past several decades. Many of these workflows have high computational, storage, and/or communication demands, and thus must execute on a wide range of large-scale platforms, from large clouds to upcoming exascale high-performance computing (HPC) platforms. These executions must be managed using some software infrastructure. Due to the popularity of workflows, workflow management systems (WMSs) have been developed to provide abstractions for creating and executing workflows conveniently, efficiently, and portably. While these efforts are all worthwhile, there are now hundreds of independent WMSs, many of which are moribund. As a result, the WMS landscape is segmented and presents significant barriers to entry due to the hundreds of seemingly comparable, yet incompatible, systems that exist. As a result, many teams, small and large, still elect to build their own custom workflow solution rather than adopt, or build upon, existing WMSs. This current state of the WMS landscape negatively impacts workflow users, developers, and researchers. The "Workflows Community Summit" was held online on January 13, 2021. The overarching goal of the summit was to develop a view of the state of the art and identify crucial research challenges in the workflow community. Prior to the summit, a survey sent to stakeholders in the workflow community (including both developers of WMSs and users of workflows) helped to identify key challenges in this community that were translated into 6 broad themes for the summit, each of them being the object of a focused discussion led by a volunteer member of the community. This report documents and organizes the wealth of information provided by the participants before, during, and after the summit.},
keywords = {Computer Science - Distributed Parallel and Cluster Computing},
file = {/home/seb/Zotero/storage/JWQWSRVM/da Silva et al. - 2021 - Workflows Community Summit Bringing the Scientifi.pdf;/home/seb/Zotero/storage/4DY745J9/2103.html}
}
@inproceedings{faircsartefacts2022,
title = {Toward Findable, Accessible, Interoperable, and Reusable Cybersecurity Artifacts},
booktitle = {Proceedings of the 15th Workshop on Cyber Security Experimentation and Test},
author = {Balenson, David and Benzel, Terry and Eide, Eric and Emmerich, David and Johnson, David and Mirkovic, Jelena and Tinnel, Laura},
date = {2022},
series = {Cset '22},
pages = {65--70},
publisher = {Association for Computing Machinery},
location = {New York, NY, USA},
doi = {10.1145/3546096.3546104},
abstract = {Researchers in experimental cybersecurity are increasingly sharing the code, data, and other artifacts associated with their studies. This trend is encouraged and rewarded by conferences and journals through practices such as artifact evaluation and badging. While these trends in sharing artifacts are promising, the cybersecurity community is still far from an ecosystem in which artifacts are FAIR: findable, accessible, interoperable, and reusable. The lack of established standards and best practices for sharing and reuse results in artifacts that are often difficult to find and reuse; in addition, the lack of community standards results in artifacts that may be incomplete and low-quality. In this paper we describe our experience in creating an online community hub, called SEARCCH, to promote the sharing and reuse of artifacts for cybersecurity research. Based on our experience, we offer lessons learned: issues that must be addressed to further promote FAIR principles in experimental cybersecurity.},
isbn = {978-1-4503-9684-4},
pagetotal = {6},
keywords = {artifact catalog,cybersecurity artifacts,FAIR principles,reproducibility,SEARCCH}
}
@online{FHSReferencedSpecifications,
title = {{{FHS Referenced Specifications}}},
url = {https://refspecs.linuxfoundation.org/fhs.shtml},
urldate = {2024-06-22},
file = {/home/seb/Zotero/storage/E75NBMV5/fhs.html}
}
@inproceedings{friesssniffing2018,
title = {Multichannel-{{Sniffing-System}} for {{Real-World Analysing}} of {{Wi-Fi-Packets}}},
booktitle = {2018 {{Tenth International Conference}} on {{Ubiquitous}} and {{Future Networks}} ({{ICUFN}})},
author = {Friess, Kristof},
date = {2018-07},
pages = {358--364},
issn = {2165-8536},
doi = {10.1109/ICUFN.2018.8436715},
abstract = {Wireless technologies like Wi-Fi send their data using multiple channels. To analyze an environment and all Wi-Fi packets inside, a sniffing system is needed, which can sniff on all used channels of the wireless technology at the same time. This allows catching most packets on each channel. In this paper, a way to build up a multi-channel-sniffing-system (MCSS) is described. The test system uses several single board computers (SBC) with an external Wi-Fi adapter (USB), 19 SBCs are sniffing nodes (SFN) and one SBC as sending node (SN). The sniffing SBCs are placed in a cycle around the sender so that every node has the same chance to receive the simulated packets from the SN. For the control of all 20 SBCs, a self-developed software is used, which connects from the host to the clients and is used for configuring the experiments. The configuration is sent to each client and will initiate their start, so that their times are also synchronized, for this all clients are synchronised using a time server.},
eventtitle = {2018 {{Tenth International Conference}} on {{Ubiquitous}} and {{Future Networks}} ({{ICUFN}})},
keywords = {Bluetooth,Europe,Hardware,Monitoring,multichannel,node.js,sbc,sniffing,Universal Serial Bus,wifi,Wireless communication,Wireless fidelity},
file = {/home/seb/Zotero/storage/AIPDUX7V/Friess - 2018 - Multichannel-Sniffing-System for Real-World Analys.pdf;/home/seb/Zotero/storage/E38MLQA3/8436715.html}
}
@standard{fsh-home,
title = {3.8.~/Home : {{User}} Home Directories (Optional)},
url = {https://refspecs.linuxfoundation.org/FHS_3.0/fhs/ch03s08.html},
urldate = {2024-06-22},
file = {/home/seb/Zotero/storage/PHTUTULW/ch03s08.html}
}
@article{fursinckorg2021,
title = {Collective Knowledge: Organizing Research Projects as a Database of Reusable Components and Portable Workflows with Common Interfaces},
shorttitle = {Collective Knowledge},
author = {Fursin, Grigori},
date = {2021-03-29},
journaltitle = {Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences},
shortjournal = {Philos. Trans. R. Soc. Math. Phys. Eng. Sci.},
volume = {379},
number = {2197},
pages = {20200211},
publisher = {Royal Society},
doi = {10.1098/rsta.2020.0211},
abstract = {This article provides the motivation and overview of the Collective Knowledge Framework (CK or cKnowledge). The CK concept is to decompose research projects into reusable components that encapsulate research artifacts and provide unified application programming interfaces (APIs), command-line interfaces (CLIs), meta descriptions and common automation actions for related artifacts. The CK framework is used to organize and manage research projects as a database of such components. Inspired by the USB plug and play approach for hardware, CK also helps to assemble portable workflows that can automatically plug in compatible components from different users and vendors (models, datasets, frameworks, compilers, tools). Such workflows can build and run algorithms on different platforms and environments in a unified way using the customizable CK program pipeline with software detection plugins and the automatic installation of missing packages. This article presents a number of industrial projects in which the modular CK approach was successfully validated in order to automate benchmarking, auto-tuning and co-design of efficient software and hardware for machine learning and artificial intelligence in terms of speed, accuracy, energy, size and various costs. The CK framework also helped to automate the artifact evaluation process at several computer science conferences as well as to make it easier to reproduce, compare and reuse research techniques from published papers, deploy them in production, and automatically adapt them to continuously changing datasets, models and systems. The long-term goal is to accelerate innovation by connecting researchers and practitioners to share and reuse all their knowledge, best practices, artifacts, workflows and experimental results in a common, portable and reproducible format at cKnowledge.io. This article is part of the theme issue Reliability and reproducibility in computational science: implementing verification, validation and uncertainty quantification in silico.},
keywords = {DevOps,FAIR principles,portability,reproducibility,research automation,reusability},
file = {/home/seb/Zotero/storage/6DM4S7B7/Fursin - 2021 - Collective knowledge organizing research projects.pdf}
}
@online{go-fair,
title = {{{FAIR Principles}}},
url = {https://www.go-fair.org/fair-principles/},
urldate = {2024-06-22},
abstract = {In 2016, the FAIR Guiding Principles for scientific data management and stewardship~were published in~Scientific Data. The authors intended to provide guidelines to improve the Findability, Accessibility, Interoperability, and Reuse of digital assets. The principles emphasise machine-actionability (i.e., the capacity of… Continue reading →},
langid = {american},
organization = {GO FAIR},
file = {/home/seb/Zotero/storage/MLUAT2GN/fair-principles.html}
}
@article{huang2011testbed,
title = {Testbed for Evaluating Performance of Health Monitoring Systems},
author = {Huang, Qinfen and Liu, Min and Garcia, Alfredo and Reynolds, Matthew},
date = {2011},
journaltitle = {IEEE Transactions on Instrumentation and Measurement},
shortjournal = {IEEE Trans. Instrum. Meas.},
volume = {60},
number = {1},
pages = {114--123},
publisher = {IEEE}
}
@inproceedings{infoexpiot,
title = {Information {{Exposure From Consumer IoT Devices}}: {{A Multidimensional}}, {{Network-Informed Measurement Approach}}},
shorttitle = {Information {{Exposure From Consumer IoT Devices}}},
booktitle = {Proceedings of the {{Internet Measurement Conference}}},
author = {Ren, Jingjing and Dubois, Daniel J. and Choffnes, David and Mandalari, Anna Maria and Kolcun, Roman and Haddadi, Hamed},
date = {2019-10-21},
series = {{{IMC}} '19},
pages = {267--279},
publisher = {Association for Computing Machinery},
location = {New York, NY, USA},
doi = {10.1145/3355369.3355577},
abstract = {Internet of Things (IoT) devices are increasingly found in everyday homes, providing useful functionality for devices such as TVs, smart speakers, and video doorbells. Along with their benefits come potential privacy risks, since these devices can communicate information about their users to other parties over the Internet. However, understanding these risks in depth and at scale is difficult due to heterogeneity in devices' user interfaces, protocols, and functionality. In this work, we conduct a multidimensional analysis of information exposure from 81 devices located in labs in the US and UK. Through a total of 34,586 rigorous automated and manual controlled experiments, we characterize information exposure in terms of destinations of Internet traffic, whether the contents of communication are protected by encryption, what are the IoT-device interactions that can be inferred from such content, and whether there are unexpected exposures of private and/or sensitive information (e.g., video surreptitiously transmitted by a recording device). We highlight regional differences between these results, potentially due to different privacy regulations in the US and UK. Last, we compare our controlled experiments with data gathered from an in situ user study comprising 36 participants.},
isbn = {978-1-4503-6948-0},
file = {/home/seb/Zotero/storage/YT9SKQLS/Ren et al. - 2019 - Information Exposure From Consumer IoT Devices A .pdf}
}
@incollection{iotfundamentals,
title = {{{IoT Fundamentals}}: {{Definitions}}, {{Architectures}}, {{Challenges}}, and {{Promises}}},
booktitle = {Intelligent {{Internet}} of {{Things}}: {{From Device}} to {{Fog}} and {{Cloud}}},
author = {Firouzi, Farshad and Farahani, Bahar and Weinberger, Markus and DePace, Gabriel and Aliee, Fereidoon Shams},
editor = {Firouzi, Farshad and Chakrabarty, Krishnendu and Nassif, Sani},
date = {2020},
pages = {3--50},
publisher = {Springer International Publishing},
location = {Cham},
doi = {10.1007/978-3-030-30367-9_1},
abstract = {The Internet is everywhere and touched almost every corner of the globe affecting our lives in previously unimagined ways. As a living entity, the Internet is constantly evolving, and now, an era of widespread connectivity through various smart devices (i.e., things) that connect with the Internet has begun. This paradigm change is generally referred to as the Internet of Things (IoT). Welcoming IoT will bring significant benefits to economies and businesses as it enables greater innovation and productivity. On the other hand, the rapid adoption of IoT presents new challenges regarding connectivity, security, data processing, and scalability. Because the IoT world is vast and versatile, it cannot be viewed as a single technology. IoT looks more like an umbrella covering many protocols, technologies, and concepts that depend on specific industries. In this chapter, we will seek to look at the history of IoT, more clearly define it, and review its terms and concepts. We will also review vertical IoT markets and higher-level use cases that have successfully adopted IoT solutions. We will also discuss the details of the business implications, business models, and opportunities of IoT. Finally, the complete IoT stack and reference architectures from smart objects, to the networks, to the cloud, and finally the applications where information is leveraged are explained.},
isbn = {978-3-030-30367-9}
}
@inproceedings{iothome2019,
title = {All Things Considered: {{An}} Analysis of {{IoT}} Devices on Home Networks},
booktitle = {28th {{USENIX}} Security Symposium ({{USENIX}} Security 19)},
author = {Kumar, Deepak and Shen, Kelly and Case, Benton and Garg, Deepali and Alperovich, Galina and Kuznetsov, Dmitry and Gupta, Rajarshi and Durumeric, Zakir},
date = {2019-08},
pages = {1169--1185},
publisher = {USENIX Association},
location = {Santa Clara, CA},
url = {https://www.usenix.org/conference/usenixsecurity19/presentation/kumar-deepak},
isbn = {978-1-939133-06-9}
}
@inproceedings{iotInHomes2019,
title = {All {{Things Considered}}: {{An Analysis}} of {{IoT}} {{Devices}} on {{Home Networks}}},
shorttitle = {All {{Things Considered}}},
author = {Kumar, Deepak and Shen, Kelly and Case, Benton and Garg, Deepali and Alperovich, Galina and Kuznetsov, Dmitry and Gupta, Rajarshi and Durumeric, Zakir},
date = {2019},
pages = {1169--1185},
url = {https://www.usenix.org/conference/usenixsecurity19/presentation/kumar-deepak},
urldate = {2024-06-30},
eventtitle = {28th {{USENIX Security Symposium}} ({{USENIX Security}} 19)},
isbn = {978-1-939133-06-9},
langid = {english},
keywords = {adoption,home,iot},
file = {/home/seb/Zotero/storage/73BEXVMZ/Kumar et al. - 2019 - All Things Considered An Analysis of IoT Device.pdf}
}
@article{islamiot2023,
title = {Internet of {{Things}}: {{Device Capabilities}}, {{Architectures}}, {{Protocols}}, and {{Smart Applications}} in {{Healthcare Domain}}},
shorttitle = {Internet of {{Things}}},
author = {Islam, Md. Milon and Nooruddin, Sheikh and Karray, Fakhri and Muhammad, Ghulam},
date = {2023-02},
journaltitle = {IEEE Internet of Things Journal},
shortjournal = {IEEE Internet Things J.},
volume = {10},
number = {4},
pages = {3611--3641},
issn = {2327-4662},
doi = {10.1109/JIOT.2022.3228795},
abstract = {Nowadays, the Internet has spread to practically every country around the world and is having unprecedented effects on peoples lives. The Internet of Things (IoT) is getting more popular and has a high level of interest in both practitioners and academicians in the age of wireless communication due to its diverse applications. The IoT is a technology that enables everyday things to become savvier, everyday computation toward becoming intellectual, and everyday communication to become a little more insightful. In this article, the most common and popular IoT device capabilities, architectures, and protocols are demonstrated in brief to provide a clear overview of the IoT technology to the researchers in this area. The common IoT device capabilities, including hardware (Raspberry Pi, Arduino, and ESP8266) and software (operating systems (OSs), and built-in tools) platforms are described in detail. The widely used architectures that have recently evolved and used are the three-layer architecture, service-oriented architecture, and middleware-based architecture. The popular protocols for IoT are demonstrated which include constrained application protocol, message queue telemetry transport, extensible messaging and presence protocol, advanced message queuing protocol, data distribution service, low power wireless personal area network, Bluetooth low energy, and ZigBee that are frequently utilized to develop smart IoT applications. Additionally, this research provides an in-depth overview of the potential healthcare applications based on IoT technologies in the context of addressing various healthcare concerns. Finally, this article summarizes state-of-the-art knowledge, highlights open issues and shortcomings, and provides recommendations for further studies which would be quite beneficial to anyone with a desire to work in this field and make breakthroughs to get expertise in this area.},
eventtitle = {{{IEEE Internet}} of {{Things Journal}}},
keywords = {Communication protocol,Computer architecture,device capabilities,Hardware,healthcare applications,Internet of Things,Internet of Things (IoT),IoT architecture,Medical services,Protocols,Security,Software},
file = {/home/seb/Zotero/storage/HDMX3ZVW/Islam et al. - 2023 - Internet of Things Device Capabilities, Architect.pdf;/home/seb/Zotero/storage/WDKWMKN9/references.html}
}
@online{mitmproxy,
title = {Mitmproxy - an Interactive {{HTTPS}} Proxy},
url = {https://mitmproxy.org/},
urldate = {2024-06-30},
keywords = {proxy,sniffing,tools},
file = {/home/seb/Zotero/storage/NTUXF55S/mitmproxy.org.html}
}
@standard{OverviewInternetThings2012,
type = {Recommendation},
title = {Overview of the {{Internet}} of Things},
shorttitle = {Y.{{IoT-overview}}},
date = {2012-06-15},
number = {ITU-T Y.4000},
url = {https://handle.itu.int/11.1002/1000/11559},
abstract = {Recommendation ITU-T Y.2060 provides an overview of the Internet of things (IoT). It clarifies the concept and scope of the IoT, identifies the fundamental characteristics and high-level requirements of the IoT and describes the IoT reference model. The ecosystem and business models are also provided in an informative appendix. Former ITU-T Y.2060 renumbered as ITU-T Y.4000 on 2016-02-05 without further modification and without being republished.},
pubstate = {In force}
}
@inproceedings{peekaboo2020,
title = {Peek-a-{{Boo}}: {{I}} See Your Smart Home Activities, Even Encrypted!},
shorttitle = {Peek-a-{{Boo}}},
booktitle = {Proceedings of the 13th {{ACM Conference}} on {{Security}} and {{Privacy}} in {{Wireless}} and {{Mobile Networks}}},
author = {Acar, Abbas and Fereidooni, Hossein and Abera, Tigist and Sikder, Amit Kumar and Miettinen, Markus and Aksu, Hidayet and Conti, Mauro and Sadeghi, Ahmad-Reza and Uluagac, Selcuk},
date = {2020-07-08},
eprint = {1808.02741},
eprinttype = {arXiv},
eprintclass = {cs},
pages = {207--218},
doi = {10.1145/3395351.3399421},
abstract = {A myriad of IoT devices such as bulbs, switches, speakers in a smart home environment allow users to easily control the physical world around them and facilitate their living styles through the sensors already embedded in these devices. Sensor data contains a lot of sensitive information about the user and devices. However, an attacker inside or near a smart home environment can potentially exploit the innate wireless medium used by these devices to exfiltrate sensitive information from the encrypted payload (i.e., sensor data) about the users and their activities, invading user privacy. With this in mind,in this work, we introduce a novel multi-stage privacy attack against user privacy in a smart environment. It is realized utilizing state-of-the-art machine-learning approaches for detecting and identifying the types of IoT devices, their states, and ongoing user activities in a cascading style by only passively sniffing the network traffic from smart home devices and sensors. The attack effectively works on both encrypted and unencrypted communications. We evaluate the efficiency of the attack with real measurements from an extensive set of popular off-the-shelf smart home IoT devices utilizing a set of diverse network protocols like WiFi, ZigBee, and BLE. Our results show that an adversary passively sniffing the traffic can achieve very high accuracy (above 90\%) in identifying the state and actions of targeted smart home devices and their users. To protect against this privacy leakage, we also propose a countermeasure based on generating spoofed traffic to hide the device states and demonstrate that it provides better protection than existing solutions.},
keywords = {BLE,Computer Science - Cryptography and Security,network traffic,privacy,smart-home,wifi,ZigBee},
file = {/home/seb/Zotero/storage/HKM4PAZW/Acar et al. - 2020 - Peek-a-Boo I see your smart home activities, even.pdf;/home/seb/Zotero/storage/ISVLWPED/1808.html}
}
@article{pmsSpinellis2012,
title = {Package {{Management Systems}}},
author = {Spinellis, Diomidis},
date = {2012-03},
journaltitle = {IEEE Software},
shortjournal = {IEEE Softw.},
volume = {29},
number = {2},
pages = {84--86},
issn = {1937-4194},
doi = {10.1109/MS.2012.38},
abstract = {A package management system organizes and simplifies the installation and maintenance of software by standardizing and organizing the production and consumption of software collections. As a software developer, you can benefit from package managers in two ways: through a rich and stable development environment and through friction-free reuse. Promisingly, the structure that package managers bring both to the tools we use in our development process and the libraries we reuse in our products ties nicely with the recent move emphasizing DevOps (development operations) as an integration between software development and IT operations.},
eventtitle = {{{IEEE Software}}},
keywords = {DevOps,Maintenance engineering,module dependencies,package management system,Product management,shared library,Software libraries,Software reusability,software reuse},
file = {/home/seb/Zotero/storage/DA6A82Z4/6155145.html}
}
@online{poetry,
title = {Poetry - {{Python}} Dependency Management and Packaging Made Easy},
url = {https://python-poetry.org/},
urldate = {2024-06-30},
file = {/home/seb/Zotero/storage/BYK5CXZT/python-poetry.org.html}
}
@online{pydantic,
title = {Welcome to {{Pydantic}} - {{Pydantic}}},
url = {https://docs.pydantic.dev/latest/},
urldate = {2024-07-01},
file = {/home/seb/Zotero/storage/FF8XYTKG/latest.html}
}
@online{pythonorg,
title = {Welcome to {{Python}}.Org},
date = {2024-06-27},
url = {https://www.python.org/},
urldate = {2024-06-30},
abstract = {The official home of the Python Programming Language},
langid = {english},
organization = {Python.org},
keywords = {tool},
file = {/home/seb/Zotero/storage/BKHKLAP9/www.python.org.html}
}
@online{recommendedformatrsLOC,
type = {web page},
title = {Recommended {{Formats Statement}} {{Datasets}} | {{Resources}} ({{Preservation}}, {{Library}} of {{Congress}})},
url = {https://www.loc.gov/preservation/resources/rfs/data.html},
urldate = {2024-06-23},
abstract = {Lists technical characteristics of and metadata for datasets that best support the preservation of and long-term access to these creative works. Identifies the formats the Library of Congress prefers or finds acceptable.},
langid = {english},
file = {/home/seb/Zotero/storage/G5K5R8ES/data.html}
}
@article{romanfeatures2013,
title = {On the Features and Challenges of Security and Privacy in Distributed Internet of Things},
author = {Roman, Rodrigo and Zhou, Jianying and Lopez, Javier},
date = {2013-07-05},
journaltitle = {Computer Networks},
shortjournal = {Computer Networks},
series = {Towards a {{Science}} of {{Cyber Security}}},
volume = {57},
number = {10},
pages = {2266--2279},
issn = {1389-1286},
doi = {10.1016/j.comnet.2012.12.018},
abstract = {In the Internet of Things, services can be provisioned using centralized architectures, where central entities acquire, process, and provide information. Alternatively, distributed architectures, where entities at the edge of the network exchange information and collaborate with each other in a dynamic way, can also be used. In order to understand the applicability and viability of this distributed approach, it is necessary to know its advantages and disadvantages not only in terms of features but also in terms of security and privacy challenges. The purpose of this paper is to show that the distributed approach has various challenges that need to be solved, but also various interesting properties and strengths.},
keywords = {connectivity,Distributed Architectures,Internet of Things,iot,network,Security},
file = {/home/seb/Zotero/storage/CNBJ9Q6H/S1389128613000054.html}
}
@online{rrrr2023,
title = {Repeatability, {{Reproducibility}}, {{Replicability}}, {{Reusability}} ({{4R}}) in {{Journals}}' {{Policies}} and {{Software}}/{{Data Management}} in {{Scientific Publications}}: {{A Survey}}, {{Discussion}}, and {{Perspectives}}},
shorttitle = {Repeatability, {{Reproducibility}}, {{Replicability}}, {{Reusability}} ({{4R}}) in {{Journals}}' {{Policies}} and {{Software}}/{{Data Management}} in {{Scientific Publications}}},
author = {Hernández, José Armando and Colom, Miguel},
date = {2023-12-18},
eprint = {2312.11028},
eprinttype = {arXiv},
eprintclass = {cs},
doi = {10.48550/arXiv.2312.11028},
abstract = {With the recognized crisis of credibility in scientific research, there is a growth of reproducibility studies in computer science, and although existing surveys have reviewed reproducibility from various perspectives, especially very specific technological issues, they do not address the author-publisher relationship in the publication of reproducible computational scientific articles. This aspect requires significant attention because it is the basis for reliable research. We have found a large gap between the reproducibility-oriented practices, journal policies, recommendations, publisher artifact Description/Evaluation guidelines, submission guides, technological reproducibility evolution, and its effective adoption to contribute to tackling the crisis. We conducted a narrative survey, a comprehensive overview and discussion identifying the mutual efforts required from Authors, Journals, and Technological actors to achieve reproducibility research. The relationship between authors and scientific journals in their mutual efforts to jointly improve the reproducibility of scientific results is analyzed. Eventually, we propose recommendations for the journal policies, as well as a unified and standardized Reproducibility Guide for the submission of scientific articles for authors. The main objective of this work is to analyze the implementation and experiences of reproducibility policies, techniques and technologies, standards, methodologies, software, and data management tools required for scientific reproducible publications. Also, the benefits and drawbacks of such an adoption, as well as open challenges and promising trends, to propose possible strategies and efforts to mitigate the identified gaps. To this purpose, we analyzed 200 scientific articles, surveyed 16 Computer Science journals, and systematically classified them according to reproducibility strategies, technologies, policies, code citation, and editorial business. We conclude there is still a reproducibility gap in scientific publications, although at the same time also the opportunity to reduce this gap with the joint effort of authors, publishers, and technological providers.},
pubstate = {prepublished},
keywords = {Computer Science - Software Engineering,repeatability,replicability,reproducibility,reusability},
file = {/home/seb/Zotero/storage/TD6WP27L/Hernández and Colom - 2023 - Repeatability, Reproducibility, Replicability, Reu.pdf;/home/seb/Zotero/storage/PQMREEDV/2312.html}
}
@article{sibonitestbed2019,
title = {Security {{Testbed}} for {{Internet-of-Things Devices}}},
author = {Siboni, Shachar and Sachidananda, Vinay and Meidan, Yair and Bohadana, Michael and Mathov, Yael and Bhairav, Suhas and Shabtai, Asaf and Elovici, Yuval},
date = {2019-03},
journaltitle = {IEEE Transactions on Reliability},
shortjournal = {IEEE Trans. Reliab.},
volume = {68},
number = {1},
pages = {23--44},
issn = {1558-1721},
doi = {10.1109/TR.2018.2864536},
abstract = {The Internet of Things (IoT) is a global ecosystem of information and communication technologies aimed at connecting any type of object (thing), at any time, and in any place, to each other and to the Internet. One of the major problems associated with the IoT is the heterogeneous nature of such deployments; this heterogeneity poses many challenges, particularly, in the areas of security and privacy. Specifically, security testing and analysis of IoT devices is considered a very complex task, as different security testing methodologies, including software and hardware security testing approaches, are needed. In this paper, we propose an innovative security testbed framework targeted at IoT devices. The security testbed is aimed at testing all types of IoT devices, with different software/hardware configurations, by performing standard and advanced security testing. Advanced analysis processes based on machine learning algorithms are employed in the testbed in order to monitor the overall operation of the IoT device under test. The architectural design of the proposed security testbed along with a detailed description of the testbed implementation is discussed. The testbed operation is demonstrated on different IoT devices using several specific IoT testing scenarios. The results obtained demonstrate that the testbed is effective at detecting vulnerabilities and compromised IoT devices.},
eventtitle = {{{IEEE Transactions}} on {{Reliability}}},
keywords = {Hardware,Internet of Things,Internet of Things (IoT),IoT devices,privacy,security,Security,Software,Standards,testbed framework,Testing},
file = {/home/seb/Zotero/storage/SVD5VNTV/Siboni et al. - 2019 - Security Testbed for Internet-of-Things Devices.pdf;/home/seb/Zotero/storage/VXRRDTR9/8565917.html}
}
@article{surveytestingmethods2022,
title = {Survey of {{Testing Methods}} and {{Testbed Development Concerning Internet}} of {{Things}}},
author = {Zhu, Shicheng and Yang, Shunkun and Gou, Xiaodong and Xu, Yang and Zhang, Tao and Wan, Yueliang},
date = {2022-03-01},
journaltitle = {Wireless Personal Communications},
shortjournal = {Wireless Pers Commun},
volume = {123},
number = {1},
pages = {165--194},
issn = {1572-834X},
doi = {10.1007/s11277-021-09124-5},
abstract = {The concept of Internet of Things (IoT) was designed to change everyday lives of people via multiple forms of computing and easy deployment of applications. In recent years, the increasing complexity of IoT-ready devices and processes has led to potential risks related to system reliability. Therefore, the comprehensive testing of IoT technology has attracted the attention of many researchers, which promotes the extensive development of IoT testing methods and infrastructure. However, the current research on IoT testing methods and testbeds mainly focuses on specific application scenarios, lacking systematic review and analysis of many applications from different points of view. This paper systematically summarizes the latest testing methods covering different IoT fields and discusses the development status of the existing Internet of things testbed. Findings of this review demonstrate that IoT testing is moving toward larger scale and intelligent testing, and that in near future, IoT test architecture is set to become more standardized and universally applicable with multi-technology convergence—i.e., a combination of big data, cloud computing, and artificial intelligence—being the prime focus of IoT testing.},
langid = {english},
keywords = {Internet of Things,IoT testing,Testbed,Testing method},
file = {/home/seb/Zotero/storage/ZZ6KBCP6/Zhu et al. - 2022 - Survey of Testing Methods and Testbed Development .pdf}
}
@article{tbsmartgrid2013,
title = {Cyber-{{Physical Security Testbeds}}: {{Architecture}}, {{Application}}, and {{Evaluation}} for {{Smart Grid}}},
shorttitle = {Cyber-{{Physical Security Testbeds}}},
author = {Hahn, Adam and Ashok, Aditya and Sridhar, Siddharth and Govindarasu, Manimaran},
date = {2013-06},
journaltitle = {IEEE Transactions on Smart Grid},
shortjournal = {IEEE Trans. Smart Grid},
volume = {4},
number = {2},
pages = {847--855},
issn = {1949-3061},
doi = {10.1109/TSG.2012.2226919},
abstract = {The development of a smarter electric grid will depend on increased deployments of information and communication technology (ICT) to support novel communication and control functions. Unfortunately, this additional dependency also expands the risk from cyber attacks. Designing systems with adequate cyber security depends heavily on the availability of representative environments, such as testbeds, where current issues and future ideas can be evaluated. This paper provides an overview of a smart grid security testbed, including the set of control, communication, and physical system components required to provide an accurate cyber-physical environment. It then identifies various testbed research applications and also identifies how various components support these applications. The PowerCyber testbed at Iowa State University is then introduced, including the architecture, applications, and novel capabilities, such as virtualization, Real Time Digital Simulators (RTDS), and ISEAGE WAN emulation. Finally, several attack scenarios are evaluated using the testbed to explore cyber-physical impacts. In particular, availability and integrity attacks are demonstrated with both isolated and coordinated approaches, these attacks are then evaluated based on the physical system's voltage and rotor angle stability.},
eventtitle = {{{IEEE Transactions}} on {{Smart Grid}}},
keywords = {Computer architecture,cyber security,Cyber-physical systems,ieee,iot,Protocols,Real-time systems,Security,smart grid,Smart grids,Software,Substations,testbed,testbeds},
file = {/home/seb/Zotero/storage/DHKLTKRM/6473865.html}
}
@online{tcpdump,
title = {Home | {{TCPDUMP}} \& {{LIBPCAP}}},
url = {https://www.tcpdump.org/},
urldate = {2024-06-30},
file = {/home/seb/Zotero/storage/SXMBIDLR/www.tcpdump.org.html}
}
@online{testbedOxford,
title = {Test Bed Noun - {{Definition}}, Pictures, Pronunciation and Usage Notes | {{Oxford Advanced Learner}}'s {{Dictionary}} at {{OxfordLearnersDictionaries}}.Com},
url = {https://www.oxfordlearnersdictionaries.com/definition/english/test-bed},
urldate = {2024-06-20}
}
@inproceedings{ukilEmbeddedSecurityInternet2011,
title = {Embedded Security for {{Internet}} of {{Things}}},
booktitle = {2011 2nd {{National Conference}} on {{Emerging Trends}} and {{Applications}} in {{Computer Science}}},
author = {Ukil, Arijit and Sen, Jaydip and Koilakonda, Sripad},
date = {2011-03},
pages = {1--6},
doi = {10.1109/NCETACS.2011.5751382},
abstract = {Internet of Things (IoT) consists of several tiny devices connected together to form a collaborative computing environment. IoT imposes peculiar constraints in terms of connectivity, computational power and energy budget, which make it significantly different from those contemplated by the canonical doctrine of security in distributed systems. In order to circumvent the problem of security in IoT domain, networks and devices need to be secured. In this paper, we consider the embedded device security only, assuming that network security is properly in place. It can be noticed that the existence of tiny computing devices that form ubiquity in IoT domain are very much vulnerable to different security attacks. In this work, we provide the requirements of embedded security, the solutions to resists different attacks and the technology for resisting temper proofing of the embedded devices by the concept of trusted computing. Our paper attempts to address the issue of security for data at rest. Addressing this issue is equivalent to addressing the security issue of the hardware platform. Our work also partially helps in addressing securing data in transit.},
eventtitle = {2011 2nd {{National Conference}} on {{Emerging Trends}} and {{Applications}} in {{Computer Science}}},
keywords = {ARM,Computer architecture,confidentiality,embedded device,Embedded systems,Hardware,Internet of things (IoT),Protocols,security,Security,Smart phones,Trustzone,ubiquitous computing},
file = {/home/seb/Zotero/storage/IQGX2SWB/5751382.html}
}
@thesis{vacuumpie2023,
type = {Master's Thesis},
title = {Private {{Information Exposed}} by the {{Use}} of {{Robot Vacuum Cleaner}} in {{Smart Environments}}},
author = {Ulsmåg, Benjamin Andreas},
date = {2023-01-06},
institution = {{Norwegian University of Science and Technology}},
location = {Gjøvik},
abstract = {Robot vacuum cleaners are popular IoT devices and are deployed in all kinds of smart environments. Integration with IoT systems introduce more security and privacy issues related to the operation of these devices. Vendors have developed smart phone applications where users can personalize cleaning or view information about the vacuum cleaner. This increase the integration between users life and the robot vacuum cleaner, which potentially exposes private information. Industry standards include end-to-end encryption between the application, cloud service and robot vacuum cleaner to secure the private information exchanged. Regardless of encryption, network header metadata is still available through network eavesdropping attacks. In this project we investigated the potential private information exposed by this metadata. An Irobot Roomba i7 was deployed in two different smart environments where passive network eavesdropping was conducted during smart feature triggering. Analysis revealed that it was possible to attribute different events triggered on the Irobot Roomba i7, only based on metadata in the Internet traffic capture. Different signature-based detection algorithms are proposed, with a high detection rate. Wi-Fi and Internet capturing metadata were compared and similar patterns were identified, making the detection method applicable for Wi-Fi eavesdropping as well. This thesis covers the implementation, capturing and analysis of network traffic and proposes event detection algorithms.},
langid = {english}
}
@article{vassermanVampireAttacksDraining2013,
title = {Vampire {{Attacks}}: {{Draining Life}} from {{Wireless Ad Hoc Sensor Networks}}},
shorttitle = {Vampire {{Attacks}}},
author = {Vasserman, Eugene Y. and Hopper, Nicholas},
date = {2013-02},
journaltitle = {IEEE Transactions on Mobile Computing},
shortjournal = {IEEE Trans. Mob. Comput.},
volume = {12},
number = {2},
pages = {318--332},
issn = {1558-0660},
doi = {10.1109/TMC.2011.274},
abstract = {Ad hoc low-power wireless networks are an exciting research direction in sensing and pervasive computing. Prior security work in this area has focused primarily on denial of communication at the routing or medium access control levels. This paper explores resource depletion attacks at the routing protocol layer, which permanently disable networks by quickly draining nodes' battery power. These "Vampire” attacks are not specific to any specific protocol, but rather rely on the properties of many popular classes of routing protocols. We find that all examined protocols are susceptible to Vampire attacks, which are devastating, difficult to detect, and are easy to carry out using as few as one malicious insider sending only protocol-compliant messages. In the worst case, a single Vampire can increase network-wide energy usage by a factor of O(N), where N in the number of network nodes. We discuss methods to mitigate these types of attacks, including a new proof-of-concept protocol that provably bounds the damage caused by Vampires during the packet forwarding phase.},
eventtitle = {{{IEEE Transactions}} on {{Mobile Computing}}},
keywords = {ad hoc networks,Ad hoc networks,Denial of service,Energy consumption,Network topology,routing,Routing,Routing protocols,security,sensor networks,Topology,wireless networks},
file = {/home/seb/Zotero/storage/W96J7MD8/Vasserman and Hopper - 2013 - Vampire Attacks Draining Life from Wireless Ad Ho.pdf;/home/seb/Zotero/storage/TY3DMJZZ/6112758.html}
}
@article{vaughan2005use,
title = {The Use of Climate Chambers in Biological Research},
author = {Vaughan, T. L. and Battle, S. C. and Walker, K. L.},
date = {2005},
journaltitle = {Environmental Science \& Technology},
shortjournal = {Environ. Sci. Technol.},
volume = {39},
number = {14},
pages = {5121--5127},
publisher = {ACS Publications}
}
@article{whatissmartdevice2018,
title = {What Is a Smart Device? - a Conceptualisation within the Paradigm of the Internet of Things},
author = {Silverio-Fernández, Manuel and Renukappa, Suresh and Suresh, Subashini},
date = {2018-05-09},
journaltitle = {Visualization in Engineering},
shortjournal = {Visualization in Engineering},
volume = {6},
number = {1},
pages = {3},
issn = {2213-7459},
doi = {10.1186/s40327-018-0063-8},
abstract = {The Internet of Things (IoT) is an interconnected network of objects which range from simple sensors to smartphones and tablets; it is a relatively novel paradigm that has been rapidly gaining ground in the scenario of modern wireless telecommunications with an expected growth of 25 to 50 billion of connected devices for 2020 Due to the recent rise of this paradigm, authors across the literature use inconsistent terms to address the devices present in the IoT, such as mobile device, smart device, mobile technologies or mobile smart device. Based on the existing literature, this paper chooses the term smart device as a starting point towards the development of an appropriate definition for the devices present in the IoT. This investigation aims at exploring the concept and main features of smart devices as well as their role in the IoT. This paper follows a systematic approach for reviewing compendium of literature to explore the current research in this field. It has been identified smart devices as the primary objects interconnected in the network of IoT, having an essential role in this paradigm. The developed concept for defining smart device is based on three main features, namely context-awareness, autonomy and device connectivity. Other features such as mobility and user-interaction were highly mentioned in the literature, but were not considered because of the nature of the IoT as a network mainly oriented to device-to-device connectivity whether they are mobile or not and whether they interact with people or not. What emerges from this paper is a concept which can be used to homogenise the terminology used on further research in the Field of digitalisation and smart technologies.}
}
@article{wilkinson_fair_2016,
title = {The {{FAIR Guiding Principles}} for Scientific Data Management and Stewardship},
author = {Wilkinson, Mark D. and Swertz, Morris A. and others},
date = {2016-03-15},
journaltitle = {Scientific Data},
shortjournal = {Sci Data},
volume = {3},
number = {1},
pages = {160018},
publisher = {Nature Publishing Group},
issn = {2052-4463},
doi = {10.1038/sdata.2016.18},
abstract = {There is an urgent need to improve the infrastructure supporting the reuse of scholarly data. A diverse set of stakeholders—representing academia, industry, funding agencies, and scholarly publishers—have come together to design and jointly endorse a concise and measureable set of principles that we refer to as the FAIR Data Principles. The intent is that these may act as a guideline for those wishing to enhance the reusability of their data holdings. Distinct from peer initiatives that focus on the human scholar, the FAIR Principles put specific emphasis on enhancing the ability of machines to automatically find and use the data, in addition to supporting its reuse by individuals. This Comment is the first formal publication of the FAIR Principles, and includes the rationale behind them, and some exemplar implementations in the community.},
langid = {english},
keywords = {Publication characteristics,Research data},
file = {/home/seb/Zotero/storage/LDIYYE8H/Wilkinson et al. - 2016 - The FAIR Guiding Principles for scientific data ma.pdf}
}
@online{wiresharkorg,
title = {Wireshark · {{Go Deep}}},
url = {https://www.wireshark.org/},
urldate = {2024-06-30},
file = {/home/seb/Zotero/storage/SZ3UZZG4/www.wireshark.org.html}
}
@article{zander2014survey,
title = {A Survey of Testbeds and Experimental Research Infrastructures for Wireless Networks},
author = {Zander, Justus and Zinner, Thomas and Bifulco, Roberto and Carle, Georg},
date = {2014},
journaltitle = {IEEE Communications Surveys \& Tutorials},
shortjournal = {IEEE Commun. Surv. Tutor.},
volume = {15},
number = {4},
pages = {1231--1246},
publisher = {IEEE},
keywords = {iot,springer,survey,testbed}
}

9 binary image files added (previews not shown); sizes: 284 KiB, 289 KiB, 18 KiB, 22 KiB, 54 KiB, 51 KiB, 96 KiB, 20 KiB, 106 KiB.

View File

@ -0,0 +1,334 @@
// University theme
// Originally contributed by Pol Dellaiera - https://github.com/drupol
#import "@preview/touying:0.4.2": *
#let slide(
self: none,
title: auto,
subtitle: auto,
header: auto,
footer: auto,
display-current-section: auto,
..args,
) = {
if title != auto {
self.uni-title = title
}
if subtitle != auto {
self.uni-subtitle = subtitle
}
if header != auto {
self.uni-header = header
}
if footer != auto {
self.uni-footer = footer
}
if display-current-section != auto {
self.uni-display-current-section = display-current-section
}
(self.methods.touying-slide)(
..args.named(),
self: self,
title: title,
setting: body => {
show: args.named().at("setting", default: body => body)
body
},
..args.pos(),
)
}
#let title-slide(self: none, ..args) = {
self = utils.empty-page(self)
let info = self.info + args.named()
info.authors = {
let authors = if "authors" in info { info.authors } else { info.author }
if type(authors) == array { authors } else { (authors,) }
}
let content = {
if info.logo != none {
align(center, pad(1em, info.logo))
}
align(center + horizon, {
block(
inset: 0em,
breakable: false,
{
text(size: 2em, fill: self.colors.primary, strong(info.title))
if info.subtitle != none {
parbreak()
text(size: 1.2em, fill: self.colors.primary, info.subtitle)
}
}
)
set text(size: .8em)
grid(
columns: (1fr,) * calc.min(info.authors.len(), 3),
column-gutter: 1em,
row-gutter: 1em,
..info.authors.map(author => text(fill: black, author))
)
v(1em)
if info.institution != none {
parbreak()
text(size: .9em, info.institution)
}
if info.date != none {
parbreak()
text(size: .8em, utils.info-date(self))
}
})
}
(self.methods.touying-slide)(self: self, repeat: none, content)
}
#let matrix-slide(self: none, columns: none, rows: none, ..bodies) = {
self = utils.empty-page(self)
(self.methods.touying-slide)(self: self, composer: (..bodies) => {
let bodies = bodies.pos()
let columns = if type(columns) == int {
(1fr,) * columns
} else if columns == none {
(1fr,) * bodies.len()
} else {
columns
}
let num-cols = columns.len()
let rows = if type(rows) == int {
(1fr,) * rows
} else if rows == none {
let quotient = calc.quo(bodies.len(), num-cols)
let correction = if calc.rem(bodies.len(), num-cols) == 0 { 0 } else { 1 }
(1fr,) * (quotient + correction)
} else {
rows
}
let num-rows = rows.len()
if num-rows * num-cols < bodies.len() {
panic("number of rows (" + str(num-rows) + ") * number of columns (" + str(num-cols) + ") must at least be number of content arguments (" + str(bodies.len()) + ")")
}
let cart-idx(i) = (calc.quo(i, num-cols), calc.rem(i, num-cols))
let color-body(idx-body) = {
let (idx, body) = idx-body
let (row, col) = cart-idx(idx)
let color = if calc.even(row + col) { white } else { silver }
set align(center + horizon)
rect(inset: .5em, width: 100%, height: 100%, fill: color, body)
}
let content = grid(
columns: columns, rows: rows,
gutter: 0pt,
..bodies.enumerate().map(color-body)
)
content
}, ..bodies)
}
#let focus-slide(self: none, background-color: none, background-img: none, body) = {
let background-color = if background-img == none and background-color == none {
rgb(self.colors.primary)
} else {
background-color
}
self = utils.empty-page(self)
self.page-args += (
fill: self.colors.primary-dark,
margin: 1em,
..(if background-color != none { (fill: background-color) }),
..(if background-img != none { (background: {
set image(fit: "stretch", width: 100%, height: 100%)
background-img
})
}),
)
set text(fill: white, weight: "bold", size: 2em)
(self.methods.touying-slide)(self: self, repeat: none, align(horizon, body))
}
#let outline-slide(self: none, ..args) = {
(self.methods.slide)(self: self, heading(level: 2, self.outline-title) + parbreak() + (self.methods.touying-outline)(self: self, cover: false))
}
#let new-section-slide(self: none, short-title: auto, title) = {
self = utils.empty-page(self)
let content(self) = {
set align(horizon)
show: pad.with(20%)
set text(size: 1.5em, fill: self.colors.primary, weight: "bold")
states.current-section-with-numbering(self)
v(-.5em)
block(height: 2pt, width: 100%, spacing: 0pt, utils.call-or-display(self, self.uni-progress-bar))
}
(self.methods.touying-slide)(self: self, repeat: none, section: (title: title, short-title: short-title), content)
}
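// NOTE: the definition below shadows the one above, so section slides render the
// outline (other sections dimmed via d-cover) instead of a standalone section-title page.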
#let new-section-slide(self: none, short-title: auto, title) = {
(self.methods.slide)(self: self, section: (title: title, short-title: short-title), heading(level: 2, self.outline-title) + parbreak() + (self.methods.touying-outline)(self: self))
}
#let d-outline(self: none, enum-args: (:), list-args: (:), cover: true) = states.touying-progress-with-sections(dict => {
let (current-sections, final-sections) = dict
current-sections = current-sections.filter(section => section.loc != none)
final-sections = final-sections.filter(section => section.loc != none)
let current-index = current-sections.len() - 1
let d-cover(i, body) = if i != current-index and cover {
(self.methods.d-cover)(self: self, body)
} else {
body
}
set enum(..enum-args)
set list(..list-args)
set text(fill: self.colors.primary)
for (i, section) in final-sections.enumerate() {
d-cover(i, {
enum.item(i + 1, [#link(section.loc, section.title)<touying-link>] + if section.children.filter(it => it.kind != "slide").len() > 0 {
let subsections = section.children.filter(it => it.kind != "slide")
set text(fill: self.colors.tertiary, size: 0.9em)
list(
..subsections.map(subsection => [#link(subsection.loc, subsection.title)<touying-link>])
)
})
})
parbreak()
}
})
#let slides(self: none, title-slide: true, slide-level: 1, ..args) = {
if title-slide {
(self.methods.title-slide)(self: self)
}
(self.methods.touying-slides)(self: self, slide-level: slide-level, ..args)
}
#let register(
self: themes.default.register(),
aspect-ratio: "16-9",
progress-bar: true,
display-current-section: true,
footer-columns: (25%, 1fr, 25%),
footer-a: self => self.info.author,
footer-b: self => if self.info.short-title == auto { self.info.title } else { self.info.short-title },
footer-c: self => {
h(1fr)
utils.info-date(self)
h(1fr)
states.slide-counter.display() + " / " + states.last-slide-number
h(1fr)
},
..args,
) = {
// color theme
self = (self.methods.colors)(
self: self,
primary: rgb("#2D373C"),
secondary: rgb("#A5D7D2"),
tertiary: rgb("#46505A"),
minthell: rgb("#D2EBE9"),
unibasred: rgb("#D20537")
)
// save the variables for later use
self.outline-title = [Outline]
self.uni-enable-progress-bar = progress-bar
self.uni-progress-bar = self => states.touying-progress(ratio => {
grid(
columns: (ratio * 100%, 1fr),
rows: 5pt,
components.cell(fill: self.colors.primary),
components.cell(fill: self.colors.secondary)
)
})
self.uni-display-current-section = display-current-section
self.uni-title = none
self.uni-subtitle = none
self.uni-footer = self => {
let cell(fill: none, it) = rect(
width: 100%, height: 100%, inset: 1mm, outset: 0mm, fill: fill, stroke: none,
align(horizon, text(fill: white, it))
)
show: block.with(width: 100%, height: auto, fill: self.colors.secondary)
grid(
columns: footer-columns,
rows: (1.5em, auto),
cell(fill: self.colors.primary, utils.call-or-display(self, footer-a)),
cell(fill: self.colors.secondary, utils.call-or-display(self, footer-b)),
cell(fill: self.colors.tertiary, utils.call-or-display(self, footer-c)),
)
}
self.uni-header = self => {
if self.uni-title != none {
block(inset: (x: .5em),
grid(
columns: 1,
gutter: .1em,
grid(
columns: (2fr, 2fr, 2fr),
align: (left+horizon, center+horizon, right+horizon),
align(horizon + left, text(fill: self.colors.primary, weight: "bold", size: 1.1em, self.uni-title)),
image(height: 1em,"logo-en.svg"),
if self.uni-display-current-section {
align(horizon + right, text(fill: self.colors.primary.lighten(30%), states.current-section-with-numbering(self)))
}
),
text(fill: self.colors.secondary, size: .8em, self.uni-subtitle)
)
)
}
}
// set page
let header(self) = {
set align(top)
grid(
fill: self.colors.minthell,
rows: (auto, auto),
row-gutter:.0mm,
if self.uni-enable-progress-bar {
utils.call-or-display(self, self.uni-progress-bar)
},
utils.call-or-display(self, self.uni-header)
)
}
let footer(self) = {
set text(size: .5em)
set align(center + bottom)
utils.call-or-display(self, self.uni-footer)
}
self.page-args += (
paper: "presentation-" + aspect-ratio,
header: header,
footer: footer,
header-ascent: 0em,
footer-descent: 0em,
margin: (top: 2.5em, bottom: 1.25em, x: 2em),
)
// register methods
self.methods.slide = slide
self.methods.title-slide = title-slide
self.methods.new-section-slide = new-section-slide
self.methods.touying-new-section-slide = new-section-slide
self.methods.focus-slide = focus-slide
self.methods.matrix-slide = matrix-slide
self.methods.slides = slides
self.methods.outline-slide = outline-slide
self.methods.touying-outline = (self: none, enum-args: (:), ..args) => {
states.touying-outline(self: self, enum-args: (tight: false,) + enum-args, ..args)
}
self.methods.touying-outline = d-outline
self.methods.d-outline = d-outline
self.methods.d-cover = (self: none, body) => {
utils.cover-with-rect(fill: utils.update-alpha(
constructor: rgb, self.page-args.fill, 50%), body)
}
//self.methods.alert = (self: none, it) => text(fill: self.colors.unibasred, it)
self.methods.init = (self: none, body) => {
set text(size: 20pt)
set heading(outlined: true)
show footnote.entry: set text(size: .6em)
body
}
self
}

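For orientation, a minimal usage sketch of the theme defined above. This is untested and assumes the file is saved as `unibas-theme.typ` next to the `logo-en.svg` referenced in its header; it follows the generic Touying 0.4.x register/info workflow, and the filename and all info values are placeholders rather than anything confirmed by this repository.

```typst
// Hypothetical usage of the theme above (assumed filename: unibas-theme.typ).
// Generic Touying 0.4.x pattern; every info value below is a placeholder.
#import "@preview/touying:0.4.2": *
#import "unibas-theme.typ": register

// Register the theme and fill in presentation metadata.
#let s = register(aspect-ratio: "16-9")
#let s = (s.methods.info)(
  self: s,
  title: [IoT Testbed],
  subtitle: [Bachelor Thesis Presentation],
  author: [Sebastian Lenzlinger],
  date: datetime.today(),
  institution: [University of Basel],
  logo: none, // the title slide reads info.logo, so set it explicitly
)
#let (init, slides) = utils.methods(s)
#show: init
#show: slides

= Introduction

== First Slide
Hello, Touying!
```

Only the `register` call and the two `#show` rules are theme-specific; everything after `#show: slides` is ordinary Touying content, with `=` headings starting new sections and `==` headings starting new slides.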
View File

@ -0,0 +1,577 @@
@article{abuwaragaTestbed2020,
title = {Design and Implementation of Automated {{IoT}} Security Testbed},
author = {Abu Waraga, Omnia and Bettayeb, Meriem and Nasir, Qassim and Abu Talib, Manar},
date = {2020-01-01},
journaltitle = {Computers \& Security},
shortjournal = {Computers \& Security},
volume = {88},
pages = {101648},
issn = {0167-4048},
doi = {10.1016/j.cose.2019.101648},
abstract = {The emergence of technology associated with the Internet of Things (IoT) is reshaping our lives, while simultaneously raising many issues due to their low level of security, which attackers can exploit for malicious purposes. This research paper conducts a comprehensive analysis of previous studies on IoT device security with a focus on the various tools used to test IoT devices and the vulnerabilities that were found. Additionally, the paper contains a survey of IoT-based security testbeds in the research literature. In this research study, we introduce an open source platform for identifying weaknesses in IoT networks and communications. The platform is easily modifiable and extendible to enable the addition of new security assessment tests and functionalities. It automates security evaluation, allowing for testing without human intervention. The testbed reports the security problems of the tested devices and can detect all attacks made against the devices. It is also designed to monitor communications within the testbed and with connected devices, enabling the system to abort if malicious activity is detected. To demonstrate the capabilities of the proposed IoT security testbed, it is used to examine the vulnerabilities of two IoT devices: a wireless camera and a smart bulb.},
keywords = {Automated testbed architecture,Internet of Things,IoT testbed,Vulnerability assessment},
file = {/home/seb/Zotero/storage/U3D2SCU4/S0167404819301920.html}
}
@article{al-hawawrehDevelopingSecurityTestbed2021,
title = {Developing a {{Security Testbed}} for {{Industrial Internet}} of {{Things}}},
author = {Al-Hawawreh, Muna and Sitnikova, Elena},
date = {2021-04},
journaltitle = {IEEE Internet of Things Journal},
shortjournal = {IEEE Internet Things J.},
volume = {8},
number = {7},
pages = {5558--5573},
issn = {2327-4662},
doi = {10.1109/JIOT.2020.3032093},
abstract = {While achieving security for Industrial Internet of Things (IIoT) is a critical and nontrivial task, more attention is required for brownfield IIoT systems. This is a consequence of long life cycles of their legacy devices which were initially designed without considering security and IoT connectivity, but they are now becoming more connected and integrated with emerging IoT technologies and messaging communication protocols. Deploying today's methodologies and solutions in brownfield IIoT systems is not viable, as security solutions must co-exist and fit these systems' requirements. This necessitates a realistic standardized IIoT testbed that can be used as an optimal format to measure the credibility of security solutions of IIoT networks, analyze IIoT attack landscapes and extract threat intelligence. Developing a testbed for brownfield IIoT systems is considered a significant challenge as these systems are comprised of legacy, heterogeneous devices, communication layers and applications that need to be implemented holistically to achieve high fidelity. In this article, we propose a new generic end-to-end IIoT security testbed, with a particular focus on the brownfield system and provide details of the testbed's architectural design and the implementation process. The proposed testbed can be easily reproduced and reconfigured to support the testing activities of new processes and various security scenarios. The proposed testbed operation is demonstrated on different connected devices, communication protocols and applications. The experiments demonstrate that this testbed is effective in terms of its operation and security testing. A comparison with existing testbeds, including a table of features is provided.},
eventtitle = {{{IEEE Internet}} of {{Things Journal}}},
keywords = {Brownfield,ieee,Industrial Internet of Things (IIoT),iot,Protocols,Resilience,Security,security testing,Sensors,testbed,Testing},
file = {/home/seb/Zotero/storage/7JFQCP4C/Al-Hawawreh and Sitnikova - 2021 - Developing a Security Testbed for Industrial Inter.pdf;/home/seb/Zotero/storage/U9SM7UYK/9233425.html}
}
@inproceedings{alyamiwifi2022,
title = {{{WiFi-based IoT Devices Profiling Attack}} Based on {{Eavesdropping}} of {{Encrypted WiFi Traffic}}},
booktitle = {2022 {{IEEE}} 19th {{Annual Consumer Communications}} \& {{Networking Conference}} ({{CCNC}})},
author = {Alyami, Mnassar and Alharbi, Ibrahim and Zou, Cliff and Solihin, Yan and Ackerman, Karl},
date = {2022-01-08},
pages = {385--392},
publisher = {IEEE},
location = {Las Vegas, NV, USA},
doi = {10.1109/CCNC49033.2022.9700674},
abstract = {Recent research has shown that in-network observers of WiFi communication (i.e., observers who have joined the WiFi network) can obtain much information regarding the types, user identities, and activities of Internet-of-Things (IoT) devices in the network. What has not been explored is the question of how much information can be inferred by an out-ofnetwork observer who does not have access to the WiFi network. This attack scenario is more realistic and much harder to defend against, thus imposes a real threat to user privacy. In this paper, we investigate privacy leakage derived from an out-of-network traffic eavesdropper on the encrypted WiFi traffic of popular IoT devices. We instrumented a testbed of 12 popular IoT devices and evaluated multiple machine learning methods for fingerprinting and inferring what IoT devices exist in a WiFi network. By only exploiting the WiFi frame header information, we have achieved 95\% accuracy in identifying the devices and often their working status. This study demonstrates that information leakage and privacy attack is a real threat for WiFi networks and IoT applications.},
eventtitle = {2022 {{IEEE}} 19th {{Annual Consumer Communications}} \& {{Networking Conference}} ({{CCNC}})},
isbn = {978-1-66543-161-3},
langid = {english},
file = {/home/seb/Zotero/storage/7A9CFI4D/Alyami et al. - 2022 - WiFi-based IoT Devices Profiling Attack based on E.pdf}
}
@inproceedings{aysom23,
title = {Are {{You Spying}} on {{Me}}? {{Large-Scale}} {{Analysis}} on {{IoT}} {{Data Exposure}} through {{Companion Apps}}},
shorttitle = {Are {{You Spying}} on {{Me}}?},
author = {Nan, Yuhong and Wang, Xueqiang and Xing, Luyi and Liao, Xiaojing and Wu, Ruoyu and Wu, Jianliang and Zhang, Yifan and Wang, XiaoFeng},
date = {2023},
pages = {6665--6682},
url = {https://www.usenix.org/conference/usenixsecurity23/presentation/nan},
urldate = {2024-02-25},
eventtitle = {32nd {{USENIX Security Symposium}} ({{USENIX Security}} 23)},
isbn = {978-1-939133-37-3},
langid = {english},
file = {/home/seb/Zotero/storage/M5HNUNW8/Nan et al. - 2023 - Are You Spying on Me Large-Scale Analysis on I.pdf}
}
@article{bashir2017internet,
title = {The {{Internet}} of {{Things}} Testbed: A Survey and Evaluation},
author = {Bashir, Abid H and Gill, Khurram},
date = {2017},
journaltitle = {Future Generation Computer Systems},
shortjournal = {Future Gener. Comput. Syst.},
volume = {78},
pages = {409--421},
publisher = {Elsevier}
}
@online{click,
title = {Welcome to {{Click}} — {{Click Documentation}} (8.1.x)},
url = {https://click.palletsprojects.com/en/8.1.x/},
urldate = {2024-06-30},
file = {/home/seb/Zotero/storage/88MW53XH/8.1.x.html}
}
@unpublished{CommonLanguageFacilitate2023,
title = {Toward a Common Language to Facilitate Reproducible Research and Technology Transfer: Challenges and Solutions},
shorttitle = {Toward a Common Language to Facilitate Reproducible Research and Technology Transfer},
author = {Fursin, Grigori},
date = {2023-06-28},
doi = {10.5281/zenodo.8105339},
abstract = {The keynote presentation from the 1st ACM conference on reproducibility and replicability (ACM REP'23).The video of this presentation is available at the ACM YouTube channel.Please don't hesitate to provide your feedback via the public Discord server~from the MLCommons Task Force on Automation and Reproducibility and GitHub issues.[ GitHub project~] [ Public Collective Knowledge repository ][ Related reproducibility initiatives ] [ cTuning.org ] [ cKnowledge.org ]During the past 10 years, we have considerably improved the reproducibility of experimental results from published papers by introducing the artifact evaluation process with a unified artifact appendix and reproducibility checklists, Jupyter notebooks, containers, and Git repositories. On the other hand, our experience reproducing more than 200 papers shows that it can take weeks and months of painful and repetitive interactions between teams to reproduce artifacts. This effort includes decrypting numerous README files, examining ad-hoc artifacts and containers, and figuring out how to reproduce computational results. Furthermore, snapshot containers pose a challenge to optimize algorithms' performance, accuracy, power consumption and operational costs across diverse and rapidly evolving software, hardware, and data used in the real world.In this talk, I~explain how our practical artifact evaluation experience and the feedback from researchers and evaluators motivated us to develop a simple, intuitive, technology agnostic, and English-like scripting language called Collective Mind (CM). It helps to automatically adapt any given experiment to any software, hardware, and data while automatically generating unified README files and synthesizing modular containers with a unified API. It is being developed by MLCommons to facilitate reproducible AI/ML Systems research and minimizing manual and repetitive benchmarking and optimization efforts, reduce time and costs for reproducible research, and simplify technology transfer to production. I also present several recent use cases of how CM helps MLCommons, the Student Cluster Competition, and artifact evaluation at ACM/IEEE conferences. I conclude with our development plans, new challenges, possible solutions, and upcoming reproducibility and optimization challenges powered by the MLCommons Collective Knowledge platform and CM:~access.cKnowledge.org.},
keywords = {artifact evaluation,artificial intelligence,automation,chatgpt,cknowledge,collective knowledge,collective mind,competitions,cTuning,llm,llm automation,machine learning,mlcommons,mlperf,optimization challenges,performance,replicability,reproducibility,reusability,systems},
file = {/home/seb/Zotero/storage/AGZTALNV/Fursin - 2023 - Toward a common language to facilitate reproducibl.pdf}
}
@online{coryefelleCorrectingIoTHistory2016,
title = {Correcting the {{IoT History}}},
author = {CoryEfelle},
date = {2016-03-14T22:28:21+00:00},
url = {http://www.chetansharma.com/correcting-the-iot-history/},
urldate = {2024-06-20},
abstract = {In the last 5 years, IoT has entered the industry consciousness. There are varying forecasts calling for tremendous growth and … Continued},
langid = {american},
organization = {Chetan Sharma},
file = {/home/seb/Zotero/storage/LJX88N74/correcting-the-iot-history.html}
}
@inproceedings{dasilvaComRoad2021,
title = {A {{Community Roadmap}} for {{Scientific Workflows Research}} and {{Development}}},
booktitle = {2021 {{IEEE Workshop}} on {{Workflows}} in {{Support}} of {{Large-Scale Science}} ({{WORKS}})},
author = {family=Silva, given=Rafael Ferreira, prefix=da, useprefix=true and Casanova, Henri and Chard, Kyle and Altintas, Ilkay and Badia, Rosa M and Balis, Bartosz and Coleman, Tainã and Coppens, Frederik and Di Natale, Frank and Enders, Bjoern and Fahringer, Thomas and Filgueira, Rosa and Fursin, Grigori and Garijo, Daniel and Goble, Carole and Howell, Dorran and Jha, Shantenu and Katz, Daniel S. and Laney, Daniel and Leser, Ulf and Malawski, Maciej and Mehta, Kshitij and Pottier, Loïc and Ozik, Jonathan and Peterson, J. Luc and Ramakrishnan, Lavanya and Soiland-Reyes, Stian and Thain, Douglas and Wolf, Matthew},
date = {2021-11},
pages = {81--90},
doi = {10.1109/WORKS54523.2021.00016},
abstract = {The landscape of workflow systems for scientific applications is notoriously convoluted with hundreds of seemingly equivalent workflow systems, many isolated research claims, and a steep learning curve. To address some of these challenges and lay the groundwork for transforming workflows research and development, the WorkflowsRI and ExaWorks projects partnered to bring the international workflows community together. This paper reports on discussions and findings from two virtual “Workflows Community Summits” (January and April, 2021). The overarching goals of these workshops were to develop a view of the state of the art, identify crucial research challenges in the workflows community, articulate a vision for potential community efforts, and discuss technical approaches for realizing this vision. To this end, participants identified six broad themes: FAIR computational workflows; AI workflows; exascale challenges; APIs, interoperability, reuse, and standards; training and education; and building a workflows community. We summarize discussions and recommendations for each of these themes.},
eventtitle = {2021 {{IEEE Workshop}} on {{Workflows}} in {{Support}} of {{Large-Scale Science}} ({{WORKS}})},
keywords = {AI workflows,Artificial intelligence,Buildings,community roadmap,Conferences,data management,exascale computing,interoperability,Research and development,Scientific workflows,Stakeholders,Standards,Training},
file = {/home/seb/Zotero/storage/856IVVCZ/da Silva et al. - 2021 - A Community Roadmap for Scientific Workflows Resea.pdf;/home/seb/Zotero/storage/7QR6LPZV/authors.html}
}
@report{dasilvaworkflow2021,
title = {Workflows {{Community Summit}}: {{Bringing}} the {{Scientific Workflows Community Together}}},
shorttitle = {Workflows {{Community Summit}}},
author = {family=Silva, given=Rafael Ferreira, prefix=da, useprefix=true and Casanova, Henri and Chard, Kyle and Laney, Dan and Ahn, Dong and Jha, Shantenu and Goble, Carole and Ramakrishnan, Lavanya and Peterson, Luc and Enders, Bjoern and Thain, Douglas and Altintas, Ilkay and Babuji, Yadu and Badia, Rosa M. and Bonazzi, Vivien and Coleman, Taina and Crusoe, Michael and Deelman, Ewa and Di Natale, Frank and Di Tommaso, Paolo and Fahringer, Thomas and Filgueira, Rosa and Fursin, Grigori and Ganose, Alex and Gruning, Bjorn and Katz, Daniel S. and Kuchar, Olga and Kupresanin, Ana and Ludascher, Bertram and Maheshwari, Ketan and Mattoso, Marta and Mehta, Kshitij and Munson, Todd and Ozik, Jonathan and Peterka, Tom and Pottier, Loic and Randles, Tim and Soiland-Reyes, Stian and Tovar, Benjamin and Turilli, Matteo and Uram, Thomas and Vahi, Karan and Wilde, Michael and Wolf, Matthew and Wozniak, Justin},
date = {2021-03-16},
eprint = {2103.09181},
eprinttype = {arXiv},
eprintclass = {cs},
doi = {10.5281/zenodo.4606958},
abstract = {Scientific workflows have been used almost universally across scientific domains, and have underpinned some of the most significant discoveries of the past several decades. Many of these workflows have high computational, storage, and/or communication demands, and thus must execute on a wide range of large-scale platforms, from large clouds to upcoming exascale high-performance computing (HPC) platforms. These executions must be managed using some software infrastructure. Due to the popularity of workflows, workflow management systems (WMSs) have been developed to provide abstractions for creating and executing workflows conveniently, efficiently, and portably. While these efforts are all worthwhile, there are now hundreds of independent WMSs, many of which are moribund. As a result, the WMS landscape is segmented and presents significant barriers to entry due to the hundreds of seemingly comparable, yet incompatible, systems that exist. As a result, many teams, small and large, still elect to build their own custom workflow solution rather than adopt, or build upon, existing WMSs. This current state of the WMS landscape negatively impacts workflow users, developers, and researchers. The "Workflows Community Summit" was held online on January 13, 2021. The overarching goal of the summit was to develop a view of the state of the art and identify crucial research challenges in the workflow community. Prior to the summit, a survey sent to stakeholders in the workflow community (including both developers of WMSs and users of workflows) helped to identify key challenges in this community that were translated into 6 broad themes for the summit, each of them being the object of a focused discussion led by a volunteer member of the community. This report documents and organizes the wealth of information provided by the participants before, during, and after the summit.},
keywords = {Computer Science - Distributed Parallel and Cluster Computing},
file = {/home/seb/Zotero/storage/JWQWSRVM/da Silva et al. - 2021 - Workflows Community Summit Bringing the Scientifi.pdf;/home/seb/Zotero/storage/4DY745J9/2103.html}
}
@inproceedings{faircsartefacts2022,
title = {Toward Findable, Accessible, Interoperable, and Reusable Cybersecurity Artifacts},
booktitle = {Proceedings of the 15th Workshop on Cyber Security Experimentation and Test},
author = {Balenson, David and Benzel, Terry and Eide, Eric and Emmerich, David and Johnson, David and Mirkovic, Jelena and Tinnel, Laura},
date = {2022},
series = {CSET '22},
pages = {65--70},
publisher = {Association for Computing Machinery},
location = {New York, NY, USA},
doi = {10.1145/3546096.3546104},
abstract = {Researchers in experimental cybersecurity are increasingly sharing the code, data, and other artifacts associated with their studies. This trend is encouraged and rewarded by conferences and journals through practices such as artifact evaluation and badging. While these trends in sharing artifacts are promising, the cybersecurity community is still far from an ecosystem in which artifacts are FAIR: findable, accessible, interoperable, and reusable. The lack of established standards and best practices for sharing and reuse results in artifacts that are often difficult to find and reuse; in addition, the lack of community standards results in artifacts that may be incomplete and low-quality. In this paper we describe our experience in creating an online community hub, called SEARCCH, to promote the sharing and reuse of artifacts for cybersecurity research. Based on our experience, we offer lessons learned: issues that must be addressed to further promote FAIR principles in experimental cybersecurity.},
isbn = {978-1-4503-9684-4},
pagetotal = {6},
keywords = {artifact catalog,cybersecurity artifacts,FAIR principles,reproducibility,SEARCCH}
}
@online{FHSReferencedSpecifications,
title = {{{FHS Referenced Specifications}}},
url = {https://refspecs.linuxfoundation.org/fhs.shtml},
urldate = {2024-06-22},
file = {/home/seb/Zotero/storage/E75NBMV5/fhs.html}
}
@inproceedings{friesssniffing2018,
title = {Multichannel-{{Sniffing-System}} for {{Real-World Analysing}} of {{Wi-Fi-Packets}}},
booktitle = {2018 {{Tenth International Conference}} on {{Ubiquitous}} and {{Future Networks}} ({{ICUFN}})},
author = {Friess, Kristof},
date = {2018-07},
pages = {358--364},
issn = {2165-8536},
doi = {10.1109/ICUFN.2018.8436715},
abstract = {Wireless technologies like Wi-Fi send their data using multiple channels. To analyze an environment and all Wi-Fi packets inside, a sniffing system is needed, which can sniff on all used channels of the wireless technology at the same time. This allows catching most packets on each channel. In this paper, a way to build up a multi-channel-sniffing-system (MCSS) is described. The test system uses several single board computers (SBC) with an external Wi-Fi adapter (USB), 19 SBCs are sniffing nodes (SFN) and one SBC as sending node (SN). The sniffing SBCs are placed in a cycle around the sender so that every node has the same chance to receive the simulated packets from the SN. For the control of all 20 SBCs, a self-developed software is used, which connects from the host to the clients and is used for configuring the experiments. The configuration is sent to each client and will initiate their start, so that their times are also synchronized, for this all clients are synchronised using a time server.},
eventtitle = {2018 {{Tenth International Conference}} on {{Ubiquitous}} and {{Future Networks}} ({{ICUFN}})},
keywords = {Bluetooth,Europe,Hardware,Monitoring,multichannel,node.js,sbc,sniffing,Universal Serial Bus,wifi,Wireless communication,Wireless fidelity},
file = {/home/seb/Zotero/storage/AIPDUX7V/Friess - 2018 - Multichannel-Sniffing-System for Real-World Analys.pdf;/home/seb/Zotero/storage/E38MLQA3/8436715.html}
}
@standard{fsh-home,
title = {3.8.~/Home : {{User}} Home Directories (Optional)},
url = {https://refspecs.linuxfoundation.org/FHS_3.0/fhs/ch03s08.html},
urldate = {2024-06-22},
file = {/home/seb/Zotero/storage/PHTUTULW/ch03s08.html}
}
@article{fursinckorg2021,
title = {Collective Knowledge: Organizing Research Projects as a Database of Reusable Components and Portable Workflows with Common Interfaces},
shorttitle = {Collective Knowledge},
author = {Fursin, Grigori},
date = {2021-03-29},
journaltitle = {Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences},
shortjournal = {Philos. Trans. R. Soc. Math. Phys. Eng. Sci.},
volume = {379},
number = {2197},
pages = {20200211},
publisher = {Royal Society},
doi = {10.1098/rsta.2020.0211},
abstract = {This article provides the motivation and overview of the Collective Knowledge Framework (CK or cKnowledge). The CK concept is to decompose research projects into reusable components that encapsulate research artifacts and provide unified application programming interfaces (APIs), command-line interfaces (CLIs), meta descriptions and common automation actions for related artifacts. The CK framework is used to organize and manage research projects as a database of such components. Inspired by the USB plug and play approach for hardware, CK also helps to assemble portable workflows that can automatically plug in compatible components from different users and vendors (models, datasets, frameworks, compilers, tools). Such workflows can build and run algorithms on different platforms and environments in a unified way using the customizable CK program pipeline with software detection plugins and the automatic installation of missing packages. This article presents a number of industrial projects in which the modular CK approach was successfully validated in order to automate benchmarking, auto-tuning and co-design of efficient software and hardware for machine learning and artificial intelligence in terms of speed, accuracy, energy, size and various costs. The CK framework also helped to automate the artifact evaluation process at several computer science conferences as well as to make it easier to reproduce, compare and reuse research techniques from published papers, deploy them in production, and automatically adapt them to continuously changing datasets, models and systems. The long-term goal is to accelerate innovation by connecting researchers and practitioners to share and reuse all their knowledge, best practices, artifacts, workflows and experimental results in a common, portable and reproducible format at cKnowledge.io. This article is part of the theme issue Reliability and reproducibility in computational science: implementing verification, validation and uncertainty quantification in silico.},
keywords = {DevOps,FAIR principles,portability,reproducibility,research automation,reusability},
file = {/home/seb/Zotero/storage/6DM4S7B7/Fursin - 2021 - Collective knowledge organizing research projects.pdf}
}
@online{go-fair,
title = {{{FAIR Principles}}},
url = {https://www.go-fair.org/fair-principles/},
urldate = {2024-06-22},
abstract = {In 2016, the FAIR Guiding Principles for scientific data management and stewardship~were published in~Scientific Data. The authors intended to provide guidelines to improve the Findability, Accessibility, Interoperability, and Reuse of digital assets. The principles emphasise machine-actionability (i.e., the capacity of… Continue reading →},
langid = {american},
organization = {GO FAIR},
file = {/home/seb/Zotero/storage/MLUAT2GN/fair-principles.html}
}
@article{huang2011testbed,
title = {Testbed for Evaluating Performance of Health Monitoring Systems},
author = {Huang, Qinfen and Liu, Min and Garcia, Alfredo and Reynolds, Matthew},
date = {2011},
journaltitle = {IEEE Transactions on Instrumentation and Measurement},
shortjournal = {IEEE Trans. Instrum. Meas.},
volume = {60},
number = {1},
pages = {114--123},
publisher = {IEEE}
}
@inproceedings{infoexpiot,
title = {Information {{Exposure From Consumer IoT Devices}}: {{A Multidimensional}}, {{Network-Informed Measurement Approach}}},
shorttitle = {Information {{Exposure From Consumer IoT Devices}}},
booktitle = {Proceedings of the {{Internet Measurement Conference}}},
author = {Ren, Jingjing and Dubois, Daniel J. and Choffnes, David and Mandalari, Anna Maria and Kolcun, Roman and Haddadi, Hamed},
date = {2019-10-21},
series = {{{IMC}} '19},
pages = {267--279},
publisher = {Association for Computing Machinery},
location = {New York, NY, USA},
doi = {10.1145/3355369.3355577},
abstract = {Internet of Things (IoT) devices are increasingly found in everyday homes, providing useful functionality for devices such as TVs, smart speakers, and video doorbells. Along with their benefits come potential privacy risks, since these devices can communicate information about their users to other parties over the Internet. However, understanding these risks in depth and at scale is difficult due to heterogeneity in devices' user interfaces, protocols, and functionality. In this work, we conduct a multidimensional analysis of information exposure from 81 devices located in labs in the US and UK. Through a total of 34,586 rigorous automated and manual controlled experiments, we characterize information exposure in terms of destinations of Internet traffic, whether the contents of communication are protected by encryption, what are the IoT-device interactions that can be inferred from such content, and whether there are unexpected exposures of private and/or sensitive information (e.g., video surreptitiously transmitted by a recording device). We highlight regional differences between these results, potentially due to different privacy regulations in the US and UK. Last, we compare our controlled experiments with data gathered from an in situ user study comprising 36 participants.},
isbn = {978-1-4503-6948-0},
file = {/home/seb/Zotero/storage/YT9SKQLS/Ren et al. - 2019 - Information Exposure From Consumer IoT Devices A .pdf}
}
@incollection{iotfundamentals,
title = {{{IoT Fundamentals}}: {{Definitions}}, {{Architectures}}, {{Challenges}}, and {{Promises}}},
booktitle = {Intelligent {{Internet}} of {{Things}}: {{From Device}} to {{Fog}} and {{Cloud}}},
author = {Firouzi, Farshad and Farahani, Bahar and Weinberger, Markus and DePace, Gabriel and Aliee, Fereidoon Shams},
editor = {Firouzi, Farshad and Chakrabarty, Krishnendu and Nassif, Sani},
date = {2020},
pages = {3--50},
publisher = {Springer International Publishing},
location = {Cham},
doi = {10.1007/978-3-030-30367-9_1},
abstract = {The Internet is everywhere and touched almost every corner of the globe affecting our lives in previously unimagined ways. As a living entity, the Internet is constantly evolving, and now, an era of widespread connectivity through various smart devices (i.e., things) that connect with the Internet has begun. This paradigm change is generally referred to as the Internet of Things (IoT). Welcoming IoT will bring significant benefits to economies and businesses as it enables greater innovation and productivity. On the other hand, the rapid adoption of IoT presents new challenges regarding connectivity, security, data processing, and scalability. Because the IoT world is vast and versatile, it cannot be viewed as a single technology. IoT looks more like an umbrella covering many protocols, technologies, and concepts that depend on specific industries. In this chapter, we will seek to look at the history of IoT, more clearly define it, and review its terms and concepts. We will also review vertical IoT markets and higher-level use cases that have successfully adopted IoT solutions. We will also discuss the details of the business implications, business models, and opportunities of IoT. Finally, the complete IoT stack and reference architectures from smart objects, to the networks, to the cloud, and finally the applications where information is leveraged are explained.},
isbn = {978-3-030-30367-9}
}
@inproceedings{iothome2019,
title = {All Things Considered: {{An}} Analysis of {{IoT}} Devices on Home Networks},
booktitle = {28th {{USENIX}} Security Symposium ({{USENIX}} Security 19)},
author = {Kumar, Deepak and Shen, Kelly and Case, Benton and Garg, Deepali and Alperovich, Galina and Kuznetsov, Dmitry and Gupta, Rajarshi and Durumeric, Zakir},
date = {2019-08},
pages = {1169--1185},
publisher = {USENIX Association},
location = {Santa Clara, CA},
url = {https://www.usenix.org/conference/usenixsecurity19/presentation/kumar-deepak},
isbn = {978-1-939133-06-9}
}
@inproceedings{iotInHomes2019,
title = {All {{Things Considered}}: {{An Analysis}} of {{IoT}} {{Devices}} on {{Home Networks}}},
shorttitle = {All {{Things Considered}}},
author = {Kumar, Deepak and Shen, Kelly and Case, Benton and Garg, Deepali and Alperovich, Galina and Kuznetsov, Dmitry and Gupta, Rajarshi and Durumeric, Zakir},
date = {2019},
pages = {1169--1185},
url = {https://www.usenix.org/conference/usenixsecurity19/presentation/kumar-deepak},
urldate = {2024-06-30},
eventtitle = {28th {{USENIX Security Symposium}} ({{USENIX Security}} 19)},
isbn = {978-1-939133-06-9},
langid = {english},
keywords = {adoption,home,iot},
file = {/home/seb/Zotero/storage/73BEXVMZ/Kumar et al. - 2019 - All Things Considered An Analysis of IoT Device.pdf}
}
@article{islamiot2023,
title = {Internet of {{Things}}: {{Device Capabilities}}, {{Architectures}}, {{Protocols}}, and {{Smart Applications}} in {{Healthcare Domain}}},
shorttitle = {Internet of {{Things}}},
author = {Islam, Md. Milon and Nooruddin, Sheikh and Karray, Fakhri and Muhammad, Ghulam},
date = {2023-02},
journaltitle = {IEEE Internet of Things Journal},
shortjournal = {IEEE Internet Things J.},
volume = {10},
number = {4},
pages = {3611--3641},
issn = {2327-4662},
doi = {10.1109/JIOT.2022.3228795},
abstract = {Nowadays, the Internet has spread to practically every country around the world and is having unprecedented effects on peoples lives. The Internet of Things (IoT) is getting more popular and has a high level of interest in both practitioners and academicians in the age of wireless communication due to its diverse applications. The IoT is a technology that enables everyday things to become savvier, everyday computation toward becoming intellectual, and everyday communication to become a little more insightful. In this article, the most common and popular IoT device capabilities, architectures, and protocols are demonstrated in brief to provide a clear overview of the IoT technology to the researchers in this area. The common IoT device capabilities, including hardware (Raspberry Pi, Arduino, and ESP8266) and software (operating systems (OSs), and built-in tools) platforms are described in detail. The widely used architectures that have recently evolved and used are the three-layer architecture, service-oriented architecture, and middleware-based architecture. The popular protocols for IoT are demonstrated which include constrained application protocol, message queue telemetry transport, extensible messaging and presence protocol, advanced message queuing protocol, data distribution service, low power wireless personal area network, Bluetooth low energy, and ZigBee that are frequently utilized to develop smart IoT applications. Additionally, this research provides an in-depth overview of the potential healthcare applications based on IoT technologies in the context of addressing various healthcare concerns. Finally, this article summarizes state-of-the-art knowledge, highlights open issues and shortcomings, and provides recommendations for further studies which would be quite beneficial to anyone with a desire to work in this field and make breakthroughs to get expertise in this area.},
eventtitle = {{{IEEE Internet}} of {{Things Journal}}},
keywords = {Communication protocol,Computer architecture,device capabilities,Hardware,healthcare applications,Internet of Things,Internet of Things (IoT),IoT architecture,Medical services,Protocols,Security,Software},
file = {/home/seb/Zotero/storage/HDMX3ZVW/Islam et al. - 2023 - Internet of Things Device Capabilities, Architect.pdf;/home/seb/Zotero/storage/WDKWMKN9/references.html}
}
@online{mitmproxy,
title = {Mitmproxy - an Interactive {{HTTPS}} Proxy},
url = {https://mitmproxy.org/},
urldate = {2024-06-30},
keywords = {proxy,sniffing,tools},
file = {/home/seb/Zotero/storage/NTUXF55S/mitmproxy.org.html}
}
@standard{OverviewInternetThings2012,
type = {Recommendation},
title = {Overview of the {{Internet}} of Things},
shorttitle = {Y.{{IoT-overview}}},
date = {2012-06-15},
number = {ITU-T Y.4000},
url = {https://handle.itu.int/11.1002/1000/11559},
abstract = {Recommendation ITU-T Y.2060 provides an overview of the Internet of things (IoT). It clarifies the concept and scope of the IoT, identifies the fundamental characteristics and high-level requirements of the IoT and describes the IoT reference model. The ecosystem and business models are also provided in an informative appendix. Former ITU-T Y.2060 renumbered as ITU-T Y.4000 on 2016-02-05 without further modification and without being republished.},
pubstate = {In force}
}
@inproceedings{peekaboo2020,
title = {Peek-a-{{Boo}}: {{I}} See Your Smart Home Activities, Even Encrypted!},
shorttitle = {Peek-a-{{Boo}}},
booktitle = {Proceedings of the 13th {{ACM Conference}} on {{Security}} and {{Privacy}} in {{Wireless}} and {{Mobile Networks}}},
author = {Acar, Abbas and Fereidooni, Hossein and Abera, Tigist and Sikder, Amit Kumar and Miettinen, Markus and Aksu, Hidayet and Conti, Mauro and Sadeghi, Ahmad-Reza and Uluagac, Selcuk},
date = {2020-07-08},
eprint = {1808.02741},
eprinttype = {arXiv},
eprintclass = {cs},
pages = {207--218},
doi = {10.1145/3395351.3399421},
abstract = {A myriad of IoT devices such as bulbs, switches, speakers in a smart home environment allow users to easily control the physical world around them and facilitate their living styles through the sensors already embedded in these devices. Sensor data contains a lot of sensitive information about the user and devices. However, an attacker inside or near a smart home environment can potentially exploit the innate wireless medium used by these devices to exfiltrate sensitive information from the encrypted payload (i.e., sensor data) about the users and their activities, invading user privacy. With this in mind,in this work, we introduce a novel multi-stage privacy attack against user privacy in a smart environment. It is realized utilizing state-of-the-art machine-learning approaches for detecting and identifying the types of IoT devices, their states, and ongoing user activities in a cascading style by only passively sniffing the network traffic from smart home devices and sensors. The attack effectively works on both encrypted and unencrypted communications. We evaluate the efficiency of the attack with real measurements from an extensive set of popular off-the-shelf smart home IoT devices utilizing a set of diverse network protocols like WiFi, ZigBee, and BLE. Our results show that an adversary passively sniffing the traffic can achieve very high accuracy (above 90\%) in identifying the state and actions of targeted smart home devices and their users. To protect against this privacy leakage, we also propose a countermeasure based on generating spoofed traffic to hide the device states and demonstrate that it provides better protection than existing solutions.},
keywords = {BLE,Computer Science - Cryptography and Security,network traffic,privacy,smart-home,wifi,ZigBee},
file = {/home/seb/Zotero/storage/HKM4PAZW/Acar et al. - 2020 - Peek-a-Boo I see your smart home activities, even.pdf;/home/seb/Zotero/storage/ISVLWPED/1808.html}
}
@article{pmsSpinellis2012,
title = {Package {{Management Systems}}},
author = {Spinellis, Diomidis},
date = {2012-03},
journaltitle = {IEEE Software},
shortjournal = {IEEE Softw.},
volume = {29},
number = {2},
pages = {84--86},
issn = {1937-4194},
doi = {10.1109/MS.2012.38},
abstract = {A package management system organizes and simplifies the installation and maintenance of software by standardizing and organizing the production and consumption of software collections. As a software developer, you can benefit from package managers in two ways: through a rich and stable development environment and through friction-free reuse. Promisingly, the structure that package managers bring both to the tools we use in our development process and the libraries we reuse in our products ties nicely with the recent move emphasizing DevOps (development operations) as an integration between software development and IT operations.},
eventtitle = {{{IEEE Software}}},
keywords = {DevOps,Maintenance engineering,module dependencies,package management system,Product management,shared library,Software libraries,Software reusability,software reuse},
file = {/home/seb/Zotero/storage/DA6A82Z4/6155145.html}
}
@online{poetry,
title = {Poetry - {{Python}} Dependency Management and Packaging Made Easy},
url = {https://python-poetry.org/},
urldate = {2024-06-30},
file = {/home/seb/Zotero/storage/BYK5CXZT/python-poetry.org.html}
}
@online{pydantic,
title = {Welcome to {{Pydantic}} - {{Pydantic}}},
url = {https://docs.pydantic.dev/latest/},
urldate = {2024-07-01},
file = {/home/seb/Zotero/storage/FF8XYTKG/latest.html}
}
@online{pythonorg,
title = {Welcome to {{Python}}.Org},
date = {2024-06-27},
url = {https://www.python.org/},
urldate = {2024-06-30},
abstract = {The official home of the Python Programming Language},
langid = {english},
organization = {Python.org},
keywords = {tool},
file = {/home/seb/Zotero/storage/BKHKLAP9/www.python.org.html}
}
@online{recommendedformatrsLOC,
type = {web page},
title = {Recommended {{Formats Statement}} {{Datasets}} | {{Resources}} ({{Preservation}}, {{Library}} of {{Congress}})},
url = {https://www.loc.gov/preservation/resources/rfs/data.html},
urldate = {2024-06-23},
abstract = {Lists technical characteristics of and metadata for datasets that best support the preservation of and long-term access to these creative works. Identifies the formats the Library of Congress prefers or finds acceptable.},
langid = {english},
file = {/home/seb/Zotero/storage/G5K5R8ES/data.html}
}
@article{romanfeatures2013,
title = {On the Features and Challenges of Security and Privacy in Distributed Internet of Things},
author = {Roman, Rodrigo and Zhou, Jianying and Lopez, Javier},
date = {2013-07-05},
journaltitle = {Computer Networks},
shortjournal = {Computer Networks},
series = {Towards a {{Science}} of {{Cyber Security}}},
volume = {57},
number = {10},
pages = {2266--2279},
issn = {1389-1286},
doi = {10.1016/j.comnet.2012.12.018},
abstract = {In the Internet of Things, services can be provisioned using centralized architectures, where central entities acquire, process, and provide information. Alternatively, distributed architectures, where entities at the edge of the network exchange information and collaborate with each other in a dynamic way, can also be used. In order to understand the applicability and viability of this distributed approach, it is necessary to know its advantages and disadvantages not only in terms of features but also in terms of security and privacy challenges. The purpose of this paper is to show that the distributed approach has various challenges that need to be solved, but also various interesting properties and strengths.},
keywords = {connectivity,Distributed Architectures,Internet of Things,iot,network,Security},
file = {/home/seb/Zotero/storage/CNBJ9Q6H/S1389128613000054.html}
}
@online{rrrr2023,
title = {Repeatability, {{Reproducibility}}, {{Replicability}}, {{Reusability}} ({{4R}}) in {{Journals}}' {{Policies}} and {{Software}}/{{Data Management}} in {{Scientific Publications}}: {{A Survey}}, {{Discussion}}, and {{Perspectives}}},
shorttitle = {Repeatability, {{Reproducibility}}, {{Replicability}}, {{Reusability}} ({{4R}}) in {{Journals}}' {{Policies}} and {{Software}}/{{Data Management}} in {{Scientific Publications}}},
author = {Hernández, José Armando and Colom, Miguel},
date = {2023-12-18},
eprint = {2312.11028},
eprinttype = {arXiv},
eprintclass = {cs},
doi = {10.48550/arXiv.2312.11028},
abstract = {With the recognized crisis of credibility in scientific research, there is a growth of reproducibility studies in computer science, and although existing surveys have reviewed reproducibility from various perspectives, especially very specific technological issues, they do not address the author-publisher relationship in the publication of reproducible computational scientific articles. This aspect requires significant attention because it is the basis for reliable research. We have found a large gap between the reproducibility-oriented practices, journal policies, recommendations, publisher artifact Description/Evaluation guidelines, submission guides, technological reproducibility evolution, and its effective adoption to contribute to tackling the crisis. We conducted a narrative survey, a comprehensive overview and discussion identifying the mutual efforts required from Authors, Journals, and Technological actors to achieve reproducibility research. The relationship between authors and scientific journals in their mutual efforts to jointly improve the reproducibility of scientific results is analyzed. Eventually, we propose recommendations for the journal policies, as well as a unified and standardized Reproducibility Guide for the submission of scientific articles for authors. The main objective of this work is to analyze the implementation and experiences of reproducibility policies, techniques and technologies, standards, methodologies, software, and data management tools required for scientific reproducible publications. Also, the benefits and drawbacks of such an adoption, as well as open challenges and promising trends, to propose possible strategies and efforts to mitigate the identified gaps. To this purpose, we analyzed 200 scientific articles, surveyed 16 Computer Science journals, and systematically classified them according to reproducibility strategies, technologies, policies, code citation, and editorial business. We conclude there is still a reproducibility gap in scientific publications, although at the same time also the opportunity to reduce this gap with the joint effort of authors, publishers, and technological providers.},
pubstate = {prepublished},
keywords = {Computer Science - Software Engineering,repeatability,replicability,reproducibility,reusability},
file = {/home/seb/Zotero/storage/TD6WP27L/Hernández and Colom - 2023 - Repeatability, Reproducibility, Replicability, Reu.pdf;/home/seb/Zotero/storage/PQMREEDV/2312.html}
}
@article{sibonitestbed2019,
title = {Security {{Testbed}} for {{Internet-of-Things Devices}}},
author = {Siboni, Shachar and Sachidananda, Vinay and Meidan, Yair and Bohadana, Michael and Mathov, Yael and Bhairav, Suhas and Shabtai, Asaf and Elovici, Yuval},
date = {2019-03},
journaltitle = {IEEE Transactions on Reliability},
shortjournal = {IEEE Trans. Reliab.},
volume = {68},
number = {1},
pages = {23--44},
issn = {1558-1721},
doi = {10.1109/TR.2018.2864536},
abstract = {The Internet of Things (IoT) is a global ecosystem of information and communication technologies aimed at connecting any type of object (thing), at any time, and in any place, to each other and to the Internet. One of the major problems associated with the IoT is the heterogeneous nature of such deployments; this heterogeneity poses many challenges, particularly, in the areas of security and privacy. Specifically, security testing and analysis of IoT devices is considered a very complex task, as different security testing methodologies, including software and hardware security testing approaches, are needed. In this paper, we propose an innovative security testbed framework targeted at IoT devices. The security testbed is aimed at testing all types of IoT devices, with different software/hardware configurations, by performing standard and advanced security testing. Advanced analysis processes based on machine learning algorithms are employed in the testbed in order to monitor the overall operation of the IoT device under test. The architectural design of the proposed security testbed along with a detailed description of the testbed implementation is discussed. The testbed operation is demonstrated on different IoT devices using several specific IoT testing scenarios. The results obtained demonstrate that the testbed is effective at detecting vulnerabilities and compromised IoT devices.},
eventtitle = {{{IEEE Transactions}} on {{Reliability}}},
keywords = {Hardware,Internet of Things,Internet of Things (IoT),IoT devices,privacy,security,Security,Software,Standards,testbed framework,Testing},
file = {/home/seb/Zotero/storage/SVD5VNTV/Siboni et al. - 2019 - Security Testbed for Internet-of-Things Devices.pdf;/home/seb/Zotero/storage/VXRRDTR9/8565917.html}
}
@article{surveytestingmethods2022,
title = {Survey of {{Testing Methods}} and {{Testbed Development Concerning Internet}} of {{Things}}},
author = {Zhu, Shicheng and Yang, Shunkun and Gou, Xiaodong and Xu, Yang and Zhang, Tao and Wan, Yueliang},
date = {2022-03-01},
journaltitle = {Wireless Personal Communications},
shortjournal = {Wireless Pers Commun},
volume = {123},
number = {1},
pages = {165--194},
issn = {1572-834X},
doi = {10.1007/s11277-021-09124-5},
abstract = {The concept of Internet of Things (IoT) was designed to change everyday lives of people via multiple forms of computing and easy deployment of applications. In recent years, the increasing complexity of IoT-ready devices and processes has led to potential risks related to system reliability. Therefore, the comprehensive testing of IoT technology has attracted the attention of many researchers, which promotes the extensive development of IoT testing methods and infrastructure. However, the current research on IoT testing methods and testbeds mainly focuses on specific application scenarios, lacking systematic review and analysis of many applications from different points of view. This paper systematically summarizes the latest testing methods covering different IoT fields and discusses the development status of the existing Internet of things testbed. Findings of this review demonstrate that IoT testing is moving toward larger scale and intelligent testing, and that in near future, IoT test architecture is set to become more standardized and universally applicable with multi-technology convergence—i.e., a combination of big data, cloud computing, and artificial intelligence—being the prime focus of IoT testing.},
langid = {english},
keywords = {Internet of Things,IoT testing,Testbed,Testing method},
file = {/home/seb/Zotero/storage/ZZ6KBCP6/Zhu et al. - 2022 - Survey of Testing Methods and Testbed Development .pdf}
}
@article{tbsmartgrid2013,
title = {Cyber-{{Physical Security Testbeds}}: {{Architecture}}, {{Application}}, and {{Evaluation}} for {{Smart Grid}}},
shorttitle = {Cyber-{{Physical Security Testbeds}}},
author = {Hahn, Adam and Ashok, Aditya and Sridhar, Siddharth and Govindarasu, Manimaran},
date = {2013-06},
journaltitle = {IEEE Transactions on Smart Grid},
shortjournal = {IEEE Trans. Smart Grid},
volume = {4},
number = {2},
pages = {847--855},
issn = {1949-3061},
doi = {10.1109/TSG.2012.2226919},
abstract = {The development of a smarter electric grid will depend on increased deployments of information and communication technology (ICT) to support novel communication and control functions. Unfortunately, this additional dependency also expands the risk from cyber attacks. Designing systems with adequate cyber security depends heavily on the availability of representative environments, such as testbeds, where current issues and future ideas can be evaluated. This paper provides an overview of a smart grid security testbed, including the set of control, communication, and physical system components required to provide an accurate cyber-physical environment. It then identifies various testbed research applications and also identifies how various components support these applications. The PowerCyber testbed at Iowa State University is then introduced, including the architecture, applications, and novel capabilities, such as virtualization, Real Time Digital Simulators (RTDS), and ISEAGE WAN emulation. Finally, several attack scenarios are evaluated using the testbed to explore cyber-physical impacts. In particular, availability and integrity attacks are demonstrated with both isolated and coordinated approaches, these attacks are then evaluated based on the physical system's voltage and rotor angle stability.},
eventtitle = {{{IEEE Transactions}} on {{Smart Grid}}},
keywords = {Computer architecture,cyber security,Cyber-physical systems,ieee,iot,Protocols,Real-time systems,Security,smart grid,Smart grids,Software,Substations,testbed,testbeds},
file = {/home/seb/Zotero/storage/DHKLTKRM/6473865.html}
}
@online{tcpdump,
title = {Home | {{TCPDUMP}} \& {{LIBPCAP}}},
url = {https://www.tcpdump.org/},
urldate = {2024-06-30},
file = {/home/seb/Zotero/storage/SXMBIDLR/www.tcpdump.org.html}
}
@online{testbedOxford,
title = {Test Bed Noun - {{Definition}}, Pictures, Pronunciation and Usage Notes | {{Oxford Advanced Learner}}'s {{Dictionary}} at {{OxfordLearnersDictionaries}}.Com},
url = {https://www.oxfordlearnersdictionaries.com/definition/english/test-bed},
urldate = {2024-06-20}
}
@inproceedings{ukilEmbeddedSecurityInternet2011,
title = {Embedded Security for {{Internet}} of {{Things}}},
booktitle = {2011 2nd {{National Conference}} on {{Emerging Trends}} and {{Applications}} in {{Computer Science}}},
author = {Ukil, Arijit and Sen, Jaydip and Koilakonda, Sripad},
date = {2011-03},
pages = {1--6},
doi = {10.1109/NCETACS.2011.5751382},
abstract = {Internet of Things (IoT) consists of several tiny devices connected together to form a collaborative computing environment. IoT imposes peculiar constraints in terms of connectivity, computational power and energy budget, which make it significantly different from those contemplated by the canonical doctrine of security in distributed systems. In order to circumvent the problem of security in IoT domain, networks and devices need to be secured. In this paper, we consider the embedded device security only, assuming that network security is properly in place. It can be noticed that the existence of tiny computing devices that form ubiquity in IoT domain are very much vulnerable to different security attacks. In this work, we provide the requirements of embedded security, the solutions to resists different attacks and the technology for resisting temper proofing of the embedded devices by the concept of trusted computing. Our paper attempts to address the issue of security for data at rest. Addressing this issue is equivalent to addressing the security issue of the hardware platform. Our work also partially helps in addressing securing data in transit.},
eventtitle = {2011 2nd {{National Conference}} on {{Emerging Trends}} and {{Applications}} in {{Computer Science}}},
keywords = {ARM,Computer architecture,confidentiality,embedded device,Embedded systems,Hardware,Internet of things (IoT),Protocols,security,Security,Smart phones,Trustzone,ubiquitous computing},
file = {/home/seb/Zotero/storage/IQGX2SWB/5751382.html}
}
@thesis{vacuumpie2023,
type = {Master Thesis},
title = {Private {{Information Exposed}} by the {{Use}} of {{Robot Vacuum Cleaner}} in {{Smart Environments}}},
author = {Ulsmåg, Benjamin Andreas},
date = {2023-01-06},
institution = {{Norwegian University of Science and Technology}},
location = {Gjøvik},
abstract = {Robot vacuum cleaners are popular IoT devices and are deployed in all kinds of smart environments. Integration with IoT systems introduce more security and privacy issues related to the operation of these devices. Vendors have developed smart phone applications where users can personalize cleaning or view informa- tion about the vacuum cleaner. This increase the integration between users life and the robot vacuum cleaner, which potentially exposes private information. In- dustry standards include end-to-end encryption between the application, cloud service and robot vacuum cleaner to secure the private information exchanged. Regardless of encryption, network header metadata is still available through net- work eavesdropping attacks. In this project we investigated the potential private information exposed by this metadata. An Irobot Roomba i7 was deployed in two different smart environments where passive network eavesdropping was conduc- ted during smart feature triggering. Analysis revealed that it was possible to attrib- ute different events triggered on the Irobot Roomba i7, only based on metadata in the Internet traffic capture. Different signature-based detection algorithms are proposed, with a high detection rate. Wi-Fi and Internet capturing metadata were compared and similar patterns were identified, making the detection method ap- plicable for Wi-Fi eavesdropping as well. This thesis covers the implementation, capturing and analysis of network traffic and proposes event detection algorithms.},
langid = {english}
}
@article{vassermanVampireAttacksDraining2013,
title = {Vampire {{Attacks}}: {{Draining Life}} from {{Wireless Ad Hoc Sensor Networks}}},
shorttitle = {Vampire {{Attacks}}},
author = {Vasserman, Eugene Y. and Hopper, Nicholas},
date = {2013-02},
journaltitle = {IEEE Transactions on Mobile Computing},
shortjournal = {IEEE Trans. Mob. Comput.},
volume = {12},
number = {2},
pages = {318--332},
issn = {1558-0660},
doi = {10.1109/TMC.2011.274},
abstract = {Ad hoc low-power wireless networks are an exciting research direction in sensing and pervasive computing. Prior security work in this area has focused primarily on denial of communication at the routing or medium access control levels. This paper explores resource depletion attacks at the routing protocol layer, which permanently disable networks by quickly draining nodes' battery power. These "Vampire” attacks are not specific to any specific protocol, but rather rely on the properties of many popular classes of routing protocols. We find that all examined protocols are susceptible to Vampire attacks, which are devastating, difficult to detect, and are easy to carry out using as few as one malicious insider sending only protocol-compliant messages. In the worst case, a single Vampire can increase network-wide energy usage by a factor of O(N), where N in the number of network nodes. We discuss methods to mitigate these types of attacks, including a new proof-of-concept protocol that provably bounds the damage caused by Vampires during the packet forwarding phase.},
eventtitle = {{{IEEE Transactions}} on {{Mobile Computing}}},
keywords = {ad hoc networks,Ad hoc networks,Denial of service,Energy consumption,Network topology,routing,Routing,Routing protocols,security,sensor networks,Topology,wireless networks},
file = {/home/seb/Zotero/storage/W96J7MD8/Vasserman and Hopper - 2013 - Vampire Attacks Draining Life from Wireless Ad Ho.pdf;/home/seb/Zotero/storage/TY3DMJZZ/6112758.html}
}
@article{vaughan2005use,
title = {The Use of Climate Chambers in Biological Research},
author = {family=Vaughan, given=TL, given-i=TL and family=Battle, given=SC, given-i=SC and family=Walker, given=KL, given-i=KL},
date = {2005},
journaltitle = {Environmental Science \& Technology},
shortjournal = {Environ. Sci. Technol.},
volume = {39},
number = {14},
pages = {5121--5127},
publisher = {ACS Publications}
}
@article{whatissmartdevice2018,
title = {What Is a Smart Device? - a Conceptualisation within the Paradigm of the Internet of Things},
author = {Silverio-Fernández, Manuel and Renukappa, Suresh and Suresh, Subashini},
date = {2018-05-09},
journaltitle = {Visualization in Engineering},
shortjournal = {Visualization in Engineering},
volume = {6},
number = {1},
pages = {3},
issn = {2213-7459},
doi = {10.1186/s40327-018-0063-8},
abstract = {The Internet of Things (IoT) is an interconnected network of objects which range from simple sensors to smartphones and tablets; it is a relatively novel paradigm that has been rapidly gaining ground in the scenario of modern wireless telecommunications with an expected growth of 25 to 50 billion of connected devices for 2020 Due to the recent rise of this paradigm, authors across the literature use inconsistent terms to address the devices present in the IoT, such as mobile device, smart device, mobile technologies or mobile smart device. Based on the existing literature, this paper chooses the term smart device as a starting point towards the development of an appropriate definition for the devices present in the IoT. This investigation aims at exploring the concept and main features of smart devices as well as their role in the IoT. This paper follows a systematic approach for reviewing compendium of literature to explore the current research in this field. It has been identified smart devices as the primary objects interconnected in the network of IoT, having an essential role in this paradigm. The developed concept for defining smart device is based on three main features, namely context-awareness, autonomy and device connectivity. Other features such as mobility and user-interaction were highly mentioned in the literature, but were not considered because of the nature of the IoT as a network mainly oriented to device-to-device connectivity whether they are mobile or not and whether they interact with people or not. What emerges from this paper is a concept which can be used to homogenise the terminology used on further research in the Field of digitalisation and smart technologies.}
}
@article{wilkinson_fair_2016,
title = {The {{FAIR Guiding Principles}} for Scientific Data Management and Stewardship},
author = {Wilkinson, Mark D. and Swertz, Morris A. and others},
date = {2016-03-15},
journaltitle = {Scientific Data},
shortjournal = {Sci Data},
volume = {3},
number = {1},
pages = {160018},
publisher = {Nature Publishing Group},
issn = {2052-4463},
doi = {10.1038/sdata.2016.18},
abstract = {There is an urgent need to improve the infrastructure supporting the reuse of scholarly data. A diverse set of stakeholders—representing academia, industry, funding agencies, and scholarly publishers—have come together to design and jointly endorse a concise and measureable set of principles that we refer to as the FAIR Data Principles. The intent is that these may act as a guideline for those wishing to enhance the reusability of their data holdings. Distinct from peer initiatives that focus on the human scholar, the FAIR Principles put specific emphasis on enhancing the ability of machines to automatically find and use the data, in addition to supporting its reuse by individuals. This Comment is the first formal publication of the FAIR Principles, and includes the rationale behind them, and some exemplar implementations in the community.},
langid = {english},
keywords = {Publication characteristics,Research data},
file = {/home/seb/Zotero/storage/LDIYYE8H/Wilkinson et al. - 2016 - The FAIR Guiding Principles for scientific data ma.pdf}
}
@online{wiresharkorg,
title = {Wireshark · {{Go Deep}}},
url = {https://www.wireshark.org/},
urldate = {2024-06-30},
file = {/home/seb/Zotero/storage/SZ3UZZG4/www.wireshark.org.html}
}
@article{zander2014survey,
title = {A Survey of Testbeds and Experimental Research Infrastructures for Wireless Networks},
author = {Zander, Justus and Zinner, Thomas and Bifulco, Roberto and Carle, Georg},
date = {2014},
journaltitle = {IEEE Communications Surveys \& Tutorials},
shortjournal = {IEEE Commun. Surv. Tutor.},
volume = {15},
number = {4},
pages = {1231--1246},
publisher = {IEEE},
keywords = {iot,springer,survey,testbed}
}

33
thesis/.gitignore vendored Normal file
View File

@ -0,0 +1,33 @@
*.acn
*.acr
*.alg
*.aux
*.bbl
*.blg
*.dvi
*.fdb_latexmk
*.glg
*.glo
*.gls
*.idx
*.ilg
*.ind
*.ist
*.lof
*.log
*.lot
*.maf
*.mtc
*.mtc0
*.nav
*.nlo
*.out
*.pdfsync
*.ps
*.snm
*.synctex.gz
*.toc
*.vrb
*.xdy
*.tdo
*.texpadtmp

Binary file not shown.

Binary file not shown.

162
thesis/Back/AppendixA.tex Normal file
View File

@ -0,0 +1,162 @@
% !TEX root = ../Thesis.tex
\chapter{Appendix A}
\section{Command Line Examples}\label{example:pre-post}
\subsection{Pre and post scripts}
In this example, the \verb|--unsafe| option allows the capture to run without specifying an IP or MAC address.
\verb|default| is the device name used, and \verb|-c 10| tells \iottb to capture only 10 packets.
\begin{minted}{bash}
# Command:
$ iottb sniff --pre='/usr/bin/echo "pre"' --post='/usr/bin/echo "post"' \
default --unsafe -c 10
# Stdout:
Testbed [Info]
Running pre command /usr/bin/echo "pre"
pre
Using canonical device name default
Found device at path /home/seb/iottb.db/default
Using filter None
Files will be placed in /home/seb/iottb.db/default/sniffs/2024-06-30/cap0002-2101
Capture has id dcdf1e0b-6c4d-4f01-ba16-f42a04131fbe
Capture setup complete!
Capture complete. Saved to default_dcdf1e0b-6c4d-4f01-ba16-f42a04131fbe.pcap
tcpdump took 2.12 seconds.
Ensuring correct ownership of created files.
Saving metadata.
END SNIFF SUBCOMMAND
Running post script /usr/bin/echo "post"
post
\end{minted}
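Internally, the hooks are ordinary shell commands that \iottb runs before and after the capture. The following minimal sketch illustrates how such a hook runner could look; it is hypothetical (the name \verb|run_hook| is chosen for illustration only) and not the actual \iottb implementation:
\begin{minted}{python}
import shlex
import subprocess
from typing import Optional


def run_hook(command: Optional[str], label: str) -> None:
    """Run a user-supplied --pre/--post command, if one was given.

    Hypothetical sketch for illustration; the real iottb code may differ.
    """
    if not command:
        return
    print(f'Running {label} command {command}')
    # shlex.split keeps the quoted argument ("pre") together as one token.
    subprocess.run(shlex.split(command), check=True)


# run_hook('/usr/bin/echo "pre"', 'pre') prints 'pre', as in the example above.
\end{minted}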
The contents of the capture directory (under \verb|sniffs|) for the \verb|default| device after this capture has completed:
\begin{minted}{bash}
sniffs/2024-06-30/cap0002-2101
$ tree
.
|-- capture_metadata.json
|-- default_dcdf1e0b-6c4d-4f01-ba16-f42a04131fbe.pcap
|-- stderr_dcdf1e0b-6c4d-4f01-ba16-f42a04131fbe.log
L__ stdout_dcdf1e0b-6c4d-4f01-ba16-f42a04131fbe.log
\end{minted}
The metadata file contains the following (\verb|\| is only used to break long lines so they fit into this document):\\
\verb|# capture_metadata.json|\\
\begin{minted}{json}
{
"device": "default",
"device_id": "default",
"capture_id": "dcdf1e0b-6c4d-4f01-ba16-f42a04131fbe",
"capture_date_iso": "2024-06-30T21:01:31.496870",
"invoked_command": "sudo tcpdump -# -n -c 10 -w \
/home/seb/iottb.db \
/default/sniffs/2024-06-30 \
/cap0002-2101/default_dcdf1e0b-6c4d-4f01-ba16-f42a04131fbe.pcap",
"capture_duration": 2.117154359817505,
"generic_parameters": {
"flags": "-# -n",
"kwargs": "-c 10",
"filter": null
},
"non_generic_parameters": {
"kwargs": "-w \
/home/seb/iottb.db/default/sniffs/2024-06-30 \
/cap0002-2101 \
/default_dcdf1e0b-6c4d-4f01-ba16-f42a04131fbe.pcap",
"filter": null
},
"features": {
"interface": null,
"address": null
},
"resources": {
"pcap_file": "default_dcdf1e0b-6c4d-4f01-ba16-f42a04131fbe.pcap",
"stdout_log": "stdout_dcdf1e0b-6c4d-4f01-ba16-f42a04131fbe.log",
"stderr_log": "stderr_dcdf1e0b-6c4d-4f01-ba16-f42a04131fbe.log",
"pre": "/usr/bin/echo \"pre\"",
"post": "/usr/bin/echo \"post\""
},
"environment": {
"capture_dir": "cap0002-2101",
"database": "iottb.db",
"capture_base_dir": "/home/seb/iottb.db/default/sniffs/2024-06-30",
"capture_dir_abs_path": \
"/home/seb/iottb.db/default/sniffs/2024-06-30/cap0002-2101"
}
}
\end{minted}
\section{Canonical Name}
\begin{listing}[!ht]
\inputminted[firstline=12, lastline=40]{python}{string_processing.py}
\caption{Shows how the canonical name is created.}
\label{lst:dev-canonical}
\end{listing}
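Since \cref{lst:dev-canonical} is included from the source file \verb|string_processing.py|, the following minimal sketch only illustrates the kind of normalization involved. The exact rules (lowercasing, the choice of separator, and how aliases are recorded) are assumptions made for illustration and may differ from the shipped implementation.
\begin{minted}{python}
import re

def make_canonical_name(raw_name: str) -> tuple[str, list[str]]:
    """Sketch: derive a canonical device name plus a list of aliases.

    Assumed rules: strip surrounding whitespace, lowercase, and collapse
    runs of spaces or other special characters into single hyphens.
    The original string is kept as an additional alias if it differs.
    """
    lowered = raw_name.strip().lower()
    canonical = re.sub(r'[^a-z0-9]+', '-', lowered).strip('-')
    aliases = [canonical] if raw_name == canonical else [raw_name, canonical]
    return canonical, aliases

# Example: make_canonical_name('Roomba 980 (Living Room)')
# -> ('roomba-980-living-room', ['Roomba 980 (Living Room)', 'roomba-980-living-room'])
\end{minted}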
\section{Add Device Example}
\subsection{Configuration File}\label{appendixA:add-dev-cfg}
\begin{listing}[!ht]
\inputminted[linenos, breaklines]{python}{appendixa-after-add-device-dir.txt}
\caption{Directory and file contents after adding two devices.}
\label{lst:appendix:appendixa:config-file}
\end{listing}
\section{Debug Flag Standard Output}
\begin{figure}
\centering
\begin{minted}{bash}
iottb -vvv --debug sniff roomba --unsafe -c 10
<_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'>
INFO - main - cli - 48 - Starting execution.
INFO - iottb_config - __init__ - 24 - Initializing Config object
WARNING - iottb_config - warn - 21 - DatabaseLocations are DatabaseLocationMap in the class iottb.models.iottb_config
INFO - iottb_config - load_config - 57 - Loading configuration file
INFO - iottb_config - load_config - 62 - Config file exists, opening.
DEBUG - main - cli - 52 - Verbosity: 3
DEBUG - main - cli - 54 - Debug: True
INFO - sniff - validate_sniff - 37 - Validating sniff...
INFO - sniff - sniff - 91 - sniff command invoked
DEBUG - sniff - sniff - 98 - Config loaded: <iottb.models.iottb_config.IottbConfig object at 0x7f16197d5e50>
DEBUG - sniff - sniff - 104 - Full db path is /home/seb/showcase
INFO - string_processing - make_canonical_name - 20 - Normalizing name roomba
DEBUG - string_processing - make_canonical_name - 38 - Canonical name: roomba
DEBUG - string_processing - make_canonical_name - 39 - Aliases: ['roomba']
Testbed [I]
Using canonical device name roomba
Found device at path /home/seb/showcase/roomba
INFO - sniff - sniff - 152 - Generic filter None
Using filter None
DEBUG - sniff - sniff - 160 - Previous captures <generator object Path.glob at 0x7f16194ec590>
DEBUG - sniff - sniff - 162 - Capture count is 4
DEBUG - sniff - sniff - 165 - capture_dir: cap0004-0310
Files will be placed in /home/seb/showcase/roomba/sniffs/2024-07-01/cap0004-0310
DEBUG - sniff - sniff - 172 - successfully created capture directory
Capture has id 59153b53-c49d-44de-99d2-b5a3490df29a
DEBUG - sniff - sniff - 185 - Full pcap file path is /home/seb/showcase/roomba/sniffs/2024-07-01/cap0004-0310/roomba_59153b53-c49d-44de-99d2-b5a3490df29a.pcap
INFO - sniff - sniff - 186 - pcap file name is roomba_59153b53-c49d-44de-99d2-b5a3490df29a.pcap
INFO - sniff - sniff - 187 - stdout log file is stdout_59153b53-c49d-44de-99d2-b5a3490df29a.log
INFO - sniff - sniff - 188 - stderr log file is stderr_59153b53-c49d-44de-99d2-b5a3490df29a.log
DEBUG - sniff - sniff - 191 - pgid 260696
DEBUG - sniff - sniff - 192 - ppid 12862
DEBUG - sniff - sniff - 193 - (real, effective, saved) user id: (1000, 1000, 1000)
DEBUG - sniff - sniff - 194 - (real, effective, saved) group id: (1000, 1000, 1000)
DEBUG - sniff - sniff - 209 - Flags: -# -n
DEBUG - sniff - sniff - 217 - verbosity string to pass to tcpdump: -vvv
DEBUG - sniff - sniff - 228 - KW args: -c 10
DEBUG - sniff - sniff - 237 - Non transferable (special) kw args: -w /home/seb/showcase/roomba/sniffs/2024-07-01/cap0004-0310/roomba_59153b53-c49d-44de-99d2-b5a3490df29a.pcap
INFO - sniff - sniff - 246 - tcpdump command: sudo tcpdump -# -n -vvv -c 10 -w /home/seb/showcase/roomba/sniffs/2024-07-01/cap0004-0310/roomba_59153b53-c49d-44de-99d2-b5a3490df29a.pcap
Capture setup complete!
DEBUG - sniff - sniff - 259 -
stdout: <_io.TextIOWrapper name='/home/seb/showcase/roomba/sniffs/2024-07-01/cap0004-0310/stdout_59153b53-c49d-44de-99d2-b5a3490df29a.log' mode='w' encoding='UTF-8'>.
stderr: <_io.TextIOWrapper name='/home/seb/showcase/roomba/sniffs/2024-07-01/cap0004-0310/stderr_59153b53-c49d-44de-99d2-b5a3490df29a.log' mode='w' encoding='UTF-8'>.
Capture complete. Saved to roomba_59153b53-c49d-44de-99d2-b5a3490df29a.pcap
tcpdump took 1.11 seconds.
Ensuring correct ownership of created files.
Saving metadata.
END SNIFF SUBCOMMAND
\end{minted}
\caption{Output with max verbosity and debug flag set.}
\label{fig:example-debug-output}
\end{figure}

16
thesis/Back/AppendixB.tex Normal file
View File

@ -0,0 +1,16 @@
\chapter{Appendix B}
\section{Software Requirements}\label{sec:software-req}
\iottbsc was developed on \textit{Fedora 40}\footnote{\url{https://fedoraproject.org/workstation/}}, a \textit{Linux}\footnote{\url{kernel.org}} distribution. It has not been tested on any other platform.
\iottbsc is implemented as a Python\footnote{\url{python.org}} package, \iottb, which has been developed with Python version 3.12.
\subsection{Runtime Dependencies}
\begin{itemize}
\item Poetry\footnote{\url{https://python-poetry.org/}}, version 1.8.3. Used for packaging and dependency management.
\item Click\footnote{\url{https://click.palletsprojects.com/en/8.1.x/}}, version 8.1, is a library which enables parameter handling through decorated functions.
\end{itemize}
\subsection{Testing Dependencies}
\begin{itemize}
\item Pytest\footnote{\url{https://docs.pytest.org/en/8.2.x/}}, version 8.2, is used to run the test suite, although not many tests exist.
\end{itemize}

145
thesis/Back/CommandRef.tex Normal file
View File

@ -0,0 +1,145 @@
\chapter{Appendix D}\label{appendix:cmdref}
\section{\iottb}\label{cmdref:iottb}
\begin{verbatim}
Usage: iottb [OPTIONS] COMMAND [ARGS]...
Options:
-v, --verbosity Set verbosity [default: 0; 0<=x<=3]
-d, --debug Enable debug mode
--dry-run [default: True]
--cfg-file PATH Path to iottb config file [default:
$HOME/.config/iottb/iottb.cfg]
--help Show this message and exit.
Commands:
add-device Add a device to a database
init-db
rm-cfg Removes the cfg file from the filesystem.
rm-dbs Removes ALL(!) databases from the filesystem if...
set-key-in-table-to Edit config or metadata files.
show-all Show everything: configuration, databases, and...
show-cfg Show the current configuration context
sniff Sniff packets with tcpdump
\end{verbatim}
\subsection{Initialize Database}\label{cmdref:init-db}
\begin{verbatim}
Usage: iottb init-db [OPTIONS]
Options:
-d, --dest PATH Location to put (new) iottb database
-n, --name TEXT Name of new database. [default: iottb.db]
--update-default / --no-update-default
If new db should be set as the new default
[default: update-default]
--help Show this message and exit.
\end{verbatim}
\subsection{Add device}\label{cmdref:add-device}
\begin{verbatim}
Usage: iottb add-device [OPTIONS]
Add a device to a database
Options:
--dev, --device-name TEXT The name of the device to be added. If this
string contains spaces or other special
characters normalization is
performed to derive a canonical name [required]
--db, --database DIRECTORY Database in which to add this device. If not
specified use default from config. [env var:
IOTTB_DB]
--guided Add device interactively [env var:
IOTTB_GUIDED_ADD]
--help Show this message and exit.
\end{verbatim}
\subsection{Capture traffic with \textit{tcpdump}}\label{cmdref:sniff}
\begin{verbatim}
Usage: iottb sniff [OPTIONS] [TCPDUMP-ARGS] [DEVICE]
Sniff packets with tcpdump
Options:
Testbed sources:
--db, --database TEXT Database of device. Only needed if not current
default. [env var: IOTTB_DB]
--app TEXT Companion app being used during capture
Runtime behaviour:
--unsafe Disable checks for otherwise required options.
[env var: IOTTB_UNSAFE]
--guided [env var: IOTTB_GUIDED]
--pre TEXT Script to be executed before main command is
started.
--post TEXT Script to be executed upon completion of main
command.
Tcpdump options:
-i, --interface TEXT Network interface to capture on.If not specified
tcpdump tries to find and appropriate one.
[env var: IOTTB_CAPTURE_INTERFACE]
-a, --address TEXT IP or MAC address to filter packets by.
[env var: IOTTB_CAPTURE_ADDRESS]
-I, --monitor-mode Put interface into monitor mode.
--ff TEXT tcpdump filter as string or file path.
[env var: IOTTB_CAPTURE_FILTER]
-#, --print-pacno Print packet number at beginning of line. True by
default. [default: True]
-e, --print-ll Print link layer headers. True by default.
-c, --count INTEGER Number of packets to capture. [default: 1000]
--help Show this message and exit.
\end{verbatim}
\section{Utility commands}\label{cmdref:sec:utils}
Utility commands are mostly intended for development and have not yet been integrated into the standard workflow.
\subsection{Remove Configuration}\label{cmdref:rm-cfg}
\begin{verbatim}
Usage: iottb rm-cfg [OPTIONS]
Removes the cfg file from the filesystem.
This is mostly a utility during development. Once non-standard database
locations are implemented, deleting this would lead to iottb not being able
to find them anymore.
Options:
--yes Confirm the action without prompting.
--help Show this message and exit.
\end{verbatim}
\subsection{Remove Database}\label{cmdref:rm-dbs}
\begin{verbatim}
Usage: iottb rm-dbs [OPTIONS]
Removes ALL(!) databases from the filesystem if they're empty.
Development utility currently unfit for use.
Options:
--yes Confirm the action without prompting.
--help Show this message and exit.
\end{verbatim}
\subsection{Display Configuration File}\label{cmdref:show-cfg}
\begin{verbatim}
Usage: iottb show-cfg [OPTIONS]
Show the current configuration context
Options:
--cfg-file PATH Path to the config file [default:
/home/seb/.config/iottb/iottb.cfg]
-pp Pretty Print
--help Show this message and exit
\end{verbatim}
\subsection{"Show All"}\label{cmdref:show-all}
\begin{verbatim}
Usage: iottb show-all [OPTIONS]
Show everything: configuration, databases, and device metadata
Options:
--help Show this message and exit.
\end{verbatim}

Binary file not shown.

Binary file not shown.

View File

@ -0,0 +1,44 @@
% !TEX root = ../Thesis.tex
\chapter{Introduction}\label{introduction}
\iot devices are becoming increasingly prevalent in modern homes, offering a range of benefits such as controlling home lighting, remote video monitoring, and automated cleaning \citep{iothome2019}.
These conveniences are made possible by the sensors and networked communication capabilities embedded within these devices.
However, these features also pose significant privacy and security risks \citep{islamiot2023}.
IoT devices are often integrated into home networks and communicate over the internet with external servers, potentially enabling surveillance or unauthorized data sharing without the user's knowledge or consent \citep{infoexpiot}. Moreover, even in the absence of malicious intent by the manufacturer, these devices are still vulnerable to programming bugs and other security failures \citep{peekaboo2020}.
\medskip
Security researchers focused on the security and privacy of such \iot devices rely on various utilities and tools for conducting research.
These tools are often glued together in scripts with arbitrary decisions about file naming and data structuring.
Such impromptu scripts typically have a narrow range of application, making them difficult to reuse across different projects. Consequently, useful parts are manually extracted and incorporated into new scripts for each project, exacerbating the problem.
\medskip
This approach leads to scattered data, highly tailored scripts, and a lack of standardized methods for sharing or reproducing experiments. The absence of standardized tools and practices results in inconsistencies in data collection and storage, making it difficult to maintain compatibility across projects.
Furthermore, the lack of conventions about file naming and data structuring leads to issues in finding and accessing the data.
For research groups, these issues are further compounded during the onboarding of new members, who must navigate this fragmented landscape and often create their own ad-hoc solutions, perpetuating the cycle of inefficiency and inconsistency.
\medskip
To systematically and reproducibly study the privacy and security of IoT devices, an easy-to-use testbed that automates and standardizes various aspects of experimenting with IoT devices is needed.
\section{Motivation}\label{sec:motivation}
The primary motivation behind this project is to address the challenges faced by security researchers in the field of IoT device security and privacy.
The scattered nature of data, the lack of standardized tools, and the ad-hoc methods used for data collection or processing, are an obstacle for researchers who want to produce valid and reproducible results \citep{fursinckorg2021}.
A standardized testbed, enabling a more systematic approach to collecting and analyzing network data from \iot devices, can make the tedious and error-prone aspects of conducting experiments on \iot devices more bearable, while at the same time enhancing data quality by establishing data collection and storage standards that support interoperability.
This bachelor project is specifically informed by the needs of the PET research group at the University of Basel, who will utilize it to run IoT device experiments, and as a foundation to build more extensive tooling.
\section{Goal}\label{sec:goal}
The goal of this project is to design and implement a testbed for IoT device experiments. To aid reproducibility, there are two main objectives:
First, the testbed should automate key aspects of running experiments with IoT devices, particularly the setup and initialization of data collection processes as well as some basic post-collection data processing.
Secondly, the testbed should standardize how data from experiments is stored. This includes standardizing data and metadata organization, establishing a naming scheme, and defining necessary data formats.
A more detailed description of how this is adapted for this project follows in \cref{ch:adaptation}.
\section{Outline}
This report documents the design and implementation of an \iot testbed.
In the remainder of the text, the typographically formatted string ``\iottbsc'' refers to this project's conception of a testbed, whereas ``\iottb'' specifically denotes the Python package which is the implementation artifact of this project.
This report outlines the general goals of a testbed, details the specific functionalities of \iottbsc, and explains how the principles of automation and standardization are implemented.
We begin by giving some background on the most immediately useful concepts.
\cref{ch:adaptation} derives requirements for \iottbsc starting from first principles and concludes by delineating the scope considered for implementation, which is described in \cref{ch4}.
In \cref{ch:5-eval} we evaluate \iottbsc, and more specifically, the \iottb software package against the requirements stated in \cref{ch:adaptation}.
We conclude in \cref{ch:conclusion} with an outlook on further development for \iottbsc.

View File

@ -0,0 +1,48 @@
% !TEX root = ../Thesis.tex
\chapter{Background}
This section provides the necessary background to understand the foundational concepts related to IoT devices, testbeds, and data principles that inform the design and implementation of \iottbsc.
\section{Internet of Things}
The \iot refers to the connection of “things” other than traditional computers to the internet. The decreasing size of microprocessors has enabled their integration into smaller and smaller objects. Today, objects like security cameras, home lighting, or children's toys may contain a processor and embedded software that enables them to interact with the internet. The Internet of Things encompasses objects whose purpose has a physical dimension, such as using sensors to measure the physical world or functioning as simple controllers. When these devices can connect to the internet, they are considered part of the Internet of Things and are referred to as \textbf{IoT devices} (see \citet{whatissmartdevice2018} and \citet{iotfundamentals}).
\section{Testbed}
A testbed is a controlled environment set up to perform experiments and tests on new technologies. The concept is used across various fields such as aviation, science, and industry. Despite the varying contexts, all testbeds share the common goal of providing a stable, controlled environment to evaluate the performance and characteristics of the object of interest.
Examples of testbeds include:
\begin{enumerate}
\item \textbf{Industry and Engineering}: In industry and engineering, the term \emph{platform} is often used to describe a starting point for product development. A platform in this context can be considered a testbed where various components and technologies are integrated and tested together before final deployment.
\item \textbf{Natural Sciences}: In the natural sciences, laboratories serve as testbeds by providing controlled environments for scientific experiments. For example, climate chambers are used to study the effects of different environmental conditions on biological samples (e.g., in \citet{vaughan2005use}). Another example is the use of wind tunnels in aerodynamics research to simulate and study the effects of airflow over models of aircraft or other structures.
\item \textbf{Computing}: In computing, specifically within software testing, a suite of unit tests, integrated development environments (IDEs), and other tools could be considered as a testbed. This setup helps in identifying and resolving potential issues before deployment. By controlling parameters of the environment, a testbed can ensure that the software behaves as expected under specified conditions, which is essential for reliable and consistent testing.
\item \textbf{Interdisciplinary}: Testbeds can take on considerable scale. For instance, \citet{tbsmartgrid2013} provides insight into the aspects of a testbed for a smart electric grid.
This testbed is composed of multiple systems (an electrical grid, the internet, and communication provision), each of which is already a complex environment in its own right.
The testbed must, via simulation or prototyping, provide control mechanisms, communication, and physical system components.
\end{enumerate}
\section{FAIR Data Principles}
\label{concept:fair}
The \emph{FAIR Data Principles} were first introduced by \citet{wilkinson_fair_2016} with the intention to improve the reusability of scientific data. The principles address \textbf{F}indability, \textbf{A}ccessibility, \textbf{I}nteroperability, and \textbf{R}eusability. Data storage designers may use these principles as a guide when designing data storage systems intended to hold data for easy reuse.
For a more detailed description, see \citep{go-fair}.
\section{Network Traffic}\label{sec:network-traffic}
Studying \iot devices fundamentally involves understanding their network traffic behavior.
This is because network traffic contains (either explicitly or implicitly embedded in it) essential information of interest.
Here are key reasons why network traffic is essential in the context of \iot device security:
\begin{enumerate}
\item \textbf{Communication Patterns}: Network traffic captures the communication patterns between IoT devices and external servers or other devices within the network. By analyzing these patterns, researchers can understand how data flows in and out of the device, which is critical for evaluating performance and identifying any unauthorized communications or unintended leaking of sensitive information.
\item \textbf{Protocol Analysis:} Examining the protocols used by IoT devices helps in understanding how they operate. Different devices might use various communication protocols, and analyzing these can reveal insights into their compatibility, efficiency, and security. Protocol analysis can also uncover potential misconfigurations or deviations from expected behavior.
\item \textbf{Flow Monitoring:} Network traffic analysis is a cornerstone of security research. It allows researchers to identify potential security threats such as data breaches, unauthorized access, and malware infections. By monitoring traffic, one can detect anomalies that may indicate security incidents or vulnerabilities within the device.
\item \textbf{Information Leakage}: \iot devices are often deployed in a home environment and connect to the network through wireless technologies \citep{iothome2019}. This allows an adversary to passively observe traffic. While often this traffic is encrypted, the network flow can leak sensitive information, which is extracted through more complex analysis of communication traffic and Wi-Fi packets \citep{friesssniffing2018}, \citep{infoexpiot}. In some cases, the adversary can determine the state of the smart environment and their users \citep{peekaboo2020}.
\end{enumerate}
\section{(Network) Packet Capture}
Network \textit{packet capture}\footnote{also known as \emph{packet sniffing}, \emph{network traffic capture}, or just \emph{sniffing}. The latter is often used when referring to nefarious practices.} fundamentally describes the act or process of intercepting and storing data packets traversing a network. It is the principal technique used for studying the behavior and communication patterns of devices on a network. For the reasons mentioned in \cref{sec:network-traffic}, packet capturing is the main data collection mechanism used in \iot device security research, and also the one considered for this project.
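To make the process concrete, the following is a minimal Python sketch of how a capture can be driven programmatically, loosely mirroring the \textit{tcpdump} invocations shown in \cref{example:pre-post}. The flags, file naming, and use of \verb|sudo| are simplifying assumptions for illustration rather than the exact behavior of \iottb.
\begin{minted}{python}
import subprocess
import uuid
from pathlib import Path

def capture(interface: str, count: int, out_dir: Path) -> Path:
    """Sketch: run tcpdump once and store the pcap plus its logs."""
    capture_id = uuid.uuid4()
    out_dir.mkdir(parents=True, exist_ok=True)
    pcap = out_dir / f'capture_{capture_id}.pcap'
    cmd = ['sudo', 'tcpdump', '-#', '-n', '-i', interface,
           '-c', str(count), '-w', str(pcap)]
    # Keep tcpdump's own output next to the capture for later inspection.
    with open(out_dir / f'stdout_{capture_id}.log', 'w') as out, \
         open(out_dir / f'stderr_{capture_id}.log', 'w') as err:
        subprocess.run(cmd, stdout=out, stderr=err, check=True)
    return pcap
\end{minted}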
\section{Automation Recipes}
\todoRevise()
Automation recipes can be understood as a way of defining a sequence of steps needed for a process.
In the field of machine learning, \textit{Collective Mind}\footnote{\url{https://github.com/mlcommons/ck}} provides a small framework to define reusable recipes for building, running, benchmarking and optimizing machine learning applications.
A key aspect is that some of these recipes are platform-independent, which has enabled wider testing and benchmarking of machine learning models. Even if a given recipe is not yet platform-independent, it can be supplemented with user-specific scripts which handle the platform specifics. Furthermore, it is possible to create a new recipe from the old recipe and the new script which, when made accessible, essentially extends the applicability of the original recipe \citep{friesssniffing2018}.
Automation recipes express the fact that some workflow is automated, irrespective of the underlying tooling. A simple script or application can be considered a recipe (or part of one).
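As a purely illustrative sketch (the structure and step strings below are assumptions, not part of \iottb), an automation recipe can be represented as an ordered list of commands that is executed as a unit, in the spirit of the \verb|--pre| and \verb|--post| hooks shown in \cref{example:pre-post}:
\begin{minted}{python}
import shlex
import subprocess

# A recipe as an ordered list of shell steps: prepare the environment,
# run the main data collection step, then post-process.
recipe = [
    '/usr/bin/echo "pre"',
    'tcpdump -# -n -c 10 -w capture.pcap',
    '/usr/bin/echo "post"',
]

def run_recipe(steps: list[str]) -> None:
    """Execute each step in order, stopping at the first failure."""
    for step in steps:
        subprocess.run(shlex.split(step), check=True)
\end{minted}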

View File

@ -0,0 +1,141 @@
\chapter{Adaptation}\label{ch:adaptation}
In this chapter, we outline the considerations made during the development of the IoT testbed, \iottbsc.
Starting from first principles, we derive the requirements for our testbed and finally establish the scope for \iottbsc.
The implemented testbed which results from this analysis, the software package \iottb, is discussed in \cref{ch4}.\\
\section{Principal Objectives}\label{sec:principles-and-objectives}
The stated goal of this bachelor project (see \cref{sec:goal}) is to create a testbed for \iot devices that automates aspects of the involved workflow, with the aim of increasing reproducibility, standardization, and compatibility of tools and data across project boundaries.
We specify two key objectives supporting this goal:
\begin{enumerate}[label=\textit{Objective \arabic*}]
\item \textbf{Automation Recipes:}\label{obj:recipies} The testbed should support specification and repeated execution of important aspects of experiments with IoT devices, such as data collection and analysis (see \citep{fursinckorg2021})
\item \textbf{FAIR Data Storage:}\label{obj:fair} The testbed should store data in accordance with the FAIR \citep{go-fair} principles.
\end{enumerate}
\section{Requirements Analysis}\label{sec:requirements}
In this section, we present the results of the requirements analysis based on the principal objectives.
The requirements derived for \ref{obj:recipies} are presented in \cref{table:auto_recipe_requirements}.
In \cref{table:fair_data_storage_requirements} we present requirements based on \ref{obj:fair}.
\begin{table}[H]
\centering
\caption{Automation Recipes Requirements}
\label{table:auto_recipe_requirements}
\begin{minipage}{\textwidth}
\begin{enumerate}[label=\textit{R1.\arabic*}]
\item \label{req:auto_install_tools} \textbf{Installation of Tools}: Support installation of necessary tools like \textit{mitmproxy} \cite{mitmproxy}, \textit{Wireshark} \cite{wiresharkorg}, or \textit{tcpdump} \cite{tcpdump}.
\textit{Reasoning:}
There are various tools used for data collection and specifically packet capture.
Automating the installation of necessary tools ensures that all required software is available and configured correctly without manual intervention. This reduces the risk of human error during setup and guarantees that the testbed environment is consistently prepared for use. Many platforms, notably most common Linux distributions, come with package managers which provide a simple command-line interface for installing software while automatically handling dependencies. This allows tools to be quickly installed, making it a \textit{lower priority} requirement for \iottbsc.
\item \label{req:auto_config_start} \textbf{Configuration and Start of Data Collection}: Automate the configuration and start of data collection processes. Specific subtasks include:
\begin{enumerate}
\item Automate wireless hotspot management on capture device.
\item Automatic handling of network capture, including the collection of relevant metadata.
\end{enumerate}
\textit{Reasoning:}
Data collection is a central step in the experimentation workflow. Configuration is time-consuming and prone to error, which suggests that automating this process is useful. As mentioned in \cref{sec:motivation}, current practices lead to incompatible data and difficult-to-reuse scripts.
Automating the configuration and start of data collection processes ensures a standardized approach, reducing the potential for user error
and thereby increasing data compatibility and efficient use of tools. Automating this process must be a central aspect of \iottbsc.
\item \label{req:auto_data_processing} \textbf{Data Processing}: Automate data processing tasks.
\textit{Reasoning:} Some network capture tools produce output in a binary format. To make the data available to other processes, often the data must be transformed in some way.
Data processing automation ensures that the collected data is processed uniformly and efficiently, enhancing its reusability and interoperability. Processing steps may include cleaning, transforming, and analyzing the data, which are essential for deriving meaningful insights. Automated data processing saves time and reduces the potential for human error. It ensures that data handling procedures are consistent, which is crucial for comparing results across different experiments and ensuring the validity of findings.
\item \label{req:auto_reproducibility} \textbf{Reproducibility}: Ensure that experiments can be repeated with the same setup and configuration.
\textit{Reasoning:} A precondition for reproducible scientific results is the ability to run experiments repeatedly with all relevant aspects set up and configured identically.
\item \label{req:auto_execution_control} \textbf{Execution Control}: Provide mechanisms for controlling the execution of automation recipes (e.g., start, stop, status checks).
\textit{Reasoning:} Control mechanisms are essential for managing the execution of automated tasks. This includes starting, stopping, and monitoring the status of these tasks to ensure they are completed successfully.
\item \label{req:auto_error_logging} \textbf{Error Handling and Logging}: Include robust error handling and logging to facilitate debugging to enhance reusability.
\textit{Reasoning:} Effective error handling and logging improve the robustness and reliability of the testbed. Automation recipes may contain software with incompatible logging mechanisms.
To facilitate development and troubleshooting, a unified and principled logging approach is important for \iottbsc.
\item \label{req:auto_documentation} \textbf{Documentation}: Provide clear documentation and examples for creating and running automation recipes.
\end{enumerate}
\end{minipage}
\end{table}
\begin{table}[H]
\centering
\caption{FAIR Data Storage Requirements}
\label{table:fair_data_storage_requirements}
\begin{minipage}{\textwidth}
\begin{enumerate}[label=\textit{R2.\arabic*}]
\item \label{req:fair_data_meta_inventory} \textbf{Data and Metadata Inventory}: \iottbsc should provide an inventory of data and metadata that typically need to be recorded (e.g., raw traffic, timestamps, device identifiers).
\textit{Reasoning:} Providing a comprehensive inventory of data and metadata ensures that data remains findable after collection. Including metadata increases interpretability and gives context necessary for extracting reproducible results.
\item \label{req:fair_data_formats} \textbf{Data Formats and Schemas}: Define standardized data formats and schemas.
\textit{Reasoning:} Standardized data formats and schemas ensure consistency and interoperability.
\item \label{req:fair_file_naming} \textbf{File Naming and Directory Hierarchy}: Establish clear file naming conventions and directory hierarchies for organized data storage.
\textit{Reasoning:} This enhances findability and accessibility.
\item \label{req:fair_preservation} \textbf{Data Preservation Practices}: Implement best practices for data preservation, including recommendations from authoritative sources like the Library of Congress \citep{recommendedformatrsLOC}.
\textit{Reasoning:} Implementing best practices for data preservation can mitigate data degradation and ensures integrity of data over time. This ensures long-term accessibility and reusability.
\item \label{req:fair_accessibility} \textbf{Accessibility Controls}: Ensure data accessibility with appropriate permissions and access controls.
\item \label{req:fair_interoperability} \textbf{Interoperability Standards}: Use widely supported formats and protocols to facilitate data exchange and interoperability.
\item \label{req:fair_reusability} \textbf{Reusability Documentation}: Provide detailed metadata to support data reuse by other researchers.
\end{enumerate}
\end{minipage}
\end{table}
We return to these when we evaluate \iottbsc in \cref{ch:5-eval}.
\section{Scope}\label{sec:scope}
This section defines the scope of the testbed \iottbsc.
To guide the implementation of the software component of this bachelor project, \iottb,
we focus on a specific set of requirements that align with the scope of a bachelor project.
While the identified requirements encompass a broad range of considerations, we have prioritized those that are most critical to achieving the primary objectives of the project.
For this project, we delineate our scope regarding the principal objectives as follows:
\begin{itemize}
\item \ref{obj:recipies}: \iottb focuses on complying with \ref{req:auto_config_start}, \ref{req:auto_reproducibility}.
\item \ref{obj:fair}: \iottb ensures FAIR data storage implicitly, with the main focus lying on \ref{req:fair_data_formats}, \ref{req:fair_data_meta_inventory}, \ref{req:fair_file_naming}.
\end{itemize}
\subsection{Model Environment}\label{sec:assumed-setup}
In this section, we describe the environment model assumed as the basis for conducting \iot device experiments.
This mainly involves delineating the network topology. Care is taken to make this environment, over which the \iottb testbed software has no control, easily reproducible \citep{vacuumpie2023}.\\
We assume that the \iot device generally requires a Wi-Fi connection.
This implies that the environment is configured to reliably capture network traffic without disrupting the \iot device's connectivity. This involves setting up a machine with internet access (wired or wireless) and possibly one Wi-Fi card supporting AP mode to act as the \ap for the \iot device under test \citep{surveytestingmethods2022}.
Additionally, the setup must enable bridging the IoT-AP network to the internet to ensure the \iot device retains internet connectivity.\\
Specifically, the assumed setup for network traffic capture includes the following components:
\begin{enumerate}
\item \textbf{IoT Device:} The device under investigation, connected to a network.
\item \textbf{Capture Device:} A computer or dedicated hardware device configured to intercept and record network traffic. This is where \iottb runs.
\item \textbf{Wi-Fi \ap:} The \ap through which the \iot device gets network access.
\item \textbf{Router/ Internet gateway:} The network must provide internet access.
\item \textbf{Switch or software bridge:} Either a switch or an \os with software bridge support must be available in order to implement one of the setups described in \cref{fig:cap-setup1} and \cref{fig:cap-setup2}.
\item \textbf{Software:} tcpdump is needed for network capture.
\end{enumerate}
\newpage
\begin{figure}[!ht]
\centering
\includegraphics[width=0.75\linewidth]{Figures/network-setup1.png}
\caption{Capture setup with separate Capture Device and AP}
\label{fig:cap-setup1}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[width=0.75\linewidth]{Figures/setup2.png}
\caption{Capture setup where the capture device doubles as the \ap for the \iot device.}
\label{fig:cap-setup2}
\end{figure}
\newpage

Some files were not shown because too many files have changed in this diff Show More