1Department of Computer Science, Western University, London, ON, Canada
2Department of Electrical and Computer Engineering, Idaho State University, Pocatello, ID, USA
3School of Informatics, Kochi University of Technology, Kochi, Japan
The ubiquitous availability of WiFi-enabled devices in our homes invites rapid adoption of WiFi sensing technology for smart home applications. WiFi sensing is a privacy-preserving, resource-aware sensing modality that requires no additional deployment cost and overcomes limitations of traditional sensing modalities. In this study we adopt a two-phase evaluation framework for WiFi sensing across three smart home applications: room occupancy detection, human activity recognition (HAR), and indoor localization. In Phase 1 (feature representation), we compare our proposed rolling-variance transform against raw amplitude and SHARP-based phase sanitisation, showing that rolling variance achieves up to 1.6× higher accuracy without requiring phase information. In Phase 2 (learning model), we compare traditional machine learning and deep learning architectures, demonstrating that DL models, particularly the 1D-CNN, best exploit the temporal structure in rolling-variance features.
A novel time-invariant feature representation extracted from CSI amplitude alone, with no phase information needed. Achieves up to 1.6× higher accuracy than raw amplitude on challenging HAR tasks.
Rolling variance computes in 0.03 s vs. SHARP's 3.5 s, a roughly 100× speedup, while delivering superior accuracy across all evaluated tasks.
Systematic comparison of feature representations (Phase 1) and learning architectures (Phase 2) across 4 datasets and 3 smart-home tasks.
Four labeled CSI datasets from ESP32-C6 across home and office environments, plus a real-time collection tool, all publicly released.
Figure 1: End-to-end system overview showing signal model, data pipeline, and three preprocessing approaches.
The rolling variance acts as a high-pass envelope detector: slow baseline drift is suppressed while activity-induced fluctuations are amplified. Computed efficiently via cumulative sums in O(N) time.
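The cumulative-sum trick can be sketched in a few lines of NumPy. This is a minimal illustration of an O(N) rolling variance; the window alignment and bias convention are assumptions, not the released implementation:

```python
import numpy as np

def rolling_variance(x, w):
    """Rolling (biased) variance of a 1-D signal over window w in O(N).

    Uses cumulative sums of x and x^2, so cost is independent of w.
    """
    x = np.asarray(x, dtype=np.float64)
    c1 = np.cumsum(np.insert(x, 0, 0.0))      # running sum of x
    c2 = np.cumsum(np.insert(x * x, 0, 0.0))  # running sum of x^2
    s1 = c1[w:] - c1[:-w]                     # per-window sums of x
    s2 = c2[w:] - c2[:-w]                     # per-window sums of x^2
    mean = s1 / w
    return s2 / w - mean ** 2                 # Var = E[x^2] - (E[x])^2

# Slow drift leaves the rolling variance near zero; an activity-like
# oscillation in the middle of the trace raises it sharply.
t = np.arange(1000)
drift = 0.001 * t                                       # slow baseline drift
burst = np.where((t > 400) & (t < 600), np.sin(0.5 * t), 0.0)
v = rolling_variance(drift + burst, 20)
```

The drift-only region yields variances near 1e-5 while the burst region sits near 0.5, which is the envelope-detector behaviour described above.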
| Dataset | Env. | Task | # Cls | Split | Recorded | Packets | Duration |
|---|---|---|---|---|---|---|---|
| Home HAR (train) | Home | HAR | 7 | Session holdout | Oct 2025 | 2.7M | 232 min |
| Home HAR (test) | Home | HAR | 7 | ~3.5 mo gap | Feb 2026 | 2.7M | 233 min |
| Home Occ. (train) | Home | Occ. | 3 | Temporal split | Feb 2026 | 1.1M | 100 min |
| Home Occ. (test) | Home | Occ. | 3 | same-session | Feb 2026 | 0.6M | 50 min |
| Office HAR | Office | HAR | 4 | % split | Oct 2025 | 0.8M | 66 min |
| Office Loc. (train) | Office | Loc. | 4 | File holdout | Oct 2025 | 0.9M | 67 min |
| Office Loc. (test) | Office | Loc. | 4 | File holdout | Oct 2025 | 0.7M | 57 min |
| Parameter | SHARP | Ours |
|---|---|---|
| Monitored channel | 802.11ac ch. 42 | 802.11n HT20 |
| OFDM sample duration, T | 3.2 × 10⁻⁶ s | 3.2 × 10⁻⁶ s |
| No. OFDM sub-channels, M | 256 (245 used) | 64 (52 used) |
| Subcarrier spacing, Ξf | 312.5 kHz | 312.5 kHz |
| No. monitoring antennas | 4 | 1 |
| Dataset | # Cls | Silhouette ↑ | Fisher ↑ |
|---|---|---|---|
| Home HAR | 7 | −0.155 | 0.164 |
| Home Occ. | 3 | 0.412 | 0.594 |
| Office HAR | 4 | 0.407 | 4.030 |
| Office Loc. | 4 | 0.800 | 23.446 |
| *Reference Baselines* | | | |
| Iris | 3 | 0.513 | 6.632 |
| MNIST | 10 | 0.063 | 0.334 |
| CIFAR-10 | 10 | −0.058 | 0.088 |
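Separability scores like these can be reproduced on any feature matrix. Below is a minimal sketch using scikit-learn's silhouette score plus a hand-rolled Fisher ratio (between-class over within-class variance, averaged over feature dimensions; the exact Fisher variant behind the table is an assumption), checked on the Iris reference baseline:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.metrics import silhouette_score

def fisher_ratio(X, y):
    """Mean over features of between-class / within-class sum of squares.

    One common Fisher discriminant ratio; the table's exact variant is assumed.
    """
    mu = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in np.unique(y):
        Xc = X[y == c]
        between += len(Xc) * (Xc.mean(axis=0) - mu) ** 2
        within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
    return float(np.mean(between / within))

X, y = load_iris(return_X_y=True)
sil = silhouette_score(X, y)   # silhouette against ground-truth labels
fr = fisher_ratio(X, y)
```

Higher is better for both: silhouette measures cluster compactness vs. separation in [−1, 1], while the Fisher ratio is unbounded above.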
| Dataset | Pipeline | Model | L* | Acc |
|---|---|---|---|---|
| Home HAR (7 cls) | Amplitude | RF | 2k | .269 |
| Home HAR (7 cls) | Amplitude | XGB | 1k | .292 |
| Home HAR (7 cls) | Roll. Var. | RF | 500 | .463 |
| Home HAR (7 cls) | Roll. Var. | XGB | 2k | .446 |
| Home Occ. (3 cls) | Amplitude | RF | 500 | 1.00 |
| Home Occ. (3 cls) | Amplitude | XGB | 500 | 1.00 |
| Home Occ. (3 cls) | Roll. Var. | RF | 2k | .978 |
| Home Occ. (3 cls) | Roll. Var. | XGB | 500 | .987 |
| Office HAR (4 cls) | Amplitude | RF | 500 | .815 |
| Office HAR (4 cls) | Amplitude | XGB | 2k | .833 |
| Office HAR (4 cls) | Roll. Var. | RF | 2k | .933 |
| Office HAR (4 cls) | Roll. Var. | XGB | 1k | .908 |
| Office Loc. (4 cls) | Amplitude | RF | 500 | .906 |
| Office Loc. (4 cls) | Amplitude | XGB | 1k | .904 |
| Office Loc. (4 cls) | Roll. Var. | RF | 500 | .891 |
| Office Loc. (4 cls) | Roll. Var. | XGB | 500 | .868 |
| W | Architecture | Home HAR L* / Acc | Home Occ. L* / Acc | Office HAR L* / Acc | Office Loc. L* / Acc |
|---|---|---|---|---|---|
| 20 | 1D-CNN | 1k / .529 | 1k / .993 | 1k / .942 | 2k / .957 |
| 20 | 1D-Conv-LSTM | 500 / .510 | 2k / 1.00 | 500 / .933 | 1k / .967 |
| 20 | MLP | 1k / .351 | 500 / .958 | 500 / .836 | 1k / .890 |
| 200 | 1D-CNN | 2k / .513 | 2k / 1.00 | 500 / .941 | 1k / .957 |
| 200 | 1D-Conv-LSTM | 2k / .525 | 1k / 1.00 | 500 / .933 | 500 / .889 |
| 200 | MLP | 1k / .350 | 500 / .989 | 500 / .882 | 1k / .924 |
| 2000 | 1D-CNN | 500 / .355 | 1k / .996 | 500 / .824 | 1k / .934 |
| 2000 | 1D-Conv-LSTM | 500 / .468 | 500 / .994 | 500 / .924 | 500 / .916 |
| 2000 | MLP | 500 / .143 | 500 / .989 | 500 / .483 | 1k / .935 |
Best accuracy (max over window lengths & models) per feature pipeline. Rolling variance outperforms phase-dependent methods on the hardest tasks.
Best accuracy per DL architecture across all rolling-variance window sizes W and sequence lengths L. 1D-CNN leads on HAR; 1D-Conv-LSTM tops localization.
Figure 3: 1D-CNN architecture achieving best overall results on HAR tasks.
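As a rough illustration of this family of models, here is a minimal PyTorch sketch of a 1D-CNN over rolling-variance feature sequences. The layer widths, kernel sizes, and pooling choices are illustrative assumptions, not the exact architecture in Figure 3:

```python
import torch
import torch.nn as nn

class CSI1DCNN(nn.Module):
    """Minimal 1D-CNN for window classification on CSI features.

    Input shape: (batch, n_subcarriers, L) -- channels are the 52 usable
    HT20 subcarriers, the time axis is the length-L feature window.
    """
    def __init__(self, n_subcarriers=52, n_classes=7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_subcarriers, 64, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(64, 128, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse time axis -> works for any L
        )
        self.classifier = nn.Linear(128, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).squeeze(-1))

model = CSI1DCNN()
logits = model(torch.randn(8, 52, 1000))  # batch of 8 windows, L = 1000
```

The adaptive pooling makes the head independent of L, which is convenient when sweeping L ∈ {500, 1k, 2k} as in the ablations.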
| Method | Wall (s) | CPU (s) |
|---|---|---|
| Rolling Var. (W=20) | 0.031 | 0.031 |
| Rolling Var. (W=200) | 0.027 | 0.016 |
| Rolling Var. (W=2000) | 0.027 | 0.031 |
| SHARP Sanitisation | 3.508 | 3.516 |
Rolling variance is over 100× faster than SHARP!
| Model | Train (s) | Infer (s) | Acc. |
|---|---|---|---|
| RandomForest | 20.38 | 0.36 | .449 |
| XGBoost | 1795.80 | 0.34 | .476 |
| 1D-CNN | 185.12 | 0.89 | .454 |
| MLP | 245.90 | 0.39 | .382 |
| W | Home HAR | Home Occ. | Off. HAR | Off. Loc. |
|---|---|---|---|---|
| 20 | 0.529 | 0.993 | 0.942 | 0.957 |
| 200 | 0.512 | 1.000 | 0.941 | 0.957 |
| 2000 | 0.355 | 0.996 | 0.824 | 0.933 |
| L | Home HAR | Home Occ. | Off. HAR | Off. Loc. |
|---|---|---|---|---|
| 500 | 0.507 | 0.984 | 0.937 | 0.943 |
| 1000 | 0.529 | 0.993 | 0.942 | 0.953 |
| 2000 | 0.504 | 0.982 | 0.917 | 0.957 |
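The L ablation presumes the CSI feature stream is first cut into fixed-length windows for training and inference. A minimal segmentation sketch (the non-overlapping default stride is an assumption, not necessarily the paper's setting):

```python
import numpy as np

def segment(features, L, stride=None):
    """Cut a (time, subcarriers) feature stream into length-L windows.

    Returns an array of shape (num_windows, L, subcarriers); stride
    defaults to L, i.e. non-overlapping windows.
    """
    stride = stride or L
    starts = range(0, features.shape[0] - L + 1, stride)
    return np.stack([features[s:s + L] for s in starts])

stream = np.random.default_rng(0).normal(size=(5000, 52))  # 52 HT20 subcarriers
windows = segment(stream, L=1000)             # 5 non-overlapping windows
overlapped = segment(stream, L=1000, stride=500)  # 50% overlap -> 9 windows
```

Overlapping strides trade more training examples against correlated windows, which matters for honest session-holdout splits.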
1.6× higher accuracy with rolling variance vs. raw amplitude on challenging HAR tasks
100× faster than SHARP-based phase sanitisation
1D-CNN best exploits temporal structure, reaching 52.9% on 7-class Home HAR
No phase information needed; works with amplitude-only CSI
Future work: Cross-environment transfer learning, multi-subject generalization, and on-device deployment on ESP32 microcontrollers.