The RX reader thread sets setblocking(0) and the TX writer (via aprslib
sendall) sets setblocking(1) on the same socket without synchronization.
This race can flip the socket's blocking mode mid-operation, causing partial
writes in which other stations' APRS-IS stream data is concatenated onto
retransmitted packets.
Add a shared _socket_lock between send() and _socket_readlines() so the
socket blocking mode is never changed by one thread while the other is
mid-operation.
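A minimal sketch of the locking pattern described above (class and method names mirror the commit but are illustrative, not the real APRSD implementation): both paths take the same lock before changing the socket's blocking mode, so neither thread can flip it while the other is mid-operation.

```python
import socket
import threading

class SocketIO:
    """Illustrative wrapper: one lock guards all blocking-mode changes."""

    def __init__(self, sock: socket.socket):
        self._sock = sock
        self._socket_lock = threading.Lock()

    def send(self, data: bytes) -> None:
        # TX path: blocking write, serialized with the reader.
        with self._socket_lock:
            self._sock.setblocking(True)
            self._sock.sendall(data)

    def _socket_readlines(self, bufsize: int = 4096) -> bytes:
        # RX path: non-blocking read under the same lock.
        with self._socket_lock:
            self._sock.setblocking(False)
            try:
                return self._sock.recv(bufsize)
            except BlockingIOError:
                return b""
```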
APRSDClient no longer has a .client property after the driver refactor
(commit 1c39546). Instantiating APRSDClient() is sufficient to trigger
connection via auto_connect=True.
Older versions persisted BeaconPackets to packettrack.json. On restart
these zombie beacons would be retransmitted by the scheduler. Now
PacketTrack.load() strips any BeaconPackets from the persisted data.
Workaround: delete ~/.config/aprsd/packettrack.json before restarting.
BeaconPackets are now skipped in PacketTrack — they are fire-and-forget
and never receive an ack, so tracking them caused the scheduler to
re-transmit them as unwanted duplicates.
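A schematic sketch of the two behaviors above (class names follow APRSD's but this is a simplified stand-in, not the actual code): the tracker refuses to add fire-and-forget BeaconPackets, and load() drops any that an older version persisted.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    msgNo: int

@dataclass
class BeaconPacket(Packet):
    pass

class PacketTrack:
    def __init__(self):
        self.data = {}

    def add(self, packet: Packet) -> None:
        # Beacons never receive an ack, so tracking them only makes
        # the scheduler re-send them as unwanted duplicates.
        if isinstance(packet, BeaconPacket):
            return
        self.data[packet.msgNo] = packet

    def load(self, persisted: dict) -> None:
        # Strip zombie beacons left over from an older on-disk format.
        self.data = {
            msg_no: pkt
            for msg_no, pkt in persisted.items()
            if not isinstance(pkt, BeaconPacket)
        }
```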
AckPackets already being tracked are no longer reset when the same
message arrives via multiple digipeater paths, which was restarting
the retry counter and flooding RF with duplicate acks.
Added timing guards in both scheduler loops to prevent threadpool race
conditions where multiple workers could fire before send_count was
incremented.
Add stats_store_interval config option to control how frequently
the statsstore.json file is written to disk. Default remains 10
seconds for backward compatibility.
This allows reducing disk I/O in production deployments and
can help avoid potential file corruption issues when external
processes read the stats file.
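A possible config-file usage, in the same style as the other options; the commit does not say which config group the option lives in, so the group name here is an assumption.

```ini
# Assumed group -- the commit does not name it.
[DEFAULT]
# Write statsstore.json at most once per minute instead of every 10s.
stats_store_interval = 60
```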
The singleton's max_delta was being modified by test_init_custom_stale_timeout
and not restored, causing test_is_stale_connection_false to fail because
it expected 2 minutes but got 60 seconds.
The APRSISDriver uses @singleton decorator which transforms the class
into a function. The test was incorrectly trying to use __new__ which
doesn't work with decorated singletons. Instead, re-initialize the
existing instance after changing the config.
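A small self-contained demonstration of why `__new__` fails here and what the fix looks like; the `singleton` decorator and `Driver` class below are simplified stand-ins for APRSD's, and `max_delta` is treated as a plain number for brevity.

```python
def singleton(cls):
    # Typical decorator shape: replaces the class with a factory
    # function that always returns the one cached instance.
    instances = {}

    def get_instance(*args, **kwargs):
        if cls not in instances:
            instances[cls] = cls(*args, **kwargs)
        return instances[cls]

    return get_instance

@singleton
class Driver:
    def __init__(self, timeout=120):
        self.max_delta = timeout

drv = Driver()
# Driver is now a function, not a class, so Driver.__new__ is useless:
assert not isinstance(Driver, type)
# Correct approach after changing config: re-initialize the instance.
drv.__init__(timeout=60)
```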
Add a new 'stale_timeout' configuration option to the aprs_network config
group that allows users to customize how long to wait before considering
an APRS-IS connection stale.
Problem:
The stale connection threshold was hardcoded to 2 minutes. In environments
with frequent network hiccups or when using certain APRS-IS servers that
may drop connections silently, 2 minutes can be too long to wait before
reconnecting, resulting in significant data loss.
Solution:
- Add 'stale_timeout' option to aprsd/conf/client.py with default of 120s
- Update APRSISDriver.__init__ to use the config value
- Maintain backward compatibility by defaulting to 120s if not configured
- Update tests to handle the new configuration option
Usage:
[aprs_network]
stale_timeout = 60 # Reconnect after 60 seconds without data
The default remains 120 seconds (2 minutes) for backward compatibility.
Fixes CVE-2026-21441 (8.9 High severity) - decompression-bomb safeguards
of the streaming API were bypassed when HTTP redirects were followed.
Closes #210
- Replace time.sleep(1.5) with thread_list.join_non_daemon(timeout=5.0)
- Remove unused import time since time.sleep is no longer used
- Remove outdated commented-out code
- Improve log message (removed '10 seconds' reference)
- Set self.period=CONF.aprs_registry.frequency_seconds in __init__
- Remove counter-based conditional (loop every N seconds pattern)
- Replace time.sleep(1) with self.wait()
- Remove _loop_cnt tracking (use inherited loop_count from base)
- Remove unused time import
- APRSDRXThread: Replace time.sleep with self.wait for interruptible waits
- APRSDRXThread.stop(): Use _shutdown_event.set() instead of thread_stop
- APRSDRXThread: Error recovery waits check for shutdown signal
- APRSDFilterThread: Use queue timeout with self.period for interruptible wait
- Remove unused time import
- Update tests to use new Event-based API
- Add daemon=True class attribute (subclasses override to False)
- Add period=1 class attribute for wait interval
- Replace thread_stop boolean with _shutdown_event (threading.Event)
- Add wait() method for interruptible sleeps
- Update tests for new Event-based API
BREAKING: thread_stop boolean replaced with _shutdown_event.
Code checking thread.thread_stop directly must use thread._shutdown_event.is_set()
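The new base-thread API can be sketched as follows; attribute and method names follow the commit (`daemon`, `period`, `_shutdown_event`, `wait()`, `loop_count`), but the real APRSDThread base class has more to it than this.

```python
import threading

class APRSDThreadBase(threading.Thread):
    daemon = True   # subclasses may override to False
    period = 1      # seconds between loop iterations

    def __init__(self):
        super().__init__()
        self._shutdown_event = threading.Event()
        self.loop_count = 0

    def wait(self, timeout=None):
        # Interruptible sleep: returns immediately once stop() is called,
        # unlike time.sleep(), which blocks shutdown for the full interval.
        self._shutdown_event.wait(self.period if timeout is None else timeout)

    def stop(self):
        self._shutdown_event.set()

    def run(self):
        while not self._shutdown_event.is_set():
            self.loop_count += 1
            self.loop()
            self.wait()

    def loop(self):
        raise NotImplementedError
```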
The SimpleJSONEncoder didn't handle dataclasses like UnknownPacket,
causing a TypeError when saving stats to disk. Added support for
dataclasses using dataclasses.asdict().
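A minimal sketch of the fix: the encoder's `default()` hook converts dataclass instances via `dataclasses.asdict()` instead of raising TypeError. The `UnknownPacket` shape below is illustrative.

```python
import dataclasses
import json

class SimpleJSONEncoder(json.JSONEncoder):
    def default(self, obj):
        # is_dataclass() is also True for dataclass *types*, so exclude
        # classes and only convert instances.
        if dataclasses.is_dataclass(obj) and not isinstance(obj, type):
            return dataclasses.asdict(obj)
        return super().default(obj)

@dataclasses.dataclass
class UnknownPacket:
    raw: str
    from_call: str = ""
```

Usage: `json.dumps(UnknownPacket(raw="..."), cls=SimpleJSONEncoder)` now serializes cleanly instead of raising.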
- Refactor duplicate plugin discovery code into aprsd/utils/package.py
- Fix inconsistent --profile option in listen.py (now uses common_options)
- Add common_options decorator to completion command for consistency
- Improve healthcheck error message for missing APRSClientStats
- Consolidate signal handler in listen.py to use shared one from main.py
SECURITY FIX: Replace pickle.load() with json.load() to eliminate
remote code execution vulnerability from malicious pickle files.
Changes:
- Update ObjectStoreMixin to use JSON instead of pickle
- Add PacketJSONDecoder to reconstruct Packet objects from JSON
- Change file extension from .p to .json
- Add warning when old pickle files detected
- Add OrderedDict restoration for PacketList
- Update all tests to work with JSON format
Users with existing pickle files must run:
aprsd dev migrate-pickle
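The core of the change can be sketched like this (a simplified stand-in for ObjectStoreMixin, not the actual code): data round-trips as JSON, which cannot execute code on load the way pickle can, and a leftover `.p` pickle file only triggers a warning rather than ever being unpickled.

```python
import json
import logging
import pathlib

LOG = logging.getLogger(__name__)

class ObjectStoreMixin:
    save_file = pathlib.Path("data.json")

    def _save(self, data) -> None:
        # JSON replaces pickle: readable, diffable, and safe to load.
        self.save_file.write_text(json.dumps(data))

    def _load(self):
        old_pickle = self.save_file.with_suffix(".p")
        if old_pickle.exists():
            # Never unpickle -- just tell the user to migrate.
            LOG.warning("Found old pickle file %s; run the migration tool.",
                        old_pickle)
        if self.save_file.exists():
            return json.loads(self.save_file.read_text())
        return None
```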
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Each plugin does so little processing that running plugin dispatch in
threads is overkill for now. The threaded path also caused the help
plugin to respond when it shouldn't.