Commit 8a9e98bc authored by Jake Read's avatar Jake Read

tests to 250 bytes and 0p8 Mbit, linux test next

parent 480ce539
@@ -10,27 +10,16 @@ To start, I spun up ...
![128byte](images/2023-12-12_ingest-histogram-single-source-pck-128.png)
I would want to improve these by relating instantaneous real-data-rates with packet length and time deltas... i.e. for a 128 byte packet, supposing a 1ms interval, we have only `128k Byte / sec` whereas the spec looks for `296k Byte / sec` (for all the data).

![250byte](images/2023-12-12_ingest-histogram-single-source-pck-250.png)
So, we do win some speed when we increase packet sizes, but we have a clear trend around ~ 0.6 -> 0.9 Mbits/s ... which is not awesome; our spec for the data printer requires about 4 Mbits/s of real data rate.

So, not done at all:
- USB packet length is actually 64 bytes, so zero-delimited framing would mean we should see a step size around ~ 60 bytes, not 64 as tested ?
- plot data-rates as well as per-packet delays
- get COBS up to 512, or whatever the underlying USB packet size is ? how does that plot ?
- what about multiple devices ?
- what about flow control ?
- how does it compare on a linux machine ? (the same machine, with linux)
- how about a raspberry pi ?
I think the big point will be that we will want to measure these links in-situ, to do intelligent (feedback!) systems design.
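As a sanity check on the arithmetic above, here's a back-of-envelope sketch (plain Python, illustrative only, function names my own) of the real-data-rate math: 128-byte packets landing roughly once per millisecond only move 128 kByte/s, i.e. about 1 Mbit/s, well short of the ~ 4 Mbit/s the data printer spec wants.

```python
def real_rate(packet_len_bytes: int, interval_s: float) -> float:
    """Payload bytes delivered per second, ignoring framing overhead."""
    return packet_len_bytes / interval_s

def to_mbit_per_s(bytes_per_s: float) -> float:
    """Convert bytes/sec to megabits/sec."""
    return bytes_per_s * 8 / 1e6

r = real_rate(128, 1e-3)    # 128-byte packets, one per millisecond
print(r)                    # 128000.0 bytes/sec
print(to_mbit_per_s(r))     # ~ 1.024 Mbit/s
```

The same function makes the in-situ measurement point concrete: feed it measured packet lengths and inter-arrival times instead of assumed ones.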
## 2023 12 20
So - yeah, I want to keep working through this today, and I'm going to bundle code in here as well.
I wanted to see about improving with nanocobs and maybe packing more data (up to 512 encoded bytes rather than 255), but I don't think that's the bottleneck - instead I suspect that we can try linux instead of windows... and then anyways move on to i.e. ethernet tests or multiple devices / async patterns etc etc etc, so, let's see about linux, running the same code.
...
@@ -7,7 +7,7 @@ import matplotlib.pyplot as plt
ser = CobsUsbSerial("COM23")
stamp_count = 10000
pck_len = 250
stamps = np.zeros(stamp_count)
...
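The analysis that `stamps` feeds into can be sketched like so - this is my reconstruction, not the repo's actual plotting code, with synthetic stamps standing in for the device's `micros()` readings. Note that 250-byte packets arriving every ~ 2.5 ms works out to exactly the ~ 0.8 Mbit/s trend in the histograms:

```python
import numpy as np

pck_len = 250                        # payload bytes per packet, as set above
# synthetic stand-in for the collected stamps: one micros() reading
# per received packet, pretending a packet lands every 2.5 ms
stamps = np.arange(10000) * 2500.0

deltas_us = np.diff(stamps)                            # per-packet delay (us)
rates_mbit = (pck_len * 8) / (deltas_us * 1e-6) / 1e6  # instantaneous real data rate

print(np.median(deltas_us))    # 2500.0 us
print(np.median(rates_mbit))   # ~ 0.8 Mbit/s
```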
@@ -65,7 +65,10 @@ boolean COBSUSBSerial::clearToRead(void){
}

void COBSUSBSerial::send(uint8_t* packet, size_t len){
  // we have a max: we need to stuff into 255,
  // and we have a trailing zero and the first key
  if(len > 253) len = 253;
  // ship that,
  size_t encodedLen = cobsEncode(packet, len, txBuffer);
  // stuff 0 byte,
  txBuffer[encodedLen] = 0;
...
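The 253-byte clamp in `send()` falls out of COBS worst-case overhead: encoding adds one code byte ("key") per run of up to 254 non-zero bytes, so a 253-byte payload encodes to at most 254 bytes, and the trailing zero delimiter brings the frame to exactly 255 - which I'm assuming is the `txBuffer` size. A reference sketch of the encoder in Python (not the firmware's `cobsEncode`, but the same algorithm):

```python
def cobs_encode(data: bytes) -> bytes:
    """COBS-encode: eliminates zero bytes, adding one overhead byte per run."""
    out = bytearray([0])    # placeholder for the first code byte (the first key)
    code_idx, code = 0, 1
    for byte in data:
        if byte == 0:
            out[code_idx] = code        # close this run at the zero
            code_idx, code = len(out), 1
            out.append(0)               # placeholder for the next key
        else:
            out.append(byte)
            code += 1
            if code == 0xFF:            # max run length: flush, start a new run
                out[code_idx] = code
                code_idx, code = len(out), 1
                out.append(0)
    out[code_idx] = code
    return bytes(out)

payload = bytes(range(1, 254))          # 253 non-zero bytes: the worst case
frame = cobs_encode(payload) + b"\x00"  # encoded + trailing zero delimiter
print(len(frame))                       # 255: just fits a 255-byte tx buffer
```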
@@ -40,7 +40,7 @@ void loop() {
  // tx a stamp AFAP
  if(cobs.clearToSend()){
    chunk.u = micros();
    cobs.send(chunk.bytes, 250);
    digitalWrite(PIN_LED_G, !digitalRead(PIN_LED_G));
  }
  // blink to see hangups
......
images/2023-12-12_ingest-histogram-single-source-pck-250.png
