From ba6dd77c216149d663ea095ba4e5d77fcf78501e Mon Sep 17 00:00:00 2001
From: Rasmus Dahlberg
Date: Sun, 26 Mar 2023 17:58:05 +0200
Subject: Add more debug notes

---
 README.md | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/README.md b/README.md
index b28dac4..04aafa9 100644
--- a/README.md
+++ b/README.md
@@ -130,20 +130,20 @@ To get back into a normal state, try:
   used here in a subsequent run. For example, with `-C 60s` and an average of
   100 domains/s, it would be wise to roll-back _at least_ 6000 lines.
 
-Get in touch if you know a fix, e.g., based on `ulimit` and `sysctl`
-tinkering.
-
 More debug notes:
 
   - My system is not fully utilized wrt. CPU/MEM/BW; an odd thing is that it
     seems to work fine to run multiple onion-grab instances as separate
     commands, e.g., 3x `-w 280` to get up to ~225 Mbps utilization (max).
   - Adding a fourth instance gets me into the same problem as documented above.
   - Tinkering with the options in http.Transport doesn't seem to help.
   - Using multiple http.Client instances doesn't help (e.g., one per worker).
   - An odd thing is that after errors, it appears that only DNS is dead. E.g.,
     `curl https://www.rgdd.se` fails while `curl --resolve
     www.rgdd.se:443:213.164.207.87` succeeds. Replacing the system's DNS with a
-    local unbound process doesn't seem to help though.
+    local unbound process doesn't seem to help though. (It appears that no UDP
+    connections are going through.)
+  - Tinkering with the options in `ulimit -a` and `sysctl -a` is probably the
+    right approach, but so far I have not been able to make that work.
 
 ## Contact
--
cgit v1.2.3
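One way to probe the "no UDP connections are going through" theory from the
notes above is to force name resolution over TCP. The sketch below is not part
of onion-grab; it uses Go's built-in resolver with a custom dial function, and
the `1.1.1.1:53` server address and the `www.rgdd.se` lookup are merely
illustrative. If this succeeds while plain lookups fail, UDP is the likely
culprit.

```go
// Sketch (not part of onion-grab): test whether DNS works when UDP is
// bypassed, by dialing the resolver over TCP instead.
package main

import (
	"context"
	"fmt"
	"net"
)

func main() {
	r := &net.Resolver{
		PreferGo: true, // use Go's built-in resolver so Dial below is honored
		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
			var d net.Dialer
			// Ignore the network ("udp") the resolver asked for and
			// connect to an (assumed) DNS server over TCP instead.
			return d.DialContext(ctx, "tcp", "1.1.1.1:53")
		},
	}
	addrs, err := r.LookupHost(context.Background(), "www.rgdd.se")
	fmt.Println(addrs, err)
}
```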
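On the `ulimit -a` track, one knob that is easy to rule out from inside the
process is the open-file limit, since every in-flight connection and DNS query
holds a file descriptor. A minimal sketch, assuming Linux and the standard
syscall package; whether this fixes the stall documented above is untested,
and note that Go 1.19+ already raises the soft limit at process start.

```go
// Sketch: raise the soft RLIMIT_NOFILE to the hard limit at startup.
// This rules out one ulimit-related cause of stalled connections.
package main

import (
	"fmt"
	"syscall"
)

func main() {
	var lim syscall.Rlimit
	if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &lim); err != nil {
		panic(err)
	}
	fmt.Printf("before: soft=%d hard=%d\n", lim.Cur, lim.Max)
	// Bump the soft limit to the hard limit; raising the hard limit
	// itself would require CAP_SYS_RESOURCE (or root).
	lim.Cur = lim.Max
	if err := syscall.Setrlimit(syscall.RLIMIT_NOFILE, &lim); err != nil {
		panic(err)
	}
}
```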