Investigation of Level Sensor Anomalies
As discussed at WMT, I have been investigating the anomalies that were observed in both the Sump and Wendy Butts Pi readings last Tuesday.
Using the newly-built Stage Butts Pi Interface and the 'blue-capped' Hall Effect Sensor, I have been running universal_monitor.py since approximately 15:00 UTC on Wednesday afternoon (there is no SumpPi or Real Time Clock, so the date stamp says it's the 8th).
First of all, I set the magnet at a number of different levels to ensure that I got the expected results. I then placed the magnet at about 580 mm and rebooted the Pi.
Initially, the returned value was '0', as expected. I then placed the magnet over the 600 mm level on the sensor and that value was duly returned. Next I moved the magnet to 800 mm and observed that being read correctly. Finally, I moved the magnet to midway between 775 mm and 800 mm and got a dot, and the sensor has returned dots ever since (see the attached fragment of the current Hall Effect Probe Results file, G6:M0-2019-06-08 15:34:36.csv, in the compressed archive).
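The dots look consistent with an indeterminate reading rather than a software fault. As a hedged sketch of that failure mode (the 25 mm sensor spacing, the function name, and the '.' convention below are all my assumptions for illustration, not the actual universal_monitor.py code):

```python
# Hypothetical decoder for a multi-position Hall effect probe.
# Assumptions (not from the real code): one sensor every 25 mm,
# a reading is the set of triggered sensor indices, and anything
# other than exactly one triggered sensor is recorded as '.'.

LEVELS_MM = list(range(0, 1000, 25))  # assumed sensor positions

def decode_reading(active_sensors):
    """Map the set of triggered sensors to a level string in mm.

    With the float magnet squarely over one sensor, exactly one
    index is active. Midway between two positions it may trigger
    neither (or both), which is indeterminate: record a dot.
    """
    if len(active_sensors) == 1:
        return str(LEVELS_MM[next(iter(active_sensors))])
    return "."

print(decode_reading({24}))   # magnet over the 600 mm sensor: prints 600
print(decode_reading(set()))  # magnet between positions: prints .
```

On that reading, a magnet left between the 775 mm and 800 mm positions would trigger neither sensor cleanly and yield dots indefinitely, which matches what was observed.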
As far as the log is concerned, everything seems normal so far (see the fragment of the log attached in the compressed archive).
I'll report again in due course.
- Attachments
- SSumpPi_Testing.zip (1001 Bytes)
Terry
Re: Investigation of Level Sensor Anomalies
I've just spotted something that is different. When I first set up this test, I ran top and saw that python was consuming very little in the way of resources. I'm powering the setup from my bench PSU, and I also noticed that the current consumption was approximately 460 mA.
This morning I happened to notice that the current consumption had risen to about 560 mA, i.e. roughly 20% higher, so I got to wondering whether the increased power was going to the Pi or to the sensor, and ran top again. I caught this at a moment when the CPU usage was fairly low; python is currently hovering around 98%. Here is what I got:
Code:
top - 14:44:36 up 1 day, 23:11, 2 users, load average: 1.47, 1.41, 1.41
Tasks: 62 total, 1 running, 38 sleeping, 0 stopped, 0 zombie
%Cpu(s): 81.2 us, 18.7 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 443844 total, 254012 free, 34204 used, 155628 buff/cache
KiB Swap: 102396 total, 102396 free, 0 used. 352856 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
357 pi 20 0 52368 14428 5032 S 81.8 3.3 2820:51 python3
15376 pi 20 0 8108 3320 2800 R 13.6 0.7 0:00.08 top
1 root 20 0 9640 5924 4800 S 0.0 1.3 0:43.23 systemd
2 root 20 0 0 0 0 S 0.0 0.0 0:00.01 kthreadd
4 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 kworker/0:0H
6 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 mm_percpu_wq
7 root 20 0 0 0 0 S 0.0 0.0 0:15.63 ksoftirqd/0
8 root 20 0 0 0 0 S 0.0 0.0 0:00.01 kdevtmpfs
9 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 netns
11 root 20 0 0 0 0 S 0.0 0.0 0:00.12 khungtaskd
12 root 20 0 0 0 0 S 0.0 0.0 0:00.00 oom_reaper
13 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 writeback
14 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kcompactd0
15 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 crypto
16 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 kblockd
17 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 watchdogd
18 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 rpciod
19 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 xprtiod
20 root 20 0 0 0 0 I 0.0 0.0 0:02.13 kworker/u2:1
22 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kswapd0
23 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 nfsiod
33 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 kthrotld
34 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 iscsi_eh
35 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 dwc_otg
36 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 DWC Notificatio
37 root 1 -19 0 0 0 S 0.0 0.0 0:00.00 vchiq-slot/0
38 root 1 -19 0 0 0 S 0.0 0.0 0:00.00 vchiq-recy/0
39 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 vchiq-sync/0
40 root 20 0 0 0 0 S 0.0 0.0 0:00.00 vchiq-keep/0
41 root 10 -10 0 0 0 S 0.0 0.0 0:00.00 SMIO
44 root 20 0 0 0 0 S 0.0 0.0 0:14.99 mmcqd/0
45 root 20 0 0 0 0 S 0.0 0.0 0:07.47 jbd2/mmcblk0p2-
46 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 ext4-rsv-conver
47 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 ipv6_addrconf
80 root 20 0 20592 5284 4872 S 0.0 1.2 0:04.57 systemd-journal
100 root 20 0 14336 3212 2624 S 0.0 0.7 0:01.42 systemd-udevd
110 root 20 0 0 0 0 I 0.0 0.0 0:03.39 kworker/u2:2
167 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 kworker/0:1H
175 systemd+ 20 0 17268 3956 3520 S 0.0 0.9 1:13.37 systemd-timesyn
201 root 20 0 7372 4328 3892 S 0.0 1.0 0:01.01 systemd-logind
204 root 20 0 22856 2908 2236 S 0.0 0.7 0:00.68 rsyslogd
205 root 20 0 5284 2412 2204 S 0.0 0.5 0:00.70 cron
208 nobody 20 0 5280 2540 2308 S 0.0 0.6 0:02.85 thd
214 root 20 0 27592 1364 1224 S 0.0 0.3 2:05.05 rngd
225 avahi 20 0 6388 3124 2792 S 0.0 0.7 0:00.91 avahi-daemon
229 message+ 20 0 6492 3324 2916 S 0.0 0.7 0:00.77 dbus-daemon
232 avahi 20 0 6388 1480 1180 S 0.0 0.3 0:00.00 avahi-daemon
236 root 20 0 9988 4076 3696 S 0.0 0.9 0:02.03 wpa_supplicant
242 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 cfg80211
305 root 20 0 2928 1836 1512 S 0.0 0.4 0:01.06 dhcpcd
314 root 20 0 10196 5268 4728 S 0.0 1.2 0:00.12 sshd
322 root 20 0 3952 1996 1868 S 0.0 0.4 0:00.03 agetty
323 root 20 0 5868 2892 2440 S 0.0 0.7 0:00.24 login
330 pi 20 0 9648 5612 4952 S 0.0 1.3 0:00.21 systemd
333 pi 20 0 11304 2864 1628 S 0.0 0.6 0:00.00 (sd-pam)
338 pi 20 0 5852 3776 2768 S 0.0 0.9 0:00.53 bash
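As a back-of-envelope check (my own arithmetic, assuming the usual 5 V supply), the size of the rise points at the Pi's CPU rather than the sensor:

```python
# Back-of-envelope check: does the ~100 mA rise match a CPU
# going from idle to busy? (Assumes a 5 V supply rail.)
supply_v = 5.0
idle_ma, busy_ma = 460, 560  # measured on the bench PSU

extra_w = supply_v * (busy_ma - idle_ma) / 1000
pct_rise = 100 * (busy_ma - idle_ma) / idle_ma
print(f"extra draw: {extra_w:.2f} W ({pct_rise:.0f}% rise)")
# prints: extra draw: 0.50 W (22% rise)
# Half a watt is plausible for an ARM core pinned near 100%,
# whereas a Hall effect sensor draws only a few milliamps.
```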
Any ideas why?
Terry
Re: Investigation of Level Sensor Anomalies
No, I shall have a look. What command are you running to start the software?
Hamish
Re: Investigation of Level Sensor Anomalies
I just let the system boot; the software is started by the commands in .bashrc, e.g. (where "ID" in this case is G6):
Code:
if [ "$(tty)" = "/dev/tty1" ]; then
    cd rivercontrolsystem
    ./universal_monitor.py --id "ID"
fi
Terry
Re: Investigation of Level Sensor Anomalies
I am now running this on my pi 1 to see what happens. I will reply if/when the load goes up.
Hamish
Re: Investigation of Level Sensor Anomalies
I should ask: is this pi running solo, or is it in a network with other pis?
Hamish
Re: Investigation of Level Sensor Anomalies
Good point. It is in a network with a stripped-down SumpPi, which has an A/D converter connected and nothing else.
I just did top on SumpPi and got python at around 20%. Interestingly, SumpPi is showing an uptime of just over 1 day whereas SButtsPi is showing 2 days. I'm not sure how that can be, because I thought I started everything at the same time.
I'll keep monitoring both.
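One way to settle the uptime question is to compute each Pi's boot time directly and compare. A minimal Linux-only sketch (with no RTC, the absolute timestamps are only as good as NTP, but the relative difference between the two Pis is still informative):

```python
# Derive boot time from /proc/uptime (first field is seconds
# since boot) and the current clock. Run on both Pis and compare.
import time

def boot_time():
    with open("/proc/uptime") as f:
        uptime_s = float(f.read().split()[0])  # seconds since boot
    return time.time() - uptime_s

print(time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(boot_time())))
```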
Terry
Re: Investigation of Level Sensor Anomalies
Okay. I will set up for a test here, but without the A2D, seeing as I don't have one.
Are you running the code we currently have deployed at WMT? I will also have a look at the code this afternoon to see what's going on. Kind of a wide search area, seeing as it could be literally anything, but there are a few potential culprits.
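For what it's worth, one classic way a monitor ends up pinning a core at ~98% is a polling loop that has lost (or never had) its sleep. This is purely a sketch of that failure mode, not the actual rivercontrolsystem code:

```python
# Sketch of a suspect pattern: a sensor-polling loop with no sleep
# spins flat-out and consumes ~100% of one core, while the same
# loop with a short sleep idles between readings.
import time

def poll_busy(read_sensor, stop):
    while not stop():
        read_sensor()          # no pause: CPU spins flat-out

def poll_throttled(read_sensor, stop, interval=0.5):
    while not stop():
        read_sensor()
        time.sleep(interval)   # yields the CPU between readings
```

If something in the code path (an exception swallowed in a retry loop, say) bypasses the sleep, the CPU usage and the extra current draw would both creep up exactly as observed.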
Hamish