I have received some messages about installing the PIL library on a Mac, assuming it is a Windows-only library. It is not.
I had never used the Python Imaging Library (PIL) before, but when I designed the 4xiDraw drawing machine I used an Inkscape plug-in that was intended for laser engraving. I hacked it just a bit to make it work with my machine. The plug-in itself is a simple two-file thing you need to copy to your ~/.config/Inkscape/extensions/ folder, but the PIL library may be a bit of a challenge on some systems.
It is important to use version 0.91 of Inkscape, as the plugin will not work with an older version. Other than that, it should work on Windows, OSX and Linux.
What worked for me were the following commands in an open terminal session:
There are multiple ways of learning what IP address your Raspberry Pi is obtaining from a router. The most obvious one is to check the router's DHCP client list. Another is to use an HDMI display, as the RPi will report its IP address while booting.
The former requires administrative access to the router, which may not be possible on certain networks, and the latter is only possible if you have a display available and can connect it to the RPi. What I am going to propose requires no special rights over the network gear nor any additional hardware.
One of the things you can do over a network is to broadcast a message (in fact this is the foundation of the DHCP protocol, allowing a computer to find a suitable DHCP server on the network without previous knowledge of it). Sending a UDP broadcast message allows any other system on the network to hear it. And once that message is received, each receiver immediately knows the sender's IP address.
So here is what I do to perform this periodic broadcast from the RPi:
from socket import *
from time import sleep

s = socket(AF_INET, SOCK_DGRAM)
s.setsockopt(SOL_SOCKET, SO_BROADCAST, 1)
while True:
    # any payload will do; the port just has to match what you listen on
    s.sendto(b'Here I am', ('255.255.255.255', 25555))
    sleep(10)
Any computer on the network can tell the IP address of the sender of this special message, so it will effectively learn it. I tried the netcat command but, while it works, it fails to show the sender's address, so I used tcpdump instead:
sudo tcpdump -i en0 port 25555
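If you would rather avoid tcpdump (and its need for sudo), a few lines of Python on the receiving computer can do the same job. This is just a sketch of mine, assuming the broadcast uses port 25555 as in the tcpdump command above:

```python
from socket import socket, AF_INET, SOCK_DGRAM

def wait_for_rpi(port=25555):
    """Block until one broadcast datagram arrives; return the sender's IP."""
    s = socket(AF_INET, SOCK_DGRAM)
    s.bind(('', port))                       # listen on every interface
    data, (addr, _sport) = s.recvfrom(1024)  # recvfrom also gives the sender
    s.close()
    return addr
```

Calling wait_for_rpi() will sit there until the next broadcast arrives and then return the RPi's address.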
That did it for me. Once I learn my RPi's address on a network, I can go ahead and ssh into it for any maintenance task I want. My main interest here is for nodes that are installed on networks other than my home network.
I am working on a project that required some computing power and commanding an Arduino UNO running GRBL. Things have changed quite a bit from the original plan: because radio reception was awful, the original plan of using dump1090 with a USB dongle had to be ditched. Of course, I only learned that once I had it working nicely on the Raspberry Pi 3.
Plan B was to use the Ethernet network interface. Once that was working, we realized it was not possible on our target installation.
Plan C was to use wifi. And while it is really simple to get it working with stock Jessie, I found a way to waste my time by adding spaces between the variable names and the equals signs in /etc/wpa_supplicant/wpa_supplicant.conf. So now you have been warned.
# Include files from /etc/network/interfaces.d:
iface lo inet loopback
iface eth0 inet manual
iface wlan0 inet dhcp
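For reference, a working network block in wpa_supplicant.conf looks like the sketch below (credentials are placeholders), with, crucially, no spaces around the equals signs:

```text
network={
    ssid="MyNetworkName"
    psk="MyWifiPassword"
}
```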
Once the wifi connection was working and the rest of the project code was up and running, I wanted it all to autostart. However, I always like to be able to peek into each program to see how it is doing.
If I had chosen to start each program off /etc/rc.local, I would have had the problem of not being able to see the terminal output of each program, unless I redirected its output to a file, which uses up disk space, eventually needs to be deleted, and wears out the flash memory.
Instead, I decided to use the screen command to launch each program. And, as usual, it did not work on the first try. Several things are slightly different there compared to a bash command line:
the root user is the one running the show there; if you launch something, it will run under that user (unless you do something about it)
command and file locations need to be specified carefully, as there is no $PATH value to rely on
you can always use su - username -c "command" to launch a program under another user's credentials
any program you launch here needs to end quickly, as you are delaying the boot process; if you need to launch a long-running program, it has to go in the background (use &)
Once I understood all these points I was able to launch my commands using lines like this:
su - pi -c "/usr/bin/screen -d -m /home/pi/mycommand"
And while you do not see the & at the end of the line, it is the -d -m parameter pair of screen that takes care of starting it detached, like a daemon process, without blocking the script flow.
The good thing is that later I can log into the RPi3 and use screen -r to connect to the virtual terminal each program is running on, to see whether the output is right and everything is working as expected. It really works well for me.
While some friends were waiting for their first Printrbot off Kickstarter, I had already built one with the parts Brook Drumm posted on Thingiverse. That was quite a while ago. It was a cute little machine that I later sold to a fellow reprapper. Last year, while I was working on a closed-loop DC motor controller for replacing the steppers of our 3D printers, Brook Drumm offered to help and sent me a free Printrbot Simple Metal, fully assembled, to be used as a testbed printer for that type of motion control. I had a difficult time trying to convince the taxman that the printer was really a gift, and eventually I had to give in and pay some customs taxes, even though it was not a product I was buying and the sender was charging me no money for it.
I was surprised at how compact the thing was and how smoothly and solidly all the axes moved. However, in my case the printer was not intended to be used as a 3D printer, so the first thing I had to do was to partially disassemble a brand new unit and start adapting the other type of motor.
Adapting another motor while keeping the functionality of the printer was a dead end, given my limited mechanical skills and how tightly all the parts fit on this model of printer. I managed to get a motor working on the Y axis, but then I would lose Z-axis functionality, as it was not possible to get both axes working with the type of motor I was using.
In the end, the mechanical limitations left the task sitting on my project table for a long time without any new development. I want to stress that when I mentioned to Brook the problem I was having and my inability to get the test working, he offered to send another model, but I already felt embarrassed enough about not achieving the original goal to go down that path.
So one day I decided it was about time to reassemble the Simple with its original motors and get it working as a regular 3D printer. Once it was up and running, I did a sample print while the printer sat on a stool in my office, and I left it unattended for a coffee break with some colleagues. When I returned, I discovered to my horror that the printer had fallen on the floor due to the carriage accelerations (metal feet on a hard plastic lab-type stool was a deadly combination). To make things worse, the USB cable had almost ripped the micro-USB socket apart. My attempts to fix it did not succeed, so I ended up cutting the micro-USB connector off and soldering the four wires directly (incidentally, I discovered that the schematic on the RepRap site had the D+ and D- wiring reversed).
Luckily, the torn USB connector was the only damage the printer suffered in the fall. Once fixed, it is working, I hope, as good as new, and giving very good prints using PLA. While the output quality is very good, I have to admit it is not as good as the i3 MK2's, but one of the reasons may be that the Simple feels faster (in fact it uses higher default accelerations on the X and Y axes than the Prusa i3), which can account for both the faster printing and the small difference in output quality.
All in all, I am impressed at how well this little printer handles. One minor detail is that while the printer's footprint is small, you need good clearance on the sides and at the back so the X and Y carriages can move freely.
I can understand why this model has so many good reviews; it is just not intended for much tinkering, as all the parts are packed into a really compact space.
My 4xiDraw project has been a source of inspiration for other projects. A while ago I mentioned how to add wireless connectivity to a serial-based device, but for a subject I teach I wanted to get a bit deeper into the details of stepper motor timing generation for a trapezoidal (or any other) speed profile.
While this functionality is implemented in every CNC or 3D printer controller software, most of them are based on the GRBL development, which is efficient but not easy to grasp at first look. There are many different but related algorithms working together there.
Just by chance I bought a Wemos D1 board, which replaces the Arduino UNO's ATmega328 with an ESP8266 but keeps the UNO form factor. It was a weird proposal, but I bought it anyway, as we all know that stamping wifi on anything makes it a better product.
I had used the ESP8266 in the past, through the Arduino IDE, but I had never needed to achieve any realtime operation. But once I checked that a CNC shield board could be used together with the Wemos D1 (and it works ok), I set my mind on replacing the Arduino UNO of my 4xiDraw with its wifi-enabled Wemos sibling.
The good news was that the ESP8266's 32-bit processor and generous flash memory space would allow me to get decent performance without much coding effort on my part. However, I was not so sure about getting good real-time performance on the steppers' step signal.
Timing is everything
Steppers are picky motors. They do not rotate unless the controller keeps sending step signals at the proper pace. Using a fixed rate is the simple alternative, but physics gets in the way and this approach is somewhat limited. So instead of using a fixed speed, a more common approach is to use a variable speed, starting from a low speed, ramping up to a cruise speed, and later decelerating back to a stop.
Given that the speed of a stepper is directly proportional to the rate of the step signal, we need to create a signal whose rate increases linearly and then decreases. But for coding purposes, we need to establish the time period in between the different steps.
Unfortunately, given the inverse nature of frequency and period, a linear increase in frequency (speed) does not translate into a linear decrease of the period. Programmers who fail to grasp this point are in for a big disappointment.
There is a very interesting application note by Atmel that goes into a lot of detail on how to calculate such timer intervals without much computing cost. This is what GRBL, Marlin and Smoothieware do.
But for a 3D printer or CNC machine we are also interested in accurately controlling the position of each move. That means that for each basic movement, a straight line is drawn from an initial point to a destination point in a multidimensional space. A certain distance to be covered along an axis translates immediately into a given number of steps. It is that total distance that will be traversed using a so-called trapezoidal speed profile, which smooths the acceleration and deceleration phases so that, hopefully, motion happens without missing any steps. This helps the motors reach much higher speeds than would be possible using only a fixed speed, which contributes to lower 3D printing or machining times too.
For a stepper motor, zero speed corresponds to an infinite period in between steps. We will instead consider a non-zero initial speed, which gives us a value T0. This value will decrease at each step while accelerating, remain the same while the motor cruises, and then start increasing again to slow down the stepper until it stops when no more pulses are provided to the step signal of the motor driver.
The time to cover the distance of the first step can be formulated as T0 = sqrt( 2 / accel ), and the period of step n can be expressed as Tn = T0 * ( sqrt( n + 1 ) - sqrt( n ) ). So we could use that to calculate the time interval until each next step. Unfortunately, a couple of square roots take time to calculate, even more so on an 8-bit processor.
Luckily, a series expansion of the expression above gives us a simpler relationship that can be calculated cheaply and iteratively: Tn = Tn-1 - 2 * Tn-1 / ( 4 * n + 1 )
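To see the recurrence in action, here is a minimal Python sketch (the names are mine, and this is not the actual firmware code). It also applies the 0.676 correction factor that, if I recall the Atmel note correctly, compensates for the approximation error of the very first steps:

```python
import math

def accel_intervals(accel, n_steps):
    """Inter-step delays (seconds) for a constant-acceleration ramp,
    computed iteratively: Tn = Tn-1 - 2*Tn-1 / (4*n + 1)."""
    t = 0.676 * math.sqrt(2.0 / accel)  # corrected T0, first step period
    delays = [t]
    for n in range(1, n_steps):
        t -= 2.0 * t / (4 * n + 1)      # no square roots needed per step
        delays.append(t)
    return delays
```

For the deceleration phase, the same list can simply be replayed in reverse.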
At each step there is a slight speed increase, so when the desired maximum speed is reached, no more increases happen. That last Tn value is kept as the period of all the steps while cruising, until the deceleration phase starts, which can use the same sequence of numbers (1..n), but now as negative numbers going from -n to -1.
What is left is to determine the number of pulses for the acceleration and deceleration phases. For simplicity I considered the same acceleration for both, so they need the same number of steps. The remaining steps, if any, will be traveled at the maximum speed. Please note that for short movements it may not be possible to reach the desired maximum speed (feedrate), so half of the move will be spent accelerating and half decelerating, to and from a speed lower than the maximum.
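The step-count bookkeeping described above can be sketched in a few lines of Python (again with my own names, assuming speeds in steps/s and acceleration in steps/s^2):

```python
def plan_move(total_steps, accel, max_speed):
    """Split a move into (accelerate, cruise, decelerate) step counts."""
    # steps needed to reach max_speed from rest: v^2 = 2 * a * n
    n_accel = int(max_speed ** 2 / (2.0 * accel))
    if 2 * n_accel > total_steps:
        # short move: triangular profile, max_speed is never reached
        n_accel = total_steps // 2
    return n_accel, total_steps - 2 * n_accel, n_accel
```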
Timer0 on ESP
One thing I had not used before was a timer on the ESP. I assumed the Servo library would use one, but I did not dig into the details. However, now I wanted to make sure the timing I was carefully calculating for each step would not be disturbed by other tasks the processor might get into when communicating wirelessly.
My plan was to schedule each new step with the help of a timer that causes an interrupt at the right time for the next step. Being interrupt-driven should help get the timing right.
Once configured, a call to timer0_write(ESP.getCycleCount() + 80 * microseconds) will schedule a new interrupt that number of microseconds from now (the ESP8266 runs at 80 MHz, so 80 cycles per microsecond). The interrupt code will calculate and schedule the time of the next interrupt, plus it will perform the motion on any of the steppers, detecting the end of the acceleration, the beginning of the deceleration, or the end of the move.
However, I have found I cannot stop timer0 from running without getting a watchdog reset afterwards, so depending on whether or not there are more steps to be processed, I either run the motion-related code or schedule a dummy new interrupt every 10 milliseconds, just to keep the ball rolling and stop the watchdog from complaining. Maybe there is a better way, but I settled for what worked first.
I still needed to get a servo working, so the obvious choice was to use the Servo library, but for reasons unknown, it won't work. Not even when defining SERVO_EXCLUDE_TIMER0 to prevent it from using timer0. But that is not really a big deal, as I can use some time in the main loop to create a pulse of the desired width (1.5 .. 2 msec) and refresh it every 20 msec or so. And so I did, and it seems to be working nicely, as shown in the sample video below:
The motion was being driven by g-code created on my desktop computer with this line: (while true; do X=$((RANDOM % 100)); Y=$((RANDOM % 100)); Z=$((RANDOM % 1000 + 1000)); echo "M3S$Z"; echo "G1 X$X Y$Y"; sleep 2; done ) | nc -u 192.168.4.1 9999
You can get the project source code and some extra details, like a logic-analyzer trace, from here. Almost forgot: I have based my project on Dan's code from MarginalClever.com
In the past I tried a Bluetooth link for sending g-code wirelessly to a 3D printer. It worked ok, but it seemed a bit slow, so eventually small stops happened while printing (the buffer emptied). Wifi was an expensive option at the time, so I forgot about it.
Recently, the availability of the excellent ESP-link firmware together with NodeMCU/ESP12E boards for less than $5 painted a different scenario, and while I was not in immediate need of it, I decided to give it a try during my summer holidays.
That firmware could be used with smaller and cheaper ESP8266 boards, but I have found it much more convenient to use the so-called NodeMCU boards (as they include their own voltage regulator), for just $1 more or so. These boards pack a 32-bit SoC with 4 Mbytes of flash and, lately, they are even supported by the Arduino IDE.
In order to keep the printer usable over USB when connected to a computer, I patched Marlin so I could use an additional serial port for the wifi connection. The problem was that I was already using Serial2 for another purpose, so I added code for simultaneously handling Serial3. Luckily, the modification by TerawattIndustries showed how to add an additional serial port to be used for a Bluetooth module. I had used that in the past to add an additional serial port for some new g-code commands over an RS-485 link. This time I repeated the process with a twist, so now g-code is read from both Serial1 and Serial3, and responses are sent back to both ports too. This way, no matter whether USB or wifi is the source of the g-code, the printer works transparently.
Please note that the ESP chip works at 3.3 volts while the Arduino Mega works at 5V, so you do not want to connect an Arduino output directly to an ESP input, as it can be destroyed by the excessive voltage. The opposite poses no risk (applying 3.3 volts to an Arduino input is not a problem, and it will be detected as a high level). You can see in the picture above the circuit and the two data connections (GND is already shared if both boards are USB-powered by the same computer or power adapter). A simple 1N4148 (or similar) diode will be ok (as long as the RX input pin pull-up resistor is activated in the ESP chip).
In order not to mess with Marlin, I chose to use the alternate port configuration (RX2/TX2) on the NodeMCU, so no boot-up strings are sent to the printer while the wifi adapter is booting.
ESP-link configuration is web-based, and I am pleasantly surprised at how well thought out it is (the fact that the firmware tells you the new IP of the board once it has logged into another wifi network is just genius!).
Once you know the IP address of the wifi adapter (which is now connected to Marlin's Serial3 port), you can send g-code to it easily. Port 23 is the one used by default, but sending data cannot be done with command-line tools like netcat, as we need some flow control (i.e. not sending a new command while the previous one is not yet done). For each successful command, Marlin sends back an "ok" response. So I wrote a small program to send data to my wifi 3D printer.
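My program is not reproduced here, but a minimal Python sketch of the idea (hypothetical names, assuming ESP-link's default port 23 and Marlin's "ok" responses as described above) could look like this:

```python
import socket

def send_gcode(host, path, port=23):
    """Stream a g-code file, waiting for Marlin's 'ok' after each line
    so a new command is never sent before the previous one is done."""
    with socket.create_connection((host, port)) as s, open(path) as f:
        rx = s.makefile('r')
        for line in f:
            line = line.split(';')[0].strip()   # drop comments and blanks
            if not line:
                continue
            s.sendall((line + '\n').encode())
            while True:                         # wait for the 'ok' ack
                resp = rx.readline()
                if not resp or 'ok' in resp:
                    break
```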
Now I can choose to use the USB port or send data over wifi. More freedom to locate the printer, not necessarily tied to a USB port.
UPDATE: I later found out that modern versions of Pronterface will accept an ip_address:port in the serial port selector, and it will then work using a socket connection instead of a serial port. So there is no need to use another program for that :-)
I was recently asked by a friend how certain P2P wireless cameras can be accessed from a cellphone with no router configuration. I had no idea about those cameras or their so-called P2P thing, whatever that was, that tricked your home router so your camera could be accessed using a mobile app.
Of course, if both the wifi camera and the cellphone belong to the same LAN there is a simple answer, but when they belong to different networks and there are one or more routers in between, things get murkier, especially when one or more of those routers are broadband routers (marketing-talk for NAT boxes).
The problem of reaching one host on the Internet from another is twofold:
to figure out its IP address
to be able to connect to it (this is where firewalls may be a problem for your communication)
However, if a device is connected to a home network with Internet access, it is most likely served by one of these broadband routers, which will block any connection attempt coming from the Internet to any device on the home network, effectively making it impossible for anyone on the Internet, good or bad, to access devices in your home network.
Of course, there are ways to overcome this limitation, like virtual servers (port forwarding) that expose certain computers on the home network to the Internet. But using such a feature requires configuration changes on the home router. Sometimes you cannot do that, or do not know how, so extra help might be needed, and if that help comes in human form it may be costly. So manufacturers (Microsoft?) created the Universal Plug-and-Play protocol (UPnP), which allows your computer to do the job of changing the router configuration for you: cheaper, but riskier. Because of that, many broadband routers do not enable UPnP by default (or do not support it at all).
The tricky part of discovering how on earth this mobile app was able to contact the P2P camera required me to install one of these cameras at home and capture the network traffic caused by a remote access from my cellphone (with wifi disabled, so I could be certain it was in fact a remote access happening through the Internet).
I have been using the Wireshark software for quite a while, and the fact that I know it used to be called Ethereal can give you an idea of how long that while might be. Anyway, Wireshark is open-source software that can capture network traffic in real time for later analysis.
My home network uses WPA2/AES encryption with a pre-shared key (PSK) so you might think that because my computer knows the wifi password, I could capture all wifi traffic on my network. And yes, I could do that, but no, it is not that simple.
WPA(2) protects mobile devices' traffic using different keys for different devices on the same network. So even if my computer can capture the encrypted network traffic, it cannot decode it, even if I provide the wifi password, because each mobile device uses a different session key (derived from a master key, in turn derived from the wifi password).
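That derivation chain is public knowledge: WPA2-PSK computes a pairwise master key (PMK) from the password and the SSID, and the per-device session keys are then derived from it during the handshake. A quick Python illustration of the first step (the function name is mine):

```python
import hashlib

def wpa2_pmk(passphrase, ssid):
    """WPA2-PSK master key: PBKDF2-HMAC-SHA1 of the passphrase, salted
    with the SSID, 4096 iterations, 256-bit output (IEEE 802.11i)."""
    return hashlib.pbkdf2_hmac('sha1', passphrase.encode(), ssid.encode(),
                               4096, 32)
```

This is why Wireshark needs both the password and the SSID to decrypt anything, and why it still has to see each device's handshake to obtain the per-device session keys.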
But two details will make everything come together:
you need to capture traffic using monitor mode (which captures not only data frames but also all the 802.11 control frames that are usually invisible to user software)
you need to make sure all mobile devices whose traffic you want to decode perform a wireless association (the EAPOL handshake) during the traffic capture (this way the software can learn the session key each one is using, as it is exchanged between the mobile terminal and the router at the beginning of each association)
Ok, so once you have done all that, you look at the captured traffic and you feel I was kidding, because it still looks as encrypted as before (but now with many weird 802.11 control frames too).
Decoding the traffic does not happen while you are capturing data, but later. You have to let Wireshark know the wifi password; for that you go to Edit/Preferences/Protocols/IEEE 802.11 and add your wifi password and SSID. In older versions, both password and SSID are input in the same textbox, separated by a colon (as in the image below).
Ok, then ... why is it not yet decrypted? If your capture is not yet decrypted, press Ctrl+R for the program to reload the data from the internal buffer; this time, hopefully, you will see the decrypted traffic.
Unfortunately, while I succeeded in eavesdropping on multiple devices inside my wifi network, I realized that the camera was using an unknown encrypted protocol that would connect the camera to a server in China (using UDP, so maybe connect is not the best word here). Next, the camera would connect to other hosts on the Internet (my guess is that these are other similar cameras, hence the P2P name).
The mobile application on the cellphone starts by connecting to the server, and from there it connects to the camera. The "connection" (again using UDP) to the camera works because the camera punches a hole through the broadband router's NAT table (instructed, I guess, by the server that coordinates them both).
I contacted the makers of the Blue Iris PC software for IP cameras, asking if they supported such a protocol, and they do not. So my guess is that getting a similar feature on a PC, with more powerful software, is not going to be an easy task (given that manufacturers give no details about how their protocol works).