Fighting with computers

Computers are not always friendly.

Friday, August 26, 2016

G-code over wifi

In the past I tried a Bluetooth link for sending g-code wirelessly to a 3D printer. It worked ok, but it seemed a bit slow, so small pauses eventually happened while printing (the buffer emptied). Wifi was an expensive option at the time, so I forgot about it.

Recently, the availability of the excellent ESP-link firmware, together with NodeMCU/ESP12E boards for less than $5, painted a different scenario, and while I was not in immediate need of it I decided to give it a try during my summer holidays.

That firmware can be used with smaller and cheaper ESP8266 boards, but I have found it much more convenient to use the so-called NodeMCU boards (as they include their own voltage regulator), at just $1 or so more. These boards pack a 32-bit SoC with 4 Mbytes of flash and, lately, they are even supported by the Arduino IDE.

In order to keep the printer usable over USB from a computer, I patched Marlin so I could use an additional serial port for the wifi connection. The problem was that I was already using Serial2 for another purpose, so I added code for handling Serial3 at the same time. Luckily, the modification by TerawattIndustries showed how to add an extra serial port for a Bluetooth module. I had used that in the past to add a serial port for some new G-code commands over an RS-485 link. This time I repeated the process with a twist, so now g-code is read from both Serial1 and Serial3 and responses are sent back to both ports too. This way the printer works transparently no matter whether the g-code comes in over USB or wifi.


Please note that the ESP chip works at 3.3 volts while the Arduino Mega works at 5V, so you do not want to connect an Arduino output directly to an ESP input, as the input can be destroyed by the excessive voltage. The opposite direction poses no risk (applying 3.3 volts to an Arduino input is not a problem and it will be detected as a high level). You can see in the picture above the circuit and the two data connections (GND is connected if both boards are USB-powered by the same computer or power adapter). A simple 1N4148 (or similar) diode will be ok, as long as the pull-up resistor on the ESP's RX input pin is enabled.

In order not to mess with Marlin, I chose to use the alternate port configuration (RX2/TX2) on the NodeMCU, so no boot-up strings are sent to Marlin while the wifi adapter is starting up.

ESP-link configuration is web based and I am pleasantly surprised by how well thought out it is (the fact that the firmware tells you the new IP address of the board once it has joined another wifi network is just genius!).

Once you know the IP address of the wifi adapter (which is now connected to Marlin's Serial3 port) you can send g-code to it easily. Port 23 is the one used by default, but sending data cannot be done with command-line tools like netcat, as we need some flow control (i.e. not sending a new command while the previous one is not yet done). For each successful command, Marlin sends back an "ok" response. So I wrote a small program to send data to my wifi 3D printer.
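The original program is not listed in the post, but the idea is simple enough to sketch: open a TCP connection to the ESP-link port, send one g-code line at a time and wait for Marlin's "ok" before sending the next one. A minimal Python sketch along those lines (the printer IP is just an example, yours will differ):

    #!/usr/bin/env python3
    # Minimal sketch (not the original program): stream a g-code file over TCP,
    # waiting for Marlin's "ok" before sending the next command.
    import socket
    import sys

    HOST = "192.168.1.50"   # example address of the ESP-link adapter
    PORT = 23               # ESP-link default TCP port

    def send_gcode(filename):
        with socket.create_connection((HOST, PORT)) as s, open(filename) as f:
            rx = s.makefile("r")                   # read Marlin replies line by line
            for line in f:
                line = line.split(";")[0].strip()  # drop comments and empty lines
                if not line:
                    continue
                s.sendall((line + "\n").encode("ascii"))
                # Simple flow control: block until a reply starting with "ok" arrives
                while True:
                    reply = rx.readline()
                    if not reply or reply.strip().startswith("ok"):
                        break

    if __name__ == "__main__":
        send_gcode(sys.argv[1])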


 Now I can choose to use the USB port or send data over wifi. More freedom to place the printer, no longer tied to a USB port.

UPDATE: I later found out that modern versions of Pronterface accept an ip_address:port in the serial port selector and will then work over a socket connection instead of a serial port. So there is no need for another program to do that :-)

Monday, August 08, 2016

Eavesdropping your own wifi network

I was recently asked by a friend how certain P2P wireless cameras can be accessed from a cellphone with no router configuration. I had no idea about those cameras or their so-called P2P thing, whatever it was, that tricked your home router so your camera could be accessed from a mobile app.

Of course if both the wifi camera and the cellphone belong to the same LAN there is a simple answer, but when they belong to different networks and there are one or more routers in between, things get murkier, especially when one or more of these routers are broadband routers (marketing-talk for NAT boxes).

The problem of reaching one host on the Internet from another is:

  1. to figure out its IP address
  2. to be able to connect to it (this is where firewalls may be a problem for your communication)
However, if a device is connected to a home network with Internet access, it is most likely served by one of these broadband routers, which will block any connection attempt coming from the Internet to any device in the home network, effectively making it impossible for users on the Internet, good or bad, to access devices on your home network.

Of course, there are ways to overcome this limitation with virtual servers (port forwarding) that expose certain computers on the home network so they can be accessed from the Internet. But using such a feature requires configuration changes on the home router. Sometimes you cannot do that or do not know how to do it, so extra help might be needed. If that help comes in human form it may be costly. So manufacturers (Microsoft?) created the Universal Plug and Play protocol (UPnP), which allows your computer to do the job of changing the router configuration for you: cheaper, but riskier. Because of that many broadband routers do not enable UPnP by default (or do not even support it).

The tricky part of discovering how on earth this mobile app was able to contact the P2P camera required me to install one of these cameras at home and to capture the network traffic caused by a remote access from my cellphone (with its wifi disabled, so I could be certain the access was in fact happening through the Internet).

I have been using the Wireshark software for quite a while, and the fact that I know it used to be called Ethereal can give you an idea of how long that while might be. Anyway, Wireshark is open-source software that can capture network traffic in real time for later analysis.

My home network uses WPA2/AES encryption with a pre-shared key (PSK) so you might think that because my computer knows the wifi password, I could capture all wifi traffic on my network. And yes, I could do that, but no, it is not that simple.

WPA(2) protects mobile devices' traffic using different keys for different devices on the same network. So even if my computer can capture encrypted network traffic, it cannot decode it even if I provide the wifi password, because each mobile device uses a different session key (derived from a master key, which is in turn derived from the wifi password).
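To illustrate that derivation chain: in WPA2-PSK the master key (PMK) is computed from the passphrase and the SSID with PBKDF2, and the per-device session keys are then derived from the PMK plus the nonces exchanged during the EAPOL handshake, which is why the capture needs to include that handshake. A small Python sketch of the first step (SSID and passphrase are of course just examples):

    # WPA2-PSK master key (PMK) derivation: PBKDF2-HMAC-SHA1 over the passphrase,
    # salted with the SSID, 4096 iterations, 256-bit output. The per-device
    # session keys (PTK) are derived later from this PMK plus the EAPOL nonces.
    import hashlib

    ssid = "MyHomeNetwork"        # example SSID
    passphrase = "wifi-password"  # example passphrase

    pmk = hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)
    print(pmk.hex())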

But two details will make everything come together: 
  1. you need to capture traffic in monitor mode (which captures not only data frames but also all the 802.11 control frames that are usually invisible to user software)
  2. you need to make sure all the mobile devices whose traffic you want to decode perform a wireless association (EAPOL handshake) during the capture (this way the software can learn the session key each one is using, as it is exchanged between the mobile device and the router at the beginning of each association).
Ok, so once you have done all that you look at the captured traffic and you may feel I was kidding, because it still looks as encrypted as before (but now there are many weird 802.11 control frames too).

Decoding the traffic does not happen while you are capturing data but later. You have to let Wireshark know the wifi password, and for that you have to go to Edit/Preferences/Protocols/IEEE 802.11 and add your wifi password and SSID. In older versions both password and SSID are entered in the same textbox, separated by a colon (like in the image below).


Ok, then ... why is it not yet decrypted? Well, if your capture is not yet decrypted, press Ctrl+R so the program reloads the data from the internal buffer; this time, hopefully, you will see the decrypted traffic.

Unfortunately, while I succeeded in eavesdropping on multiple devices inside my wifi network, I realized that the camera was using an unknown encrypted protocol to connect to a server in China (using UDP, so maybe connect is not the best word here). Next the camera would talk to other hosts on the Internet (my guess is these are other similar cameras, hence the P2P name).

The mobile application on the cellphone starts by connecting to the server and from there it connects to the camera. The "connection" (again using UDP) to the camera works because the camera punches a hole through the broadband router's NAT table (I guess instructed by the server that coordinates them both).
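The camera's own protocol is undocumented, but the hole-punching trick itself is easy to sketch: once each peer learns the other's public IP and port from the rendezvous server, it sends outbound UDP datagrams to that address; the outgoing packet creates a mapping in its own NAT, so the other side's packets are then let back in. A minimal, hypothetical illustration of one peer in Python (addresses are made up):

    # Illustration of UDP hole punching (not the camera's actual protocol).
    # Sending to the peer's public address opens a mapping in this side's NAT
    # so the peer's datagrams can come back through it.
    import socket

    PEER = ("203.0.113.10", 40000)   # hypothetical public address of the other peer

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 40001))    # local port; the NAT maps it to some public port
    sock.settimeout(2.0)

    for _ in range(5):
        sock.sendto(b"punch", PEER)  # outbound packet creates/refreshes the NAT mapping
        try:
            data, addr = sock.recvfrom(1500)
            print("got", data, "from", addr)
            break
        except socket.timeout:
            continue                 # keep punching until the other side gets through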

I contacted the makers of the Blue Iris PC software for IP cameras asking whether they supported such a protocol, and they do not. So my guess is that getting a similar feature on a PC with more powerful software is not going to be an easy task (given that manufacturers give no details about how the protocol they created works).



Friday, July 29, 2016

Building a Prusa i3 MK2

I have built (or helped others build) quite a few Prusa i3 printers, from sets I sourced myself, including the self-printed parts, to commercial kits from bq or Josef Prusa himself. But when I saw the latest i3 version I was surprised by the ingenuity of some of its solutions.

Having used kits from Prusa3D before, I knew they left no detail unattended, so I could understand them charging more than others. We are very happy with the i3 we built from a kit, so next time we needed to get some printers I had to decide between what I reckon are two good choices: bq's Hephestos 2 or the Prusa i3 MK2. The H2 has a larger bed but it does not have a heated bed. The MK2 can handle more materials and can print hotter than the H2, so we settled on that.

The kit comes in a box similar to the cardboard box of a mini-tower PC. There are different smaller boxes and plastic bags inside with the assorted components.

 And it comes with its own set of tools (not the red box but the other tools).
 Motors come well protected, as some of them now have a long threaded shaft, plus each motor is identified with the axis name.
 Plus a bag with all the printed parts, any color you want as long as it is orange (there are a few black parts too).
 The power supply comes pre-wired and protected by a plastic part that holds a power switch and a power socket.
 Now let's begin the build. The kit comes with a full-color manual with pictures and explanations, but you might want to have a computer nearby so you can zoom in whenever you need a better look at a picture (my sight could be better). Steps are numbered and there is a bag of metal parts and a bag of plastic parts for each step. Just follow the manual and you will be ok.
 Little by little some differences start to appear, and you may even panic. Like when it seems there is something wrong with the new y-axis belt holder, whose screws apparently go through an untapped hole in the x-shaped y-carriage part. And this is when Prusa3D plays what I think is one of their better assets: they have a chat applet on their website you can use to get support in real time. So in case of doubt you can contact them, and they will help you realize there was nothing wrong and that you just missed some detail, because the MK2 does some things a bit differently (there are no tapped holes anymore on the y-carriage, in case you are wondering).
 Another thing that is different is that now the z-axis motors come with a built-in threaded shaft.
 The x-axis is business as usual, but now it includes room for an end-switch, and later you will need to add the somewhat tricky z-axis nuts.
 Just follow the suggested sequence and your machine will be taking shape.
 Didn't I mention candy is part of the kit? What is unclear is the best step at which to use it, as the manual does not mention it, nor does the bag have a number attached. Anyway, it is a nice touch too, one that might even please any little ones you have around.
 So after three hours of work we were like this.
And one hour later our build was finished. Our biggest mistake was mounting the y-carriage the wrong way, so later we could not attach the heated bed to it. One shameful chat later, we realized what we had done wrong, fixed it, and the build was done.

However, it took us one more hour to set up the machine. Making sure everything was square was easy. One thing the kit does not include but you will need is a ruler, at least 100 mm long. Another thing that is useful to have around is a wire cutter for trimming the many zip-ties you will use. We used the pliers from the kit, but those leave a longish piece of material.

Our first print was a PLA Batman that failed almost at the end, as the model included no heating of the bed; I do not know why.

All in all, I am impressed with the kit.

Friday, July 22, 2016

Useful uses of screen command

Every now and then I use command-line tools. I work daily with OSX and Linux, and they both have in common the availability of a powerful command line.

The same could be said about Windows, but that would be an overstatement, as CMD.EXE does not provide the level of efficiency that can be achieved on the other systems. But even if it could, they chose to make it different.

Anyway, many times I am working from a remote terminal on somebody else's computer with command-line tools, and one thing that is not welcome is for a program to destroy your temporary data or to just stop working whenever the connection is broken.

If you are using a so-called broadband router you may notice that some remote terminal sessions die for no good reason. (The real reason is that, after a few minutes without seeing any traffic on a TCP connection, your home router will kill the connection without you knowing it.) Let's say you are editing a text file on a remote computer through an ssh connection when you get a telephone call that keeps you away from the computer for a few minutes and, when you are back, your terminal session has died with an error message. It might mean the changes you made to that file are lost forever. That would be a bad thing.

There is one command that can help here, keeping the text editor and your session frozen but alive while your ssh connection is summarily destroyed by your broadband router. The screen command allows you to create a terminal that does not go away when the connection is broken. A terminal session you can safely return to later.

Another use case I face from time to time is that I want to launch a program, maybe a long simulation, on a remote computer. If I do not keep the terminal open all the time, the program running on the remote server will be killed by the system. But even if I decide to keep my computer connected, the connection may still be killed by (you guessed it ...) your home router. And the worst thing is that the next morning you will not have the results of the simulation and you will need to start from scratch.

Once again, you can connect to the server and start a screen session before starting the simulation; this way you can cancel the remote terminal session at any time, confident that your simulation will keep running to the end. Next morning you can connect again to see the results and, if desired, finish that screen session.

Even better, screen is not limited to one terminal per user: you can have as many as you need. And switching from one to the next is as simple as pressing Ctrl+A and then N.

 Yet another scenario is when you launch a program on your office computer (let's say you love simulations and it is just another one). Now that you are home you would like to check the intermediate data the program is printing to the screen, but you cannot do that (unless you have some remote desktop software running on your computer). However, if you started a screen session before launching your program, then that session can be detached from the original terminal and attached to the new one with the screen -D -r command.

It is a really interesting tool with only one drawback: you lose your terminal's scroll mechanism. So when you attach to a certain screen instance, you can see the contents of the current screen but you cannot scroll back to see the lines that were printed before. Other than that, it is pretty useful.

Tuesday, June 28, 2016

Painless transition to El Capitán

My aging desktop computer is a 2011 iMac. When I bought it I loved the concept, which would allow me a clean desktop. Truth be told, and not the iMac's fault, my desktop is almost always a mess despite the computer's form factor.

Since I upgraded it to Snow Leopard (mostly for the need to use a newer version of Java) I have known about a SMART error on the hard drive. Once I started to feel the pressure of certain application binaries not running because my system libraries were too old, I wanted to upgrade the system but I could not. The OSX installer checks the hard disk and refuses to upgrade if it finds it defective.

Whatever problem my 1TB drive is suffering from, it has not killed it in more than two years. And the iMac being as DIY-unfriendly as it is, I keep delaying the hard disk replacement. A few months ago I found a spare USB hard disk at home and I used it to install Mavericks (yeah, I am in no hurry to get the next memory-hog upgrade). It all worked nicely while I kept using the internal hard disk too. But one USB port less, plus another wall wart, left me a bit low on available power sockets.

A few days ago I saw a very good offer for a 240GB SSD drive and I bit the bullet. Combined with an ElCheapo USB-SATA adapter it was a nice deal. Maybe it is not a top-of-the-line speed demon, but it copies a gigabyte and a half in less than a minute.

I used an old MacBook Pro to download and install El Capitán on the SSD drive. I like being able to use a USB drive as the system disk, a feature I have only seen on Macs, though it might be available on some modern PC motherboards.

But the beauty of it is that I brought the drive home and then used it to boot up my MacBook Air flawlessly too. That was not the final stop, though; I just used it to customize the install, adding things like Arduino or Chrome. And now, after plugging it into the iMac and booting from it (OSX uses the Cmd key press while booting up to go to boot drive selection) I am writing this entry, finally, on the iMac. Of course nothing special was needed to use the wireless keyboard or mouse that were needed for boot selection and typing the user password. Definitely a much better experience than if I were dealing with another operating system.

And for those Arduino users that, like me, still bitch about the weirdness of the Windows 8.1 Arduino IDE install (having to enable non-signed drivers), nothing of that happens here. I even found a signed driver for the CH34x USB serial chips found on many Chinese boards. Maybe I will upgrade other systems if the experience continues to be positive. I still need to figure out how to get my pictures and music back.

Saturday, June 25, 2016

On placing a tag on an area

The common approach I have used in the past for locating a tag on a given 2D shape has been to use the centroid location. For convex parts this is a very good solution. However, when the shape is not convex, the centroid may be outside of the shape surface.



Whenever the tags are intended to identify a shape, it might be a problem if the label falls outside of the shape, even more so when multiple shapes are packed together, as the user may not be able to tell which label belongs to which part.

One idea for fixing that is to make sure the tag location is always inside the part, and for that purpose I have evolved through four different algorithms, trying to find the best result.

Algorithm 1

If the centroid is within the shape area, then just use that. When it is outside (concave shape), a horizontal sweep is done in 10% increments, at the centroid height, looking for a spot within the shape area. If one is not found, then the same approach is repeated with a vertical sweep at the centroid width. It appears as a black box in the video.
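A rough sketch of that sweep, written here with shapely (my library choice; the original code is not shown in this post):

    # Sketch of Algorithm 1: use the centroid if it is inside the shape,
    # otherwise sweep horizontally, then vertically, in 10% increments.
    from shapely.geometry import Point, Polygon

    def tag_location(poly: Polygon) -> Point:
        c = poly.centroid
        if poly.contains(c):
            return c
        minx, miny, maxx, maxy = poly.bounds
        # horizontal sweep at the centroid height
        for i in range(11):
            p = Point(minx + (maxx - minx) * i / 10.0, c.y)
            if poly.contains(p):
                return p
        # vertical sweep at the centroid width
        for i in range(11):
            p = Point(c.x, miny + (maxy - miny) * i / 10.0)
            if poly.contains(p):
                return p
        return c  # give up and use the centroid anyway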

Algorithm 2

At the centroid height, one horizontal line is traced and the shape is explored for the longest intersection with this line. The middle point of that segment is then used to perform a similar sweep, but this time vertically. The tag location will be at the middle point of the longest vertical intersection. It appears in blue in the video.

Algorithm 3

Similar to Algorithm 2, but adding a second horizontal sweep to get a better-centered result. It appears as a pink box in the video.

Algorithm 4

It follows a topological approach, looking for the point that is furthest from the shape perimeter. To do so, the shape is painted as a bitmap and a dilate operation is applied repeatedly until the last pixels are removed from the image. The location of that last pixel is the desired tag location. It appears in red in the video.
Usually the black box hides the centroid, which appears as a small circle, but in a few cases it can be seen, as the black box has moved away from the centroid.
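The same peel-away idea can be sketched with numpy and scipy (my library choice, not necessarily what the original code uses); the last surviving pixel is also the maximum of the distance transform, which gives a faster shortcut:

    # Sketch of Algorithm 4: rasterize the shape and peel it away with repeated
    # morphological passes; the last pixel standing is furthest from the perimeter.
    import numpy as np
    from scipy import ndimage

    def innermost_pixel(mask):
        # mask: 2D boolean array, True inside the shape
        last = None
        current = mask.copy()
        while current.any():
            ys, xs = np.nonzero(current)
            last = (int(xs[0]), int(ys[0]))      # remember a surviving pixel
            current = ndimage.binary_erosion(current)
        return last                              # (x, y) in pixel coordinates

    # Shortcut: the same point is the argmax of the Euclidean distance transform.
    def innermost_pixel_fast(mask):
        dist = ndimage.distance_transform_edt(mask)
        y, x = np.unravel_index(np.argmax(dist), dist.shape)
        return (int(x), int(y))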

If you have another way of solving the problem, please let me know in the comments below.

Algorithm 5

Actually, it is similar to number 4. Instead of using a bitmap, I use the vector representation of the perimeter as a polygon. Then I perform negative polygon buffer operations repeatedly [on the larger block] until the polygon area reaches a certain threshold. Then I use the centroid of that remaining polygon as the location for the label. It turns out to be much more efficient than its cousin, Algorithm 4 (provided you have a decent polygon offset implementation).
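A minimal sketch of that approach, assuming shapely's buffer() as the polygon offset implementation (the step and area threshold are arbitrary example values):

    # Sketch of Algorithm 5: shrink the polygon with negative buffers until its
    # area falls below a threshold, then label at the centroid of what is left.
    from shapely.geometry import Polygon

    def label_point(poly: Polygon, step: float = 1.0, area_threshold: float = 25.0):
        shrunk = poly
        while True:
            smaller = shrunk.buffer(-step)
            if smaller.is_empty or smaller.area < area_threshold:
                break
            # a negative buffer can split a concave shape; keep the largest piece
            if smaller.geom_type == "MultiPolygon":
                smaller = max(smaller.geoms, key=lambda g: g.area)
            shrunk = smaller
        return shrunk.centroid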


Tuesday, June 21, 2016

A cheap idea for thermal imaging

Sometimes I need to check how heat is distributed over a surface. A cool but expensive way is to use a thermographic camera. I do not have one at hand.

But an ongoing project uses thermochromic ink, an ink that becomes transparent once a temperature threshold is reached. It goes from a certain color to no color at all. So if you paint a piece of cloth with it and place it on a given surface you can estimate the temperature at each point.

The following pictures show the heating process of a certain aluminium heated bed. My sample cloth was not large enough to cover the whole bed, but you get the idea.

 Heat sources start to show as whiter areas. 

 Now heat spreads a bit more.

 Reaching the temperature threshold at many points

For best results, a piece of glass on top would make sure the cloth makes even contact with the whole surface (the top left corner was not making good contact, which explains the apparently colder temperature there).