tag:blogger.com,1999:blog-15768824344615483222024-03-13T15:18:30.797-07:00Technical blogToplushttp://www.blogger.com/profile/05180762187403568313noreply@blogger.comBlogger17125tag:blogger.com,1999:blog-1576882434461548322.post-56653180665087272122016-02-24T12:33:00.000-08:002016-02-24T12:33:19.368-08:00Accurately measure the speed of (toy) cars in a track <div dir="ltr" style="text-align: left;" trbidi="on">
A couple of months ago my friend Chris told me over a beer about the problems he was having with accurately measuring the speed of a toy car over a track for a STEM project. At that time I was (and still am) looking for excuses to be involved in some technology fun outside of work. So, not knowing much about possible sensors, I theorised with him that an Arduino could do a good job of measuring the speed rather accurately.<br />
<br />
After some short research, I concluded that so-called beam-break sensors (i.e. a pair of sensors formed by an IR LED and an IR photo-transistor as a receiver) would probably be the best way to implement this. I also added an LCD to the mix so that the speed could be displayed straight away. The picture below roughly explains the idea.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://3.bp.blogspot.com/-lT_c6Vq9I_Q/Vst2bg-yEcI/AAAAAAAAOCk/Z-GaXkMHspI/s1600/speed%2BTrap_vintage%2B1.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="265" src="https://3.bp.blogspot.com/-lT_c6Vq9I_Q/Vst2bg-yEcI/AAAAAAAAOCk/Z-GaXkMHspI/s400/speed%2BTrap_vintage%2B1.jpg" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
For the demonstrator I used an <a href="https://www.arduino.cc/en/Main/ArduinoBoardUno">Arduino UNO</a>, a couple of beam-break sensors from <a href="https://www.adafruit.com/products/2168">Adafruit</a>, a cheap LCD board, a couple of breadboards and a few jumper wires. I actually didn't use the battery pack, but you can use 4 AAA rechargeable Ni-MH batteries to provide roughly the 5 volts required for the sensors, so that you don't have to pass so many wires from one side of the track to the other.<br />
<br />
The HW setup is rather simple. First connect the ground and power from the Arduino to the breadboard for a common bus, and connect the 4 sensors there. One sensor of each pair (the receiver) also needs a digital pin so that we can tell whether the beam has been broken or not.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://4.bp.blogspot.com/-RnBFXbNmBQc/Vs4HUxsno5I/AAAAAAAAODE/OuwAU-j03e0/s1600/wholePath.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="300" src="https://4.bp.blogspot.com/-RnBFXbNmBQc/Vs4HUxsno5I/AAAAAAAAODE/OuwAU-j03e0/s400/wholePath.jpg" width="400" /></a></div>
<br />
<br />
The LCD is a bit more complicated. Apart from power and ground, it needs 6 digital pins to work. We just need to make sure that these pins are connected to the right places on the LCD board. The pins on the LCD board that require connection are shown in the figure below, labelled 'D7 to D4' and 'D9 & D8'. These are 6 consecutive pins starting from the 5th pin in the top-left corner (i.e. the 5th pin from the top-left corner is D4, the next one downwards is D5, and so on). The reason there seems to be a gap between pins D7 and D8 is that there is a gap in the physical pins of the LCD board (so it can slot on top of the Arduino pins). The A0 pin is used to read the buttons of the board and should be connected to an analog pin on the Arduino.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-E4FnGLOyaJY/Vs4GpKbsKfI/AAAAAAAAODA/DAM3EINN8ug/s1600/LCD_small.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="https://1.bp.blogspot.com/-E4FnGLOyaJY/Vs4GpKbsKfI/AAAAAAAAODA/DAM3EINN8ug/s400/LCD_small.jpg" width="381" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
In terms of SW, we need to use the <i>LiquidCrystal</i> library to drive the LCD, making sure that we initialise it with the pins actually wired to the Arduino. We also need to set the digital pins of the sensors as INPUT, and turn on the internal pull-up resistors to make sure the default value is HIGH, i.e. beam not broken. After <i>setup()</i>, in the <i>loop</i> section, we need to:<br />
<br />
<ul style="text-align: left;">
<li>Read the status of the data pins of the beam-break sensors, to see if they have been broken or not.</li>
<li>If they were broken, record the time at which it happened (using <i>millis()</i>).</li>
<li>If the breaking of the beams happened in the right order, calculate the speed (for this, we obviously need to know the distance between both pairs of sensors).</li>
<li>Output our calculation to the LCD.</li>
<li>Output our calculation to the serial port (for debugging).</li>
<li>Prepare the variables to run the 'loop' section again.</li>
</ul>
<div>
<br />
I won't put all the code here; I leave it as an exercise for the reader. It is not a difficult job if you have some experience with the Arduino. I do have a full user guide which I might upload at some point. </div>
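To give a flavour of the arithmetic, though, here is the speed calculation on its own, with made-up numbers (the two <i>millis()</i> readings and the gate distance below are hypothetical; on the Arduino the same maths runs on the real timestamps):

```shell
# Hypothetical millis() timestamps (ms) from the two beam-break gates,
# and a hypothetical distance between the gates (mm).
t1=1200
t2=1450
distance_mm=150
# mm/ms is numerically the same as m/s, so speed = distance_mm / (t2 - t1).
speed=$(awk -v d="$distance_mm" -v dt="$((t2 - t1))" 'BEGIN { printf "%.2f", d / dt }')
echo "$speed m/s"   # 150 mm in 250 ms gives 0.60 m/s
```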
<div>
<br /></div>
<div>
Happy Hacking.</div>
</div>
Toplushttp://www.blogger.com/profile/05180762187403568313noreply@blogger.com0tag:blogger.com,1999:blog-1576882434461548322.post-33038227120658692922012-02-10T07:42:00.000-08:002012-02-10T08:08:24.295-08:00BT nonsense with VPNI might be shooting myself in the foot for talking too soon, but I have just spent half an hour talking with BT's customer service about why my BT Infinity connection won't work with VPN. After talking to numerous people who had to talk to their supervisors because they had no idea what I was talking about, I ended up speaking to a sales guy. He said that if I wanted VPN support, I had to change to a business package and end up paying more than double what I'm paying now, which is over £50 per month. That would include a static IP address (which is a nice addition, but honestly, I don't care about it at the moment). The thing is that it used to work before, but they didn't care when I said this. They just said that BT home does not support VPN. End of discussion. <div><br /></div><div>This is all my fault for not doing my research before calling BT, although one would think that they should know better than anybody else about what's possible and what is not. Of course they don't, because after a brief search I found a BT forum in which they (as in customers, not BT) suggested enabling "port clamping" in the BT hub's settings. Sure enough, there is a menu called VPN, where it is suggested to enable this setting if you are having problems with VPN, which I had. Oh magic... now it works with no problem whatsoever... well, so far, because apparently it is unsupported according to their own sales department. 
</div><div><br /></div><div><i>Lesson 1: </i>Always do your research before talking to anybody</div><div><i>Lesson 2:</i> Do not think that customer support knows what they are talking about</div><div><i>Lesson 3:</i> For God's sake, do not believe sales people!</div><div><br /></div><div>I'm tempted to send a letter of complaint telling them that they should have known about the BT Hub option and that it almost seems they were just trying to make more money out of me... but I think I will pass, just in case they decide to cut my VPN and say "We told you it wasn't supported". Shame on you, BT...</div>Toplushttp://www.blogger.com/profile/05180762187403568313noreply@blogger.com4tag:blogger.com,1999:blog-1576882434461548322.post-74625101823405655022012-01-19T05:20:00.000-08:002012-01-19T05:22:49.853-08:00Status report on the Internet of Things<p class="western" style="margin-bottom: 0cm">Almost two years ago I wrote about the Internet of Things and what, in my view, the IoT did not encompass. In that post I argued that because nobody could really tell what the IoT was, it could be a good idea to start by discussing what it wasn't. There were mixed opinions, of course. My intention was to try to focus the discussion on the IoT a bit and try to avoid falling into the buzzword black hole. But two years down the line, how has the IoT discussion evolved? Has the community gained any focus or has it all finally spiralled out of... well, that, focus.</p> <p class="western" style="margin-bottom: 0cm">As you probably sense, my answer is going to be the second one. The last couple of years have seen an increase in conferences, books and journals dedicated to the Internet of Things. This is great, until you realise that most of these are using the IoT as a buzzword just because it is “in fashion”. 
Consequently, the majority of the papers submitted to these conferences and journals are written by researchers who have not changed their research topic, but have just added “Internet of Things” to the titles of their papers to be able to publish in more places. Of course, since the conferences and journals had been set up in the first place just to attract more submissions, there is no real filtering on the suitability of the topics, and anything with the IoT name on it will be deemed within scope. When these papers eventually get published, they send the signal that any topic remotely related to networking and the Internet is valid, and this contamination just goes on and on.</p> <p class="western" style="margin-bottom: 0cm">Basically, from my point of view we have seen little or no improvement in the understanding of what the IoT is, what its implications are and how it will actually work. I haven't found any publication talking about what the architecture of the IoT should be, what its challenges are, how it can be achieved or how far we are from it. Yes, there have been some papers describing enabling technologies and applications, but that doesn't really help without a generic, structured vision (and eventual agreement) on how it all should fit together. At the same time I have not found much criticism of what is happening... it would almost seem that nobody wants to kill the goose that lays the golden eggs. Of course, eventually all the research now labelled as IoT will jump to a new buzzword if it is convenient, stick to the old buzzwords if necessary, or use none at all if that gives it better chances to be published / funded.</p> <p class="western" style="margin-bottom: 0cm">The Future Internet (or the future of the Internet, just to keep avoiding buzzwords) is of course still relevant, and research will inevitably carry on because it is a hugely important ICT area. 
One could argue that, irrespective of whether the community goes somewhere with the IoT or not, research will continue in a very similar, if not identical, way. As argued earlier, the fact that research in this area has only changed the titles of its publications when the IoT discussion became trendy tells us that the popularity of the IoT will come and go without any major impact on research investment. So, if this is the case, will we then get to the same place regardless of how the research is labelled? Well, yes, but the path is also important. Eventually, the concepts that the IoT encompasses (whichever they are) will happen anyway, but a focus on a structured vision of the IoT would allow us to enjoy the results earlier and in better shape. For that reason it is sad to see how the grand vision of the Internet of Things becomes adulterated and diluted over time. Is it perhaps too late for the IoT? I think it is, but I would like to be wrong.</p>Toplushttp://www.blogger.com/profile/05180762187403568313noreply@blogger.com0tag:blogger.com,1999:blog-1576882434461548322.post-86655194118430863612011-08-09T05:18:00.000-07:002011-08-09T05:57:09.905-07:00Things I learnt with ESXiI have been using a VMware server at work to run virtual machines. The server (2.1) is running on one of our Kubuntu boxes. I never liked the Web Access for it, and the whole thing seems buggy and sometimes just doesn't work properly. Anyway, I decided to give a go to ESXi (4.1), which is another free product from VMware. Basically it's a Linux-based distribution which is optimized to run virtual machines. You install it as you would install your regular Linux distro. I did it from a CD that you can <a href="http://www.vmware.com/products/vsphere-hypervisor/overview.html">download</a> from the VMware web pages. You will need to create an account first. <div>
<br /></div><div>Direct control over the ESXi box once installed is very limited. You can change some things like the password and network access, but full control is intended to be achieved by using the VMware vSphere Client. Unfortunately, the client only works on Windows, and furthermore there is no Web Access for the free ESXi. Some things to think about before installing it, I suppose. </div><div>
<br /></div><div>The first thing you might want to do is to create a virtual machine or import one. In my case, I have installed two so far: one imported from the vmware server that I'm already running, and the other one from Bitnami, which provides free virtual appliances (virtual machines) with specific pieces of software already installed. In both cases, the easiest way to deploy your VMs to the new ESXi is using the <a href="http://www.vmware.com/products/converter/">vmware converter</a>, which luckily also runs on Linux. It will allow you not only to convert VM formats, but also to import and deploy VMs from one server to another. It will even handle the case where your VM is running. The converter software can run independently on a different machine from your VM servers.</div><div>
<br /></div><div>Once you have deployed your VMs to ESXi, you surely will want to back them up. Free vmware products do not have fancy backup support, but for some of them, the guys from the vmware community have been nice enough to provide command line scripts. On the Vmware server, I had my backup set up so that I would manually stop the VMs, rsync them with my NAS, and start them again when the syncing finished. For that I used the <i>vmrun</i> utility from the vmware server 2.1 and set up the script as a cron job. Not very intuitive, but after some trying it worked all right. Something like this:</div><div><ul><li><span class="Apple-style-span" >sudo vmrun -T server -h https://127.0.0.1:8333/sdk -u user -p password suspend "[standard] machineName/machineName.vmx"</span></li><li><span class="Apple-style-span" >rsync -a --password-file=/etc/rsyncd.scrt /var/lib/vmware user@IP_of_NAS::volume_name</span></li><li><span class="Apple-style-span" >sudo vmrun -T server -h https://127.0.0.1:8333/sdk -u user -p password start "[standard] machineName/machineName.vmx"</span></li></ul></div><div></div><div>ESXi, unfortunately, is not running a standard Linux distribution, and also does not support the same commands as the vmware server, so I cannot use the same method. After some digging, I found a script called <a href="http://communities.vmware.com/docs/DOC-8760">ghettoVCB</a> which basically does the same thing and more. The trouble is that you need access to the console of the ESXi server, and for that you need to enable what they call Tech Support Mode (TSM), as described <a href="http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1017910">here</a>. 
You can now follow the instructions on the <a href="http://communities.vmware.com/docs/DOC-8760">ghettoVCB</a> web site to configure and back up your VMs. A few highlights:</div><div><ul><li>To back up through NFS, you need to add an NFS datastore to your ESXi server using the vSphere client. Once you do that, make sure that your configuration files point there.</li><li>Once you manually test that your configuration works, don't forget to set up a cron job as explained on the same web page.</li><li>You will need to install vmware tools on your VMs so the script can stop them before starting the backup. If your guest is Ubuntu, make sure you read <a href="https://help.ubuntu.com/community/VMware/Tools">this</a> for doing that. </li></ul></div><div>
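For completeness, the vmware-server routine described earlier can be wrapped into a single script and scheduled from cron. This is only a sketch of that setup; user, password, machineName, IP_of_NAS and volume_name are the same placeholders as above, and the script path is made up:

```shell
#!/bin/sh
# Sketch: suspend the VM, rsync its files to the NAS, then start it again.
# All credentials, machine names and NAS details are placeholders.
VMX="[standard] machineName/machineName.vmx"
VMRUN="vmrun -T server -h https://127.0.0.1:8333/sdk -u user -p password"

$VMRUN suspend "$VMX" || exit 1
rsync -a --password-file=/etc/rsyncd.scrt /var/lib/vmware user@IP_of_NAS::volume_name
$VMRUN start "$VMX"
```

Saved as, say, /usr/local/bin/vm-backup.sh, a crontab line such as '0 3 * * 0 /usr/local/bin/vm-backup.sh' would run it weekly at 3am on Sundays.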
<br /></div>Toplushttp://www.blogger.com/profile/05180762187403568313noreply@blogger.com0tag:blogger.com,1999:blog-1576882434461548322.post-13357330863832678672010-08-30T03:58:00.000-07:002010-08-30T04:35:55.011-07:00Installing Linux without CD, Floppy Disk or USBRecently I got hold of a laptop without a CD or floppy drive, and whose BIOS wouldn't boot from USB (it's a pretty old laptop). I wanted to install Linux on it, so what were my options? Well, so far I have come across two options:<div><br /></div><div><b><i>1. Network Install</i></b></div><div><br /></div><div>If the laptop's BIOS can boot from the internal NIC (network card), you can set up a DHCP and TFTP server and do a netboot. This will load the installer image from your server at the laptop's boot time. There are many guides out there on how to do this, but the one I followed is <a href="http://www.debian-administration.org/articles/478">this one</a>. Substitute your distribution for Etch. I installed Debian Lenny.</div><div>The HOWTO is quite self-explanatory. Perhaps one thing worth mentioning is that I decided to create a private network with a different network switch rather than using my regular wireless home router. I did this because I didn't want to stop the DHCP server on the router while I was doing the installation, because I had other machines using the Internet. If you are also doing this, remember to 1) assign a static IP to your DHCP server so you can configure the server itself properly (I used 192.168.1.1, obviously the same as the TFTP server and also the default gateway) and so your server can start properly (otherwise it will probably fail or complain about something when you try to start it from /etc/init.d) and 2) once your laptop has booted from the network card, plug the network cable into the actual router so the installer can fetch packages via the Internet. </div><div><br /></div><div><b><i>2. 
Set up the image from an existing operating system</i></b></div><div><br /></div><div>If you already have Linux or Windows installed on the laptop, you can set up a boot loader so it will boot the installer image from the hard disk instead of retrieving it from the TFTP server. I have tried this with <a href="http://www.gnu.org/software/grub/">Grub</a>, and when you have Linux installed it is quite straightforward because the boot loader is already installed: you just need to put the installer image on your hard disk and modify the Grub menu so it will show an option to boot from that image. <a href="http://ubuntuforums.org/archive/index.php/t-28948.html">This small tutorial</a> gives you a flavor of how to do it from both Linux and Windows. Just skip the boot.ini part if you are doing it from Linux.</div><div><br /></div><div>Happy installation!</div><div><br /></div>Toplushttp://www.blogger.com/profile/05180762187403568313noreply@blogger.com0tag:blogger.com,1999:blog-1576882434461548322.post-78670585442676259302010-03-22T14:18:00.000-07:002010-03-22T14:42:57.382-07:00What the Internet of Things is NOT<p class="western" style="margin-bottom: 0cm">The “Internet of Things” is a very popular term that many mention but few seem to know exactly what it is about. It is one of those <a href="http://technicaltoplus.blogspot.com/2010/02/buzzwords.html">buzzwords</a> that are gaining momentum and that walk the line, still uncertain if they will reach the other side. As a good buzzword, the IoT is rather abstract, and aside from conceptual definitions, it is very hard to tell exactly what the Internet of Things is. It is because of that that, rather than talking about what the IoT is, I will talk about what the IoT is not. 
With some luck, that will narrow down the scope for a more focused discussion in the future.</p> <div><br /></div> <span class="Apple-style-span" style="font-size:medium;"><span class="Apple-style-span" style="color:#FFCC66;">The IoT is not </span><i><span class="Apple-style-span" style="color:#FFCC66;">ubiquitous/pervasive computing</span></i></span><br /><p class="western" style="margin-bottom: 0cm">As if Weiser hadn't been referenced enough since he predicted the second wave of computing (4925 times <a href="http://scholar.google.co.uk/scholar?hl=en&as_sdt=2000&q=weiser+computer+21st+century">according to Google</a>, and counting), some seem to use the IoT and ubiquitous computing concepts interchangeably. Although the miniaturization of computing devices and the ubiquitous services derived from their data is probably a requirement for the IoT, pervasive computing is NOT the Internet of Things. Ubiquitous computing doesn't imply the use of objects, nor does it require an Internet infrastructure. The miniaturized devices that Weiser envisioned could represent anything, and provide data for anything. And of course, in 1991, there was little Internet to go around, and although it could have formed part of the ubiquitous computing vision, I don't think it could be argued that global network connectivity was ever a requirement for that vision.</p><p class="western" style="margin-bottom: 0cm"><span class="Apple-style-span" style="color:#FFCC66;"><span class="Apple-style-span" style="font-size:medium;"> The IoT is not </span></span><i><span class="Apple-style-span" style="color:#FFCC66;"><span class="Apple-style-span" style="font-size:medium;">the Internet Protocol</span></span></i></p> <p class="western" style="margin-bottom: 0cm">The Internet as we know it can be used globally because clients and servers use the same protocol for communication: the Internet Protocol (IP). 
It therefore appears logical that the Internet of Things must also run the IP (since it is the same Internet, some might say), and that all the new clients to this extended Internet, the “things”, must connect to the same network and therefore run the Internet Protocol as well, right? Wrong. Of course, in a perfect world of limitless power on effortlessly miniaturized wireless devices integrated in everyday things, this would be true. But the reality is that the technologies that have the greatest potential, in terms of size and cost, to empower most of the IoT in the short term cannot run the Internet Protocol, because they just don't have enough <i>juice</i> to do it. Examples of this are RFID or Wireless Sensor Networks. Some will argue that there are new low-power versions of the IP aiming to run on very constrained devices. Acronyms such as 6LoWPAN, ROLL or IPSO will surely be mentioned in those arguments. It is true that the IETF and other standardization bodies are making great efforts to reduce the footprint of IPv6 and related protocols, but they are still IP: a passive RFID tag cannot run the IP, nor can many wireless sensor nodes based on low-end hardware specs, which are precisely the cheapest ones and the most likely to become pervasive. What is more, there are already hundreds of millions of RFID tags and wireless sensor nodes out there, not to mention several billion mobile phones (largely without IP capabilities). 
Is the IoT going to be an elitist group of only IP-capable devices, of which existing old or just cheap devices cannot be part?</p><p class="western" style="margin-bottom: 0cm"><span class="Apple-style-span" style="color:#FFCC66;"><span class="Apple-style-span" style="font-size:medium;">The IoT is not </span></span><i><span class="Apple-style-span" style="color:#FFCC66;"><span class="Apple-style-span" style="font-size:medium;">communication technologies</span></span></i></p> <p class="western" style="margin-bottom: 0cm">I was recently at a <a href="http://technicaltoplus.blogspot.com/2010/03/wireless-communication-for-internet-of.html">workshop</a> where NEC Europe described <a href="http://en.wikipedia.org/wiki/3GPP_Long_Term_Evolution">LTE</a> as an enabler for the Internet of Things. LTE may well be the next generation of cellular networks (with the permission of HSPA+), but I have my reservations that it has anything to do with enabling the IoT. If it is about global connectivity, other, older cellular technologies, although slower, also provide the same (or more) pervasive connectivity. In any case, the same reasons given for the IP apply, since Internet over cellular networks is implemented nowadays via IP stacks on the cellular modems. A similar reasoning can be applied to many other technologies that some insist on equating with the Internet of Things. Technologies such as WiFi, Bluetooth, ZigBee / 802.15.4 and 18000-7 come to mind. It is obvious that if things are going to be wirelessly connected to the Internet, they are going to need wireless communication technologies, the same way the “regular” Internet needs WAN and LAN connection technologies (e.g. Ethernet) to interconnect millions of computers. 
However, we cannot say that those technologies are the Internet, although they certainly might be part of it.</p><p class="western" style="margin-bottom: 0cm"><span class="Apple-style-span" style="color:#FFCC66;"><span class="Apple-style-span" style="font-size:medium;">The IoT is not </span></span><i><span class="Apple-style-span" style="color:#FFCC66;"><span class="Apple-style-span" style="font-size:medium;">embedded devices</span></span></i></p> <p class="western" style="margin-bottom: 0cm">Words such as RFID or wireless sensor networks (WSN) have often been heard when discussing the Internet of Things. Indeed, visionaries at the Auto-ID centre and other people working on RFID circa the year 2000 appear to be responsible for coining the term. They envisioned what is today the <a href="http://www.epcglobalinc.org/standards/architecture/">EPCnetwork</a>, a set of distributed Internet resources that gather, filter, store and discover RFID data. Maybe because the term was never formally defined, because the vision has been extended with new technologies, or maybe just because other disciplines have seen in the Internet of Things an opportunity to attract increasing interest, the IoT has come to mean much more than just networked RFID systems. Furthermore, too many times has RFID been used to describe what the IoT is without painting the back-end information infrastructures into the picture. If there is something the IoT certainly is not, it is a bunch of RFID tags attached to objects and read by random RFID readers. Other technologies that have recently become popular when describing the IoT are sensor systems in general, and WSNs in particular. This equivalence is even more inaccurate, because while RFID systems have at least certain standardized information architectures to which all the Internet community could refer, global WSN infrastructures have never been standardized and many, many times, not even considered. 
Some may say, however, that <a href="http://www.opengeospatial.org/projects/groups/sensorweb">global Internet-based sensor standards</a> exist, to which I would reply: yes, but they were not built with “things” in mind (i.e. they don't have a standardized way of uniquely identifying things!)</p><p class="western" style="margin-bottom: 0cm"><span class="Apple-style-span" style="color:#FFCC66;"><span class="Apple-style-span" style="font-size:medium;">The IoT is not </span></span><i><span class="Apple-style-span" style="color:#FFCC66;"><span class="Apple-style-span" style="font-size:medium;">the applications</span></span></i></p> <p class="western" style="margin-bottom: 0cm">I recently read an <a href="http://thehammersmithgroup.com/images/reports/networked_objects.pdf">article</a> by The Hammersmith Group in which they talk about plants asking to be watered using wireless sensors, wine racks that know which bottles are stored and medicine bottles that issue warnings if the medicines are not taken on time. They titled this article “The Internet of things: Networked objects and smart devices”. What we see here is another common misuse of the Internet of Things, closely related to the pervasive computing issue described above. Think of somebody using Facebook or Google at the beginning of the 90's to describe what the Internet is. But it is worse, because although I'm sure that we agree that Google is not the Internet, at least it is well accepted that it is an Internet-wide service. All these applications that many are describing as the IoT are just small services on an Internet-like scale. 
So, not only is it absurd to use Internet applications and services to describe the Internet itself; it is even more illogical to refer to small applications that would have no real impact at a global Internet scale.</p>Toplushttp://www.blogger.com/profile/05180762187403568313noreply@blogger.com18tag:blogger.com,1999:blog-1576882434461548322.post-61533381083351947402010-03-13T08:29:00.000-08:002010-03-13T09:53:45.472-08:00Wireless Communication for the Internet of ThingsLast Thursday, 11 March, the University of Surrey held an event called <i>"Wireless Communications to Enable the Internet of Things"</i>. This event was organized by the <i><a href="http://www.wisig.org/">Wireless Sensing Interest Group</a> </i>of the <i>Sensors & Instrumentation KTN</i> and the <i>Electronics KTN</i>. The sessions featured the views of industrialists and academics, and reports on existing deployments. I attended this event as a researcher from the University of Cambridge.<div><br /></div><div>The event was organized around four sessions. The first session introduced the three main technologies that are considered the future of cellular communications, namely GPRS, 3G and LTE. The first two were introduced by ST Ericsson, and the talk focused on the key considerations for a communication project. It concluded that 2G and 3G networks are now pervasive and will stay for many years, and that they are best suited for environments which do not require massive amounts of data transfer, such as machine-to-machine (M2M) applications. The second talk was given by NEC, and introduced LTE as the next generation cellular technology that will enable the Internet of Things. LTE was compared against HSPA+, and some of NEC's ICT solutions based on this and other technologies were also presented. </div><div><br /></div><div>The second session focused on issues in communications and location-based services. 
Two companies presented: Libelium, a Spanish wireless sensor network start-up, and HW Communications, a British company dealing with mobile wireless communications. Libelium compared the 2.4 GHz, 868 MHz and 900 MHz bands for chips running ZigBee / IEEE 802.15.4. Some test results were presented and the best configurations for each band were outlined. HW Communications talked about location services, and the pros and cons of technologies such as cellular network positioning, triangulation and GPS. </div><div><br /></div><div>The third session presented two implementations of short-range wireless networks. The first implementation, from the company Zarlink Semiconductor, dealt with health-care implants and how to solve problems related to wireless communications. A solution based on a double-frequency, single-antenna implementation was presented, where a 2.4 GHz link is used to wake up the device and the medical band MICS at 402 MHz is used for data communications. The second implementation was more of an academic introduction to energy storage technologies by Imperial College London.</div><div><br /></div><div>The fourth and final session focused on solutions for large-scale Wireless Sensor Networks. The first talk in this session introduced the implications of embedded devices and the Internet of Things in urban environments. The concept of Web of Things was introduced as hyperlinks given by physical objects, which is somewhat different from what <a href="http://www.webofthings.com/">other initiatives </a>understand by Web of Things. The second and final talk of this session was about security and key management in Wireless Sensor Networks.</div><div><br /></div><div>In my opinion, this event brought much about technologies and applications but little about what the Internet of Things is. It is probably not a good idea to put "Internet of Things" in the title without defining what is understood by it. 
Obviously, the event was oriented towards the belief that the IoT is the same as <i>ubiquitous computing</i> or similar terms, but I think it is quite clear that the two concepts should not be the same, because otherwise there would be no need for a new term. Most of the talks were useful to understand the advances in technologies and how companies and academia are aligning towards the future of wireless communications, but I gained no new knowledge on how wireless communications are going to enable the IoT, which was the very title of the event. This is not a new feeling for me though, since I so often hear people talking about the IoT when they either don't know what the IoT is or what its implications are, or they assume that the IoT is something that everybody knows (when obviously, they don't know or don't agree on the same definition). Therefore this event has reinforced my belief that a globally agreed definition of what the Internet of Things is should be reached, or we will go deeper and deeper into the <a href="http://technicaltoplus.blogspot.com/2010/02/buzzwords.html">buzzword</a> hole that I talked about a while ago.</div>Toplushttp://www.blogger.com/profile/05180762187403568313noreply@blogger.com4tag:blogger.com,1999:blog-1576882434461548322.post-42271585269564100202010-03-04T13:14:00.000-08:002010-11-08T08:43:32.966-08:00Fun with port forwarding and SSHEven if your office computer has a public IP address, if you are part of a big organization that computer will most likely be behind some big firewall, effectively isolating the internal network from external connections. On many occasions, however, the system administrators will not completely disable access, and will enable a gateway machine as the means for bridging connections from outside to the inside of the network. This gateway computer, nevertheless, tends to be quite limited, and most probably only allows secure connections via SSH (secure shell). 
Well, despair not, because being given SSH access means that you can use that connection to piggyback connections to other services on any computer in the network, provided that the servers where those services are located also run an SSH server. This is normally called <i>tunnelling</i> and port <i>forwarding.</i><div><i><br /></i></div><div>For the last couple of years I have lived with the inconvenience of having no access to my office computer. For some reason, I didn't look into this tunnelling business; I thought it was hearsay, or that it would be too difficult to implement. How wrong I was. Admittedly, it has taken a couple of days, although probably I have only scratched the surface of what is possible. But now I am able to access not only my desktop computer remotely (with a display session and all), but also other services inside my work's network that I normally need for my work (e.g. database servers and Subversion repositories). I will now briefly explain what it all boils down to.</div><div><br /></div><div>Let's call the machines involved in this system <i>client</i>, <i>gateway</i> and <i>target</i>. We will use these names as addresses for those machines as well. Obviously, the first thing you need is an SSH client on client, and an SSH client/server on the other two machines. You also need an account on all those machines. First, let's make a key for the SSH connection between client and gateway so that you will not need to enter a password every time:<br /><br /><span class="Apple-style-span" style="font-family:'courier new';">client:~$ ssh-keygen</span><br /><br />Leave all the values to default and use no password (simply press enter). Now, copy the key to gateway:<br /><br /><span class="Apple-style-span" style="font-family:'courier new';">client:~$ ssh-copy-id -i .ssh/id_rsa.pub user_gateway@gateway_address</span><br /><br />Now you can just ssh to gateway without using a password. 
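Mechanically, all that port forwarding boils down to is relaying bytes between a local socket and a remote one; SSH simply does the relaying over its encrypted channel. Here is a toy illustration in Python of just the relaying part (no SSH, no encryption, and the function names are mine — purely to build intuition):

```python
import socket
import threading

def _pipe(src, dst):
    """Copy bytes from src to dst until EOF, then pass the EOF along."""
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass

def serve_forward(listen_port, target_host, target_port):
    """Listen on localhost:listen_port and, for every client that connects,
    open a connection to the target service and relay bytes both ways."""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", listen_port))
    srv.listen(5)
    while True:
        client, _ = srv.accept()
        remote = socket.create_connection((target_host, target_port))
        threading.Thread(target=_pipe, args=(client, remote), daemon=True).start()
        threading.Thread(target=_pipe, args=(remote, client), daemon=True).start()
```

The SSH commands described in this post give you the same effect without running anything custom, and with the client-to-gateway leg encrypted.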
Next, you might want to connect directly to target without having to SSH into gateway and then from gateway to target. For this, you can set up a simple rule in your <i>ssh_config</i> file (normally in /etc/ssh/ssh_config, or per-user in ~/.ssh/config) to forward traffic directly to target through gateway:</div><div><br /></div><span><span><span class="Apple-style-span" style="font-family:'courier new';">Host symbolicName</span></span></span><div><span><span><span class="Apple-style-span" style="font-family:'courier new';">HostName target </span></span></span></div><div><span><span><span class="Apple-style-span" style="font-family:'courier new';"> User user_target </span></span></span></div><div><span><span><span class="Apple-style-span" style="font-family:'courier new';"> ProxyCommand ssh user_gateway@gateway netcat -w 1 %h %p</span></span></span><div><div></div></div><div><br /></div><div>Now you can "ssh symbolicName" and you will directly be prompted for the password of your target machine. But how to forward service ports? Well, with a command like this:</div><div><br /><span class="Apple-style-span" style="font-family:'courier new';">ssh -L port_client:target:port_target user_gateway@gateway </span></div><div><br /></div><div>That command will take any connection made on client to port port_client and forward it to port port_target on target, using gateway as a bridge. So the key to getting basically <i>anything</i> to your target machine is to use localhost as the destination address when requesting the service, using port_client as the port (which obviously doesn't need to be the same port where the service is actually running on target). Confused? Right, it is confusing at the beginning. For example, let's say that you have an SVN repository inside your work network, and when you are at your office computer, you use something like <span class="Apple-style-span" style="font-family:'courier new';">svn+ssh://machine_with_service/path_to_svn_repos</span> as the URL to your service. 
On your computer outside of the work network, you will instead use <span class="Apple-style-span" style="font-family:'courier new';">svn+ssh://localhost/path_to_svn_repos</span>, taking into consideration, of course, that the target machine we have been talking about before is machine_with_service in the example. Another service you can run like this is, for example, the NX client and server, so you can get a graphical session of your office machine at home. Isn't that great?</div><div><br /></div><div>All this can be done in Windows if that happens to be the OS that you are using on your client. For that, you can use <a href="http://www.chiark.greenend.org.uk/~sgtatham/putty/">PuTTY</a>. PuTTY has an option called "tunnelling", and you need to add your client's <i>port</i> and the <i>target:target_port</i> in this option tab for each service that you want to be forwarded. Then you only need to connect to the gateway machine in the usual way and there you go. Of course, the PuTTY window needs to remain open for as long as you need port forwarding. More <a href="http://www.cs.uu.nl/technical/services/ssh/putty/puttyfw.html">here</a> and in many other places on the net. </div><div><br /></div><div>And this is all. Have fun with port forwarding too! </div><div><br /></div><div style="text-align: right;"><span class="Apple-style-span" style="font-size:small;"><i>Note: Many thanks to <a href="http://www.ifm.eng.cam.ac.uk/people/hfm21/">Hugo</a> for teaching me most of this!</i></span></div><div><br /></div><div><br /></div></div>Toplushttp://www.blogger.com/profile/05180762187403568313noreply@blogger.com0tag:blogger.com,1999:blog-1576882434461548322.post-23300336866694906192010-02-15T05:00:00.001-08:002010-02-15T05:00:59.190-08:00Buzzwords<b>Buzzwords</b> are a disease. They take over beautiful concepts and contaminate them with fabricated hype and empty promises. 
They spread their influence, first as innocent curiosity about innovative ideas and then as viral gibberish eager to be ignored and forgotten. Buzzwords eventually kill the concept, flooding the public with excessive and misleading publicity, until nobody wants to know or cares what it all actually meant. And so dreams die, suffocated by the burden of what could, wasn't and was pretended to be.<br /><br />Buzzwords are, however, necessary. They attract attention and make people care. Without attention and care there are no desires to know more. Without those desires there is no investment, and without investment there is no progress. Without progress, concepts eventually die and are forgotten.<br /><br />In the world of information and communication technologies, buzzwords create a delicate balance between the increased interest in a new research area and the road to damnation of unreachable funding. Interestingly, it seems impossible to control the buzzword influence, and with the increase of the buzz, news, conferences, blogs, books and interest groups spring up, eager to get a piece of the hype pie, spinning off new publicity waves that feed back into the system in an apparent infinite loop. Worrying about the <i>buzzword effect</i> seems, therefore, useless. If it shall kill, it will, and should you find yourself in the middle of the storm, there is little to be done other than to try to secure funding while you can. With luck, that will give you some experience, publications and even prestige, and will help you move on to the next research topic in your academic career.<br /><br />Although concern about the buzzed concept's destiny seems futile, one can venture to identify certain attributes that signal a disaster to come. Thus the most powerful but deadly buzzwords are those that are too general to actually mean anything, letting the public imagine the concept behind them. 
Wide abstract ideas may seem beneficial at first because they increase the interest base with hardly related imaginative concepts. However, this fuzziness will eventually dilute the original meaning, propelling a devastating wave of public disappointment, either because what they thought it was, it was not, or because what they thought it was could never be realised, weakened by the adulteration of uncontrolled brainstorming. I would therefore advise caution: don't let yourselves be carried away by the excitement of the public using buzzwords related to your research. Maintain your pure vision of what you have imagined, and argue against those who attempt to throw everything into the same sack. Don't succumb to the temptation of adulterating your ideas aiming to gain traction in the community, for in the long run that will bring counter-productivity and eventual apocalypse.<br /><br />Since I purposely avoided naming specific buzzwords that match my argumentation, I invite the reader to name some. Which are the ICT buzzwords that you have come to love and hate?Toplushttp://www.blogger.com/profile/05180762187403568313noreply@blogger.com0tag:blogger.com,1999:blog-1576882434461548322.post-75911928610115959262009-03-03T03:12:00.000-08:002009-03-03T03:21:46.800-08:00Attending DECOI 2009From February 23 until February 27, another researcher and I attended the 2009 International Workshop on Collective Intelligence and Evolution (DECOI). The main reason to participate in DECOI 2009 was to gain more insight into multi-agent systems for the SAHNE project in which we are both involved. The venue for this year's workshop was the Lorentz Center in Leiden, The Netherlands. Leiden is a beautiful small town around 30 km from Amsterdam. Unfortunately, the workload was such that we never managed to see the town in daylight.<br /><p style="margin-bottom: 0cm;">DECOI is more of a “school” than what is traditionally understood as a workshop. 
The <a href="http://lorentzcenter.nl/lc/web/2009/328/program.php3?wsid=328">program</a> was divided into keynote presentations and work in <a href="http://www.cs.vu.nl/%7Eschut/dbldot/collectivae/decoi/cms/index.php?option=com_content&view=category&id=181&Itemid=261">projects</a>. The project details were given on the first day of the workshop, and each group of 4-6 people had to choose one of these projects to work on for the rest of the week. My <a href="http://www.cs.vu.nl/%7Eschut/dbldot/collectivae/decoi2009/wiki/doku.php?id=participants:group2">group</a> chose the project titled “<a href="http://www.cs.vu.nl/%7Eschut/dbldot/collectivae/decoi/cms/index.php?option=com_content&view=article&id=361:helwigtask&catid=181:decoi2009tasks&Itemid=261">Learning, communication and establishment of norms in pedestrian crowds</a>”. This project was first introduced by Anders Johansson and, for the time being, the slides can be found <a href="http://www.cs.vu.nl/%7Eschut/dbldot/collectivae/decoi2009/wiki/lib/exe/fetch.php?media=johansson_leiden_20090223.ppt">here</a>. In his lecture, Dr. Johansson introduced a basic model of pedestrian behaviour in crowds. The objective of the project was to incorporate “smarter” behaviour in the pedestrians so that they would better negotiate how to avoid getting stuck in groups of pedestrians that are trying to get to the same place at the same time. The project involved first theorising how this intelligent behaviour could be accomplished, and then developing a simulation that showed how smarter pedestrians (agents) could obtain better results. </p> <p style="margin-bottom: 0cm;">The simulations were developed with NetLogo, which is a Java-based multi-agent simulation tool that uses its own language. None of the team members had used this tool before, so we faced not only the challenge of first writing the simplistic model that Dr. 
Johansson presented in order to build on top of it, but also the challenge of doing this with a programming tool that we didn't know. For this reason, we spent 3 days learning and building this basic model, and the last two days writing the new smart features and gathering results. The features that we implemented were:</p> <ul><li><p style="margin-bottom: 0cm;">A grouping factor, that encourages groups of “friends” to get closer to each other in the search for the goal.</p> </li><li><p style="margin-bottom: 0cm;">A learning factor, that tries to remember which direction worked best in the past and uses it in the decision making.</p> </li><li><p style="margin-bottom: 0cm;">A rumour factor, which is spread by pedestrians stuck in a crowd and tries to prevent other pedestrians from heading towards that crowd.</p> </li></ul> <p style="margin-bottom: 0cm;">The simulation takes place in a room with two entries and one exit. The entries and the exit are separated by doors whose position and size can be varied. The objective of every pedestrian in the simulation is to get to the exit door as soon as possible. In order to avoid overlapping with walls and other pedestrians, a set of “forces” in the form of vectors is implemented and summed with the gradient force that attracts pedestrians towards the exit. A naïve approach would generate a crowd that tries to approach the exit through the shortest path, getting stuck at the intermediate door. All the smart features that we implemented helped one way or another to increase the performance of the system. A summary of the strategies and results can be found <a href="http://www.cs.vu.nl/%7Eschut/dbldot/collectivae/decoi2009/wiki/lib/exe/fetch.php?media=participants:group2-decoi09.pdf">here</a>. 
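The force-vector scheme just described can be sketched in a few lines. This is my own toy rendering in Python (the project itself used NetLogo, and the constants here are arbitrary): a unit gradient force pulls each pedestrian towards the exit, while short-range repulsive terms push it away from nearby pedestrians and wall points.

```python
import math

def step(pos, exit_pos, others, dt=0.1):
    """Move one pedestrian for one time step: unit attraction towards the
    exit plus 1/d repulsion from every pedestrian (or wall point) in
    `others` that is closer than 1 unit."""
    fx, fy = exit_pos[0] - pos[0], exit_pos[1] - pos[1]
    norm = math.hypot(fx, fy) or 1.0
    fx, fy = fx / norm, fy / norm          # gradient force towards the exit
    for ox, oy in others:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 0 < d < 1.0:                    # short-range repulsion
            fx += dx / (d * d)
            fy += dy / (d * d)
    return (pos[0] + dt * fx, pos[1] + dt * fy)
```

With no one nearby, the pedestrian drifts straight towards the exit; someone standing directly in the way within one unit produces a net backwards push, which is exactly the kind of local deadlock the grouping, learning and rumour factors were added to break.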
A Java applet accessible <a href="http://www.srcf.ucam.org/%7Etsl26/material/decoi2009/pedestrians_final.html">here</a> allows you to play the simulation directly from a web browser.<br /></p><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://4.bp.blogspot.com/_vlxPlXnIPO0/Sa0SdQ_D_PI/AAAAAAAAEjk/_D2D2VJ5MGc/s1600-h/decoi.jpeg"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 383px; height: 400px;" src="http://4.bp.blogspot.com/_vlxPlXnIPO0/Sa0SdQ_D_PI/AAAAAAAAEjk/_D2D2VJ5MGc/s400/decoi.jpeg" alt="" id="BLOGGER_PHOTO_ID_5308919829724658930" border="0" /></a>Toplushttp://www.blogger.com/profile/05180762187403568313noreply@blogger.com0tag:blogger.com,1999:blog-1576882434461548322.post-4601949024407560842009-01-20T08:34:00.000-08:002009-01-20T09:24:33.726-08:00Aircraft management with multi-agent systems<p style="margin-bottom: 0cm;">Here at <a href="http://www.ifm.eng.cam.ac.uk/automation/">DIAL</a> we have a project sponsored by <a href="http://www.boeing.com/">The Boeing Company</a> to develop a multi-agent system for emergent behavior in aircraft configuration control. In this project, codenamed SAHNE (Self-serving assets for highly networked environments), agents are the virtual counterparts of aircraft spare parts, and are autonomous in terms of decision making regarding their life-cycle. In SAHNE, we put special emphasis on the process of part replacement, and develop a fully autonomous system in which the agents detect when the parts need to be replaced (e.g. expiration dates on life vests, wear-out of mechanical components, etc.) and initiate and coordinate the parts procurement. This latter process involves finding suppliers, asking for quotes on the replacements and deciding which supplier can best supply the needed part for a specific aircraft. </p> <p style="margin-bottom: 0cm;">My main role in this project is to assess the integration of sensor data into the agent decisions. 
Until now, the development of the platform has focused on creating a certain degree of functionality based on static data and software agents. For the second year of the project, we will use actual spare parts and capture technologies to feed real-time data into the software agents. The project is scheduled to run until 2010.</p>Toplushttp://www.blogger.com/profile/05180762187403568313noreply@blogger.com0tag:blogger.com,1999:blog-1576882434461548322.post-63387168088179106162007-11-20T10:50:00.000-08:002007-11-29T02:22:19.258-08:00RFID and CaterpillarI recently participated in the development of a demo scenario for a project proposal with <a href="http://www.cat.com/">Caterpillar</a>. The idea of the project is to embed RFID tags in CAT engine parts to ease the maintenance of the machines. When the engine of the vehicle is turned on, the IDs of all the installed engine parts would be collected by means of RFID readers inside the engine casing. The installation time of the parts and their number of hours of operation would be stored in an engine master-tag (EPCglobal Gen2 tag with extended user memory bank). The part life information at vehicle start-up would<br />be compared with the information stored in the master tag. When a part is replaced, a discrepancy would be detected and the operator would be warned. A history of replaced parts would also be stored on the master tag. By means of this mechanism, we expect vehicle maintainers to be able to know when parts have been replaced and how long a certain part has been in operation. This mechanism is aimed at replacing the current manual practices that are prone to human error and inaccurate information. Furthermore, even if the engine block is replaced, we still carry the information on the master tag, wherever that engine goes. That can help end-of-life part management. 
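At its core, the start-up check described above is a set comparison between the part IDs read by the RFID readers and the part list recorded on the master tag. A minimal sketch in Python (the record layout and names are hypothetical; the real implementation worked against Gen2 tag memory banks):

```python
def check_engine(read_ids, master):
    """Compare part IDs read at engine start-up against the master-tag
    record; log installed/removed parts and update the stored part list."""
    known = set(master["parts"])
    seen = set(read_ids)
    installed = seen - known          # new parts not on the master tag
    removed = known - seen            # recorded parts no longer answering
    for pid in sorted(installed):
        master["history"].append(("installed", pid))
    for pid in sorted(removed):
        master["history"].append(("removed", pid))
    master["parts"] = sorted(seen)    # master tag now reflects reality
    return installed, removed
```

Keeping the history entries compact (part IDs only, no free-form text) reflects the real constraint of a tag whose user memory leaves little room to play with.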
For example, we can know exactly which parts can be reused and which ones should be disposed of.<br />To demonstrate these ideas, I helped to develop a piece of software that simulates the engine start and then reads the part and master tags. It then compares the information and warns the user about new parts installed, removed or replaced. It also updates the engine master tag. We then had a one-day field trip to a CAT research and development site near Cambridge (UK). There we installed tags and readers and tested the software on a real CAT tractor.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://bp2.blogger.com/_vlxPlXnIPO0/R0MzYILBrsI/AAAAAAAAANE/kohOkuA_gfc/s1600-h/cat_visit.jpg"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="http://bp2.blogger.com/_vlxPlXnIPO0/R0MzYILBrsI/AAAAAAAAANE/kohOkuA_gfc/s320/cat_visit.jpg" alt="" id="BLOGGER_PHOTO_ID_5135004489736105666" border="0" /></a><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://bp2.blogger.com/_vlxPlXnIPO0/R0My7ILBrrI/AAAAAAAAAMk/W50yyOGWC00/s1600-h/vlcsnap-10471239.png"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="http://bp2.blogger.com/_vlxPlXnIPO0/R0My7ILBrrI/AAAAAAAAAMk/W50yyOGWC00/s320/vlcsnap-10471239.png" alt="" id="BLOGGER_PHOTO_ID_5135003991519899314" border="0" /></a><br /><br />I must say that the trickiest part of all was programming the master tag update, because the tags we were using didn't have much memory to play around with.Toplushttp://www.blogger.com/profile/05180762187403568313noreply@blogger.com0tag:blogger.com,1999:blog-1576882434461548322.post-5102841626563120622007-09-04T04:47:00.000-07:002007-09-04T05:42:40.951-07:00Eclipse plug-in for WSNIn spring last year (2006) I was given the job of making an Eclipse-based IDE to program, download and debug Wireless Sensor Network nodes. 
At that time, I didn't know very well what Eclipse was. Well, to be fair, I had used it several times before, and I was starting to use it in my research for other things (a middleware monitor, which I will talk about on some other occasion). Anyway, I certainly had no idea about the Eclipse plug-in system. You see, the whole Eclipse IDE is based on plug-ins that use and create extension points. With these extension points, you can do things such as adding an icon to the tool bar, creating a new code editor or just adding a new view with pretty much whatever you like.<br />Back then, other people had already done <a href="ftp://ftp.cordis.europa.eu/pub/ist/docs/dir_c/ems/nanoemb_softplatf.pdf">similar things</a>. However, since we were (and are) building our own platform, <a href="http://resl.icu.ac.kr/%7Ekimd/">prof. Kim</a> felt that we had to have our own programming software to complement the kit (and, anyway, it is not as if the guys at <a href="http://www.etri.re.kr/">ETRI</a> were going to lend us the Java code). The most interesting thing of all is that our laboratory did participate in 2004 in building this <a href="http://www.blogger.com/ieeexplore.ieee.org/iel5/10826/34120/01625587.pdf">other</a> platform. But anyway, that's another story. Nevertheless, it seems that prof. Kim also felt that I was the best option for coding the new IDE and making it look "good".<br />Summer passed and I really didn't do much. I was into my research, and although I read some documentation to understand how the plug-in system works, I didn't really code anything. Eventually, pressure from an exhibition and other things (such as prof. 
Kim :) made me take it seriously, and in less than one month I built a prototype that could do just that (well, more or less): program in C (<a href="http://www.eclipse.org/cdt/">CDT</a> plug-in) and compile (<a href="http://winavr.sourceforge.net/">winavr</a>), download to the target board (<a href="http://avarice.sourceforge.net/">avarice</a>) and debug using a JTAG-ICE (also avarice and <a href="http://sourceware.org/gdb/">gdb</a>). It looked (and still pretty much looks) like this:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://bp0.blogger.com/_vlxPlXnIPO0/Rt1Ot5SWglI/AAAAAAAAACc/yR8YrKFqReg/s1600-h/ide_final.png"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="http://bp0.blogger.com/_vlxPlXnIPO0/Rt1Ot5SWglI/AAAAAAAAACc/yR8YrKFqReg/s320/ide_final.png" alt="" id="BLOGGER_PHOTO_ID_5106324102886883922" border="0" /></a><br /><br />It was quite tricky to get some of the functionality, and even now, the IDE is basically a prototype which just works with one kind of micro-controller and only downloads with a JTAG-ICE. I think I should mention that I was not free to do whatever I liked with the OS source (the <a href="http://resl.icu.ac.kr/%7Ewebpublic/ants/index.html">WSN operating system</a> that we are also developing), and whatever I did to the source tree had to be compatible with the "old" way of coding (namely, make a source file and use the command line to compile/upload the image to the sensor board). After the initial version, I tried to make some improvements. The first concern was that the current module selection system was based on a specific version of the OS. This is bad, because new boards and new chips bring the need for new drivers and the like. Also, improvements to old modules sometimes need to be offered as a choice (e.g. for testing). 
So my solution was to build an XML file for each main board and sensor board that we developed, specifying the name of the module and where the drivers in the source tree are located. Then, in the "new project" wizard, a user could first choose the main board, which would open the choice of associated sensor boards and so on. Unfortunately, although I began coding this, I had other more urgent things to do and in the end I never completed it. Now it seems that a new post-doc will take care of my "creation". I hope he can understand my coding :)Toplushttp://www.blogger.com/profile/05180762187403568313noreply@blogger.com0tag:blogger.com,1999:blog-1576882434461548322.post-44229360713664085102007-06-19T19:08:00.000-07:002007-06-19T21:13:00.526-07:00Summer of ProjectsUnlike previous summers, when we were encouraged to produce academic results, this summer we are encouraged to get some work done: the <a href="http://www.res.icu.ac.kr">RESL</a> is undertaking around 10 different projects to be distributed among our lab members and <a href="http://www.snre.co.kr">SNR</a> workers. The aims of the projects are quite varied, although all of them are, of course, related to Wireless Sensor Networks, which is what we do in the laboratory. Unfortunately there is a certain secrecy about the development of those projects, so what I can tell here about them is rather limited. But I'll try to give a general description of a few of them that at least gives an idea of what they are about.<br />Two of our laboratory <a href="http://resl.icu.ac.kr/people.htm">members</a> have a military background and entered RESL to help us with the increasing interest of the Korean military in WSN technology. We have already had <a href="http://resl.icu.ac.kr/%7Ewebpublic/military/index.html">previous</a> projects and research involving military applications, such as detecting moving troops, sending unmanned vehicles into enemy territory, etc. 
One of the current projects, codenamed <span style="font-style: italic;">u-Army</span>, involves similar work with sensor networks, such as border intrusion detection and others.<br />If there is one thing Korea has as a nation, it is a thirst to invest in new technologies to show the world they are at the crest of the IT wave. For this purpose, they don't just pour money into research institutions to build their projects in the labs; generally they require pilot deployments to prove that the things work as promised. For this reason, most of our projects require such deployments of WSNs in the real world. For example, the <span style="font-style: italic;">School Zone</span> project aims to distribute sensor nodes around school areas to prevent car speeding and illegal parking. In the <span style="font-style: italic;">Bulkuk Temple</span> project, we must deploy a ring of wireless sensor nodes around an ancient Korean temple to detect fires and prevent the wooden building from burning down. In another project, we will install sensors on a couple of islands to measure tide levels and river flooding.<br />Other projects aim to test the capabilities of the new technologies rather than providing a specific deployment scenario. For example, although sensor nodes have been used before for localization, they use techniques like ultrasound that are only usable in the lab but cannot realistically be implemented in the real world. In this regard, RESL is also investigating localization techniques using only certain aspects of the RF signal, measurement values that are not affected by line-of-sight restrictions and the like. Finally, on the RFID side of our research, this summer we will implement a prototype of the <a href="http://csdl2.computer.org/persagen/DLAbsToc.jsp?resourcePath=/dl/proceedings/&toc=comp/proceedings/percomw/2007/2788/00/2788toc.xml&DOI=10.1109/PERCOMW.2007.113">EPC Sensor Network</a> for merging the RFID EPC Network Infrastructure with sensor data. 
Toplushttp://www.blogger.com/profile/05180762187403568313noreply@blogger.com0tag:blogger.com,1999:blog-1576882434461548322.post-13465879870497089342007-06-10T01:12:00.000-07:002007-06-10T01:33:27.227-07:00Be careful with what you installI just passed through one of those Linux episodes of "be careful what you install", caused as always by deficient dependency management. Well, to be fair, I can't blame my distribution for breaking applications, because the broken apps were 3rd-party installations (namely Mozilla products). And all because I wanted to <span style="font-style: italic;">finally</span> install hangul (Korean language characters) support in my <a href="http://www.kubuntu.org/">Kubuntu</a> box.<br />Lesson learned: Don't install the <span style="font-style: italic;">scim</span> and <span style="font-style: italic;">uim</span> packages if you are going to use Firefox, Thunderbird and Adobe Acrobat Reader. <a href="https://bugs.launchpad.net/ubuntu/+source/scim/+bug/2246/+viewstatus">Apparently</a> there is a conflict with binaries compiled with different branches of the glibc library. There are some <a href="http://www.scim-im.org/wiki/faq/gtk_gnome/why_firefox_mozilla_acrobat_reader_7_other_gtk_2_based_apps_can_not_be_installed_started">workarounds,</a> but since I'm not sure about all the programs that are using that library, I'd rather uninstall the whole thing than have some problem in a couple of weeks once I forget the issue. I might try those solutions some other time. 
For now, I have wasted enough time finding out what the problem was (for which, by the way, it was rather handy to have Opera installed when I couldn't run Firefox).Toplushttp://www.blogger.com/profile/05180762187403568313noreply@blogger.com0tag:blogger.com,1999:blog-1576882434461548322.post-42136716560513299122007-06-03T03:05:00.000-07:002007-06-03T04:20:12.713-07:00My affair with IT drawingI'm not a designer. I never studied art, although I liked to draw things when I was a kid, like many other kids I suppose. Nevertheless, I guess it is fair to say that I like to make things look good. Probably that's what people notice, and when there is something to do that should look good, they sometimes ask me to do it for them.<br />As I was saying, I'm not a professional or anything. But, of course, there are no professional designers in a lab of CS and EE graduate students. It seems I made things look good enough to be labeled the "official" lab picture-maker. So throughout my 3+ years in the <a href="http://resl.icu.ac.kr/">RESL</a> laboratory, I was asked many times to draw pictures about the IT field we are working on: Wireless Sensor Networks. Of course, pictures must be done with the computer, so a lot of the time I basically combined smaller existing pictures to create the concept I was asked for. I started just drawing them in MS's PowerPoint, and then moved to MS's Visio. Microsoft products are good because they have a large database of vector images that can be searched within the program and modified to fit your requirements. However, Visio files, like all MS Office files, have a proprietary format. Moreover, they don't export properly to SVG (Scalable Vector Graphics). When I moved to Linux, I refused to switch to Windows every time to use Visio, so I tried to use other programs that could provide me similar results. 
Software such as <a href="http://www.gnome.org/projects/dia/">Dia</a> and <a href="http://www.koffice.org/kivio/">Kivio</a> is good for drawing vector-based flowcharts and block diagrams, but it is not what I need to <span style="font-style: italic;">make things look good</span>. The answer is a general-purpose SVG drawing program (such as <a href="http://www.inkscape.org/">Inkscape</a>) combined with online SVG databases, such as the <a href="http://openclipart.org/">Open Clipart Project</a> (apparently currently moving to a new server). SVG images can look <a href="http://www.deviantart.com/deviation/49439769/">really good</a>, but they are more cumbersome to produce and normally take longer. However, SVG is a true open standard that I feel good about using, unlike MS proprietary formats.<br /><br />Many of my early pictures have been lost. Nevertheless, <a href="http://www.tomas-sanchez.com/technicalBlog/post2">here</a> is a small sample.<br />By the way, sometimes I do logos too...<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://bp2.blogger.com/_vlxPlXnIPO0/RmKjtAdO8eI/AAAAAAAAABs/7vP_XLJN98c/s1600-h/logo1_frame.jpg"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="http://bp2.blogger.com/_vlxPlXnIPO0/RmKjtAdO8eI/AAAAAAAAABs/7vP_XLJN98c/s320/logo1_frame.jpg" alt="" id="BLOGGER_PHOTO_ID_5071796123984654818" border="0" /></a><br /><br /><b>The WISSE simulator</b> (2007-05-21)<br />When I started to think about how I was going to simulate <a href="http://www.google.com/url?sa=t&ct=res&amp;cd=2&url=http%3A%2F%2Fieeexplore.ieee.org%2Fiel5%2F10746%2F33870%2F01613564.pdf&ei=goFRRqXZPKG2sAKwn7XMDQ&usg=AFrqEzepbS76wkBlyfnwibTPk0jWO2wUzA&sig2=SJ3CAvCZ2CDMNsqOEChhrg">WISSE</a>, I instantly thought about NS2.
It seems like the compulsory choice nowadays for anybody simulating wireless (and wired) networks. Of course there are other simulators out there, but they don't seem to offer much flexibility and, since few people use them, there aren't enough extensions for the protocols I need, such as IEEE 802.15.4. But how was I going to integrate the Gen2 protocols and the sensor node protocols, and then write my application on top of that?<br /><br />Full of doubts, I probably took the reverse of the path I should have, and started with the implementation. I had the <a href="http://resl.icu.ac.kr/~toplus/node.png">sensor nodes</a> <a href="http://resl.icu.ac.kr/">there</a>, in the lab, and at that time we had the operating system ready (well, sort of), so in theory I just had to take my algorithms on paper and write them in C using the EOS libraries. However, this path proved harder than expected, and not because the algorithms were all wrong, but because of my inexperience with the OS libraries and the immaturity of the whole system (and of myself) at the time. If I recall correctly, I spent most of that Christmas trying to make the darn thing work, but it never really worked properly. Nevertheless, I'm sure the Eclipse plug-in I developed that fall saved me many headaches in debugging and downloading code to the node. But that is another story.<br />At some point I realized it was impossible to learn anything from the lines the nodes were printing to the console through their serial ports. First, it was already difficult to identify which pieces of key information the program should print to show whether the algorithm was working properly. But the most important limitation was that the WISSE protocol was distributed, and analyzing individual nodes' output in terminal windows gave little information about what was actually going on in the network.
Because the behavior of the algorithm changed according to when a node is turned on (and hence broadcasts its beacons), it was very difficult to predict what was supposed to happen and, even worse, almost impossible to find out why one thing or another was happening.<br /><br />After some time thinking about it, I decided to code a monitoring tool. I would attach a node to the monitoring machine with its radio chip set to promiscuous mode, so that it would listen to everything going on in the network and simply print it out through the serial port. Fortunately, hacking Chipcon's CC2420 was easy (it is well documented). However, interpreting line after line in a terminal, even if it now covered everything happening in the network rather than a single isolated node, was again quite confusing. Soon I decided to code some sort of GUI that would paint each node and the packets they exchange (something like NAM in NS2). I was using Java because, well, it is the high-level language I know best and I could use it on my Linux and Windows boxes without re-coding. Unfortunately, I'm a person who likes things that look good (even if just for my own enjoyment), and not only was I not satisfied with plain circles for the nodes, but I got stubborn about drawing fancy animated messages traveling from one node to another. This, again, took a lot of time because I was unfamiliar with animation techniques in Java (and, even now, it doesn't work as well as it should).<br /><br />Finally, I more or less finished the monitor and tried to debug the algorithms on the nodes to see if they could work. Unfortunately, after all that effort I was not very lucky. The protocols wouldn't work properly and, to make things worse, I had a lot of problems with the libraries that implemented the timers: there were errors in the code that nobody knew about, because nobody had really tried to use them before.
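The monitor's input stage, turning each serial line from the sniffer node into a packet event the GUI can animate, can be sketched roughly as follows. Note this is my own reconstruction for illustration: the <code>PacketEvent</code> class and the <code>SRC=.. DST=.. TYPE=..</code> line format are hypothetical, not the real sniffer output, which depended on the EOS packet layout.<br />

```java
// Sketch of the monitor's input stage: each line printed by the
// promiscuous sniffer node is parsed into a PacketEvent that a GUI
// could animate. The "SRC=.. DST=.. TYPE=.." line format is
// hypothetical; the real format depended on the EOS packet layout.
class PacketEvent {
    public final int src;      // sending node id
    public final int dst;      // destination id (255 = broadcast)
    public final String type;  // e.g. "BEACON", "JOIN", "DATA"

    public PacketEvent(int src, int dst, String type) {
        this.src = src;
        this.dst = dst;
        this.type = type;
    }

    // Parse one serial line, e.g. "SRC=6 DST=255 TYPE=BEACON".
    public static PacketEvent parse(String line) {
        int src = -1, dst = -1;
        String type = "?";
        for (String field : line.trim().split("\\s+")) {
            String[] kv = field.split("=", 2);
            if (kv.length != 2) continue;
            switch (kv[0]) {
                case "SRC":  src = Integer.parseInt(kv[1]); break;
                case "DST":  dst = Integer.parseInt(kv[1]); break;
                case "TYPE": type = kv[1]; break;
            }
        }
        return new PacketEvent(src, dst, type);
    }

    public boolean isBroadcast() { return dst == 255; }
}
```

Keeping the parser separate from the drawing code like this means the same event objects can feed both the terminal log and the animated view.<br />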
After several weeks of struggle, I thought that, well, if the algorithm itself was flawed somewhere, why should I suffer implementing it on a limited, black-boxed sensor node instead of first trying it in a higher-level language on a regular computer? What's more, why should I do that in a foreign environment like NS2, when what I really needed was to see (and prove) that the algorithms worked as designed, rather than to measure their performance? So, obvious in hindsight, I chose to reuse the GUI I had created for the monitor and code a threaded program that would create nodes, control their life cycle, show their interactions in real time and collect statistics on what was happening. Finally I wrote the algorithm, found some flaws and made the whole thing work. And, of course, I ultimately proved my point: a double clustered architecture with dynamic representative election is more power-efficient than other types of architecture.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://bp2.blogger.com/_vlxPlXnIPO0/RlGSRQdO8bI/AAAAAAAAABU/muHaIs9no0o/s1600-h/simulator2.jpg"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="http://bp2.blogger.com/_vlxPlXnIPO0/RlGSRQdO8bI/AAAAAAAAABU/muHaIs9no0o/s320/simulator2.jpg" alt="" id="BLOGGER_PHOTO_ID_5066991880941531570" border="0" /></a><br /><br />Above, entity 6 sends a broadcast advertising the network. Well, it doesn't look all that sleek, but I like it better than NAM (NOTE: the buttons were taken from Linux icon repositories)...
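The core of that simulator, threaded nodes exchanging messages through a shared medium while the framework counts what happens, can be sketched like this. All class and method names here are hypothetical (this is not the real WISSE code), and the real simulator also drove the GUI and a richer node life cycle.<br />

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal sketch of the simulator core: each node runs in its own
// thread and periodically broadcasts a beacon through a shared
// medium, which delivers it to every other node and keeps simple
// statistics. Names are hypothetical, not the real WISSE code.
class Medium {
    private final List<Node> nodes = new CopyOnWriteArrayList<>();
    final AtomicInteger deliveries = new AtomicInteger();

    void register(Node n) { nodes.add(n); }

    // Deliver msg to every node except the sender, counting deliveries.
    void broadcast(Node sender, String msg) {
        for (Node n : nodes) {
            if (n != sender) {
                n.receive(sender.id, msg);
                deliveries.incrementAndGet();
            }
        }
    }
}

class Node implements Runnable {
    final int id;
    private final Medium medium;
    volatile int beaconsHeard = 0;  // per-node statistic

    Node(int id, Medium medium) {
        this.id = id;
        this.medium = medium;
        medium.register(this);
    }

    void receive(int fromId, String msg) { beaconsHeard++; }

    @Override public void run() {
        // Life cycle of this toy node: broadcast a few beacons, then die.
        for (int i = 0; i < 3; i++) {
            medium.broadcast(this, "BEACON");
            try { Thread.sleep(10); } catch (InterruptedException e) { return; }
        }
    }
}
```

To run it, create a <code>Medium</code>, construct a handful of <code>Node</code> objects, start a <code>Thread</code> per node, join them, and read the counters; the GUI and statistics collector would observe the same <code>broadcast</code> calls.<br />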