Monday, August 30, 2010

Installing Linux without CD, Floppy Disk or USB

Recently I got hold of a laptop without a CD or floppy drive, and whose BIOS wouldn't boot from USB (it is a pretty old laptop). I wanted to install Linux on it, so what were my options? So far I have come across two:

1. Network Install

If the laptop's BIOS can boot from the internal NIC (network card), you can set up a DHCP and TFTP server and do a netboot. This will load the installer's image from your server at the laptop's boot time. There are many guides out there on how to do this, but the one I followed is this one. Substitute Etch with your own distribution's release; I installed Debian Lenny.
The HOWTO is quite self-explanatory. One thing worth mentioning is that I decided to create a private network with a separate network switch rather than using my regular wireless home router. I did this because I didn't want to stop the router's DHCP server while I was doing the installation, since other machines were using the Internet through it. If you do the same, remember to 1) assign a static IP to your DHCP server, so you can configure the server itself properly (I used, obviously, the same machine as the TFTP server and also as the default gateway) and so it can start properly (otherwise it will probably fail or complain about something when you try to start it from /etc/init.d), and 2) once your laptop has booted from the network card, plug its network cable into the actual router so it can install via the Internet.
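As a sketch of the server side: instead of separate DHCP and TFTP daemons, both services can be provided by a single dnsmasq instance. The option names below are real dnsmasq options, but the interface name, address range and paths are hypothetical and need adapting to your own private network:

```
# /etc/dnsmasq.conf -- minimal netboot sketch (addresses are examples)
interface=eth0                        # NIC facing the private switch
dhcp-range=,,12h # hand out leases on the private net
dhcp-boot=pxelinux.0                  # boot file the laptop's PXE ROM requests
enable-tftp                           # serve that file over TFTP as well
tftp-root=/srv/tftp                   # where the Debian netboot files are unpacked
```

With the Debian netboot archive unpacked under /srv/tftp, the laptop's PXE ROM should obtain a lease, fetch pxelinux.0 and boot into the installer.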

2. Set up the image from an existing operating system

If you already have Linux or Windows installed on the laptop, you can set up a boot loader so it will boot the installer image from the hard disk instead of retrieving it from the TFTP server. I have tried this with Grub, and when you have Linux installed it is quite straightforward because the boot loader is already there: you just need to put the installer image on your hard disk and modify the Grub menu so it shows an option to boot from that image. This small tutorial gives you a flavour of how to do it from both Linux and Windows. Just skip the boot.ini part if you are doing it from Linux.
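For flavour, a minimal GRUB legacy menu entry could look like the sketch below. It assumes the installer's vmlinuz and initrd.gz (the hd-media images, in Debian's case) have been copied to /boot on the first partition of the first disk; adjust the paths and partition to your own layout:

```
# Appended to /boot/grub/menu.lst (GRUB legacy)
title  Debian Installer (boot from hard disk)
root   (hd0,0)              # first partition of the first disk
kernel /boot/vmlinuz        # installer kernel copied to /boot
initrd /boot/initrd.gz      # installer initrd copied to /boot
```

On the next reboot, this entry appears in the Grub menu and drops you straight into the installer.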

Happy installation!

Monday, March 22, 2010

What the Internet of Things is NOT

The “Internet of Things” is a very popular term that many mention but few seem to know exactly what it is about. It is one of those buzzwords that are gaining momentum and that walk the line, still uncertain whether they will reach the other side. As a good buzzword, the IoT is rather abstract, and aside from conceptual definitions, it is very hard to tell exactly what the Internet of Things is. It is because of that that, rather than talking about what the IoT is, I will talk about what the IoT is not. With some luck, that will narrow down the scope for a more focused discussion in the future.

The IoT is not ubiquitous/pervasive computing

As if Weiser hadn't been referenced enough since he predicted the second wave of computing (4925 times according to Google, and counting), some seem to use the IoT and the ubiquitous computing concepts interchangeably. Although the miniaturization of computing devices and the ubiquitous services derived from their data are probably a requirement for the IoT, pervasive computing is NOT the Internet of Things. Ubiquitous computing doesn't imply the use of objects, nor does it require an Internet infrastructure. The miniaturized devices that Weiser envisioned could represent anything, and provide data for anything. And of course, in 1991 there was little Internet to go around, and although it could have formed part of the ubiquitous computing vision, I don't think it can be argued that global network connectivity was ever a requirement for that vision.

The IoT is not the Internet Protocol

The Internet as we know it can be used globally because clients and servers use the same protocol for communication: the Internet Protocol (IP). It therefore appears logical that the Internet of Things must also run IP (since it is the same Internet, some might say), and that all the new clients to this extended Internet, the “things”, must connect to the same network and therefore run the Internet Protocol as well, right? Wrong. Of course, in a perfect world of limitless power on effortlessly miniaturized wireless devices integrated into everyday things, this would be true. But the reality is that the technologies with the greatest potential, in terms of size and cost, to empower most of the IoT in the short term cannot run the Internet Protocol, because they just don't have enough juice to do it. Examples of this are RFID and Wireless Sensor Networks. Some will argue that there are new low-power versions of IP aiming to run on very constrained devices. Acronyms such as 6LoWPAN, ROLL or IPSO will surely be mentioned in those arguments. It is true that the IETF and other standardization bodies are making great efforts to reduce the footprint of IPv6 and related protocols, but they are still IP: a passive RFID tag cannot run IP, nor can many wireless sensor nodes based on low-end hardware specs, which are precisely the cheapest ones and the most likely to become pervasive. What is more, there are already hundreds of millions of RFID tags and wireless sensor nodes out there, not to mention several billion mobile phones (largely without IP capabilities). Is the IoT going to be an elitist group of only IP-capable devices, of which existing old or just cheap devices cannot be part?

The IoT is not communication technologies

I was recently at a workshop where NEC Europe described LTE as an enabler for the Internet of Things. LTE may well be the next generation of cellular networks (with the permission of HSPA+), but I have my reservations that it has anything to do with enabling the IoT. If it is about global connectivity, other, older cellular technologies, although slower, provide the same (or more) pervasive connectivity. In any case, the same reasons given for IP apply, since Internet over cellular networks is implemented nowadays via IP stacks on the cellular modems. A similar reasoning can be applied to many other technologies that some insist on equating with the Internet of Things. Technologies such as WiFi, Bluetooth, ZigBee / 802.15.4 or 18000-7 come to mind. It is obvious that if things are going to be wirelessly connected to the Internet, they are going to need wireless communication technologies, the same way the “regular” Internet needs WAN and LAN connection technologies (e.g. Ethernet) to interconnect millions of computers. However, we cannot say that those technologies are the Internet, although they certainly might be part of it.

The IoT is not embedded devices

Words such as RFID or wireless sensor networks (WSN) have often been heard when discussing the Internet of Things. Indeed, visionaries at the Auto-ID centre and other people working on RFID circa the year 2000 appear to be responsible for coining the term. They envisioned what is today the EPC Network, a set of distributed Internet resources that gather, filter, store and discover RFID data. Maybe because the term was never formally defined, because the vision has been extended with new technologies, or maybe just because other disciplines have seen in the Internet of Things an opportunity to attract increasing interest, the IoT has come to mean much more than just networked RFID systems. Furthermore, too many times has RFID been used to describe what the IoT is without painting the back-end information infrastructures into the picture. If there is something the IoT certainly is not, it is a bunch of RFID tags attached to objects and read by random RFID readers. Another technology that has recently become popular when describing the IoT is sensor systems in general, and WSN in particular. This equivalence is even more inaccurate because, while RFID systems have at least certain standardized information architectures to which the whole Internet community can refer, global WSN infrastructures have never been standardized and many, many times not even considered. Some may say, however, that global Internet-based sensor standards exist, to which I would reply: yes, but they were not built with “things” in mind (i.e. they don't have a standardized way of uniquely identifying things!).

The IoT is not the applications

I recently read an article by The Hammersmith Group in which they talk about plants asking to be watered using wireless sensors, wine racks that know which bottles are stored, and medicine bottles that issue warnings if the medicines are not taken on time. They titled this article “The Internet of things: Networked objects and smart devices”. What we see here is another common misuse of the Internet of Things, closely related to the pervasive computing issue described above. Think about somebody using Facebook or Google at the beginning of the 90's to describe what the Internet is. But it is worse, because although I'm sure we agree that Google is not the Internet, at least it is well accepted that it is an Internet-wide service. All these applications that many are describing as the IoT are just small services on an Internet-like scale. So, not only is it absurd to use Internet applications and services to describe the Internet itself, but it is even more illogical to refer to small applications that would have no real impact on a global Internet.

Saturday, March 13, 2010

Wireless Communication for the Internet of Things

On Thursday 11 March, the University of Surrey held an event called "Wireless Communications to Enable the Internet of Things". This event was organized by the Wireless Sensing Interest Group of the Sensors & Instrumentation KTN and the Electronics KTN. The sessions featured the views of industrialists and academics, and reports on existing deployments. I attended this event as a researcher from the University of Cambridge.

The event was organized around four sessions. The first session introduced the three main technologies that are considered the future of cellular communications, namely GPRS, 3G and LTE. The first two were introduced by ST-Ericsson, and the talk focused on the key considerations for a communication project. It concluded that 2G and 3G networks are now pervasive and will stay for many years, and that they are best suited for environments which do not require massive amounts of data transfer, such as machine-to-machine (M2M) applications. The second talk was given by NEC, and introduced LTE as the next-generation cellular technology that will enable the Internet of Things. LTE was compared against HSPA+, and some of NEC's ICT solutions based on this and other technologies were also presented.

The second session focused on issues in communications and location-based services. Two companies presented: Libelium, a Spanish wireless sensor network start-up, and HW Communications, a British company dealing with mobile wireless communications. Libelium compared the 2.4GHz, 868MHz and 900MHz bands for chips running ZigBee / IEEE 802.15.4. Some test results were presented and the best configurations for each band were outlined. HW Communications talked about location services, and the pros and cons of technologies such as cellular network positioning, triangulation and GPS.

The third session presented two implementations of short-range wireless networks. The first, from the company Zarlink Semiconductor, dealt with health-care implants and how to solve problems related to wireless communications. A solution based on a double-frequency, single-antenna implementation was presented, where a 2.4GHz link is used to wake up the device and the 402MHz MICS medical band is used for data communications. The second implementation was more of an academic introduction to energy storage technologies, by Imperial College London.

The fourth and final session focused on solutions for large-scale Wireless Sensor Networks. The first talk in this session introduced the implications of embedded devices and the Internet of Things in urban environments. The concept of the Web of Things was introduced as hyperlinks provided by physical objects, which is somewhat different from what other initiatives understand by Web of Things. The second and final talk in this session was about security and key management in Wireless Sensor Networks.

In my opinion, this event brought much about technologies and applications but little about what the Internet of Things is. It is probably not a good idea to put "Internet of Things" in the title without defining what is understood by it. Obviously, the event was oriented towards the belief that the IoT is the same as ubiquitous computing or similar terms, but I think it is quite clear that both concepts should not be the same, because otherwise there would be no need for a new term. Most of the talks were useful to understand the advances in technologies and how companies and academia are aligning towards the future of wireless communications, but I gained no new knowledge on how wireless communications are going to enable the IoT, which was the main title of the event. This is not a new feeling for me though, since I so often hear people talking about the IoT when they either don't know what the IoT is or what its implications are, or they assume that the IoT is something that everybody knows (when obviously, people don't know, or don't agree on the same definition). This event has therefore reinforced my belief that a globally agreed definition of what the Internet of Things is should be realised, or we will go deeper and deeper into the buzzword hole that I talked about a while ago.

Thursday, March 4, 2010

Fun with port forwarding and SSH

No matter if your office computer has a public IP address: if you are part of a big organization, that computer will most likely be behind some big firewall, effectively isolating the internal network from external connections. On many occasions, however, the system administrators will not completely disable access, and will enable a gateway machine as the means of bridging connections from the outside to the inside of the network. This gateway computer, nevertheless, tends to be quite limited, and most probably only allows secure connections via SSH (secure shell). Well, despair not, because giving you SSH access means that you can use that connection to piggyback connections to other services on any computer in the network, provided that the servers where those services are located also run an SSH server. This is normally called tunnelling and port forwarding.

Over the last couple of years I have lived with the inconvenience of having no access to my office computer. For some reason, I didn't care about this tunnelling business, or I thought it was rumours, or that it would be too difficult to implement. How wrong I was. Admittedly it has taken a couple of days, and I have probably only scratched the surface of what is possible. But now I am able to access not only my desktop computer remotely (with a display session and all), but also other services inside my work's network that I normally need for my work (e.g. database servers and subversion repositories). I will now briefly explain what it all boils down to.

Let's call the machines involved in this system client, gateway and target. We will use these names as the addresses of those machines as well. Obviously, the first thing you need is an SSH client on client, and an SSH client/server on the other two machines. You also need an account on all those machines. First, let's make a key for the SSH connection between client and gateway so that you will not need to enter a password every time:

client:~$ ssh-keygen

Leave all the values to default and use no password (simply press enter). Now, copy the key to gateway:

client:~$ ssh-copy-id -i .ssh/ user_gateway@gateway_address

Now you can just ssh to gateway without using a password. Next, you might want to connect directly to target without having to SSH into gateway and then from gateway to target. For this, you can set up a simple rule in your ssh_config file (normally in /etc/ssh/ssh_config, or per-user in ~/.ssh/config) to forward traffic directly to target through gateway:

Host symbolicName
HostName target
User user_target
ProxyCommand ssh user_gateway@gateway netcat -w 1 %h %p

Now you can run "ssh symbolicName" and you will be directly prompted for the password of your target machine. But how do you forward service ports? Well, with a command like this:

ssh -L port_client:target:port_target user_gateway@gateway

That command will take any connection requested on client to port port_client and will forward it to port port_target on target, using gateway as a bridge. So the key to getting basically anything from your target machine is to use localhost as the destination address when requesting the service, with port_client as the port (which obviously doesn't need to be the same port where the service is actually running on target). Confused? Right, it is confusing at the beginning. For example, let's say that you have an SVN repository inside your work network, and when you are at your office computer you use something like svn+ssh://machine_with_service/path_to_svn_repos as the URL to your service. On your computer outside of the work network, you would use svn+ssh://localhost/path_to_svn_repos, taking into consideration, of course, that the target machine we have been talking about is machine_with_service in the example. Another service you can run like this is, for example, NX client and server, so you can get a graphical session of your office machine at home. Isn't that great?
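If you use the same forwarding often, it can be made permanent in ssh_config instead of typing the -L flag every time. A minimal sketch, reusing the symbolicName entry from above; the port numbers are hypothetical (5432 being the usual PostgreSQL port, forwarded here to a spare local port 5433):

```
# In ~/.ssh/config -- equivalent to: ssh -L 5433:target:5432 ...
Host symbolicName
    HostName target
    User user_target
    ProxyCommand ssh user_gateway@gateway netcat -w 1 %h %p
    LocalForward 5433 target:5432
```

After running "ssh symbolicName", a database client on client can connect to localhost port 5433 and transparently reach port 5432 on target.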

All this can be done in Windows, if that happens to be the OS that you are using on your client. For that, you can use Putty. Putty has an option called "tunnelling", and you need to add your client's port and the target:target_port on this option tab for each service that you want forwarded. Then you only need to connect to the gateway machine in the usual way and there you go. Of course, the Putty window needs to remain open while you need port forwarding. More here and in many other places on the net.

And this is all. Have fun with port forwarding too!

Note: Many thanks to Hugo for teaching me most of this!

Monday, February 15, 2010


Buzzwords are a disease. They take over beautiful concepts and contaminate them with fabricated hype and empty promises. They spread their influence, first as innocent curiosity about innovative ideas and then as viral gibberish eager to be ignored and forgotten. Buzzwords eventually kill the concept, flooding the public with excessive and misleading publicity, until nobody wants to know or cares what it all actually meant. And so dreams die, suffocated by the burden of what could have been, wasn't, and was pretended to be.

Buzzwords are, however, necessary. They attract attention and make people care. Without attention and care there are no desires to know more. Without those desires there is no investment, and without investment there is no progress. Without progress, concepts eventually die and are forgotten.

In the world of information and communication technologies, buzzwords create a delicate balance between increasing interest in a new research area and the road to damnation of unreachable funding. Interestingly, it seems impossible to control the buzzword influence, and as the buzz increases, news, conferences, blogs, books and interest groups spring up, eager to get a piece of the hype pie, spinning off new publicity waves that feed back into the system in an apparently infinite loop. Worrying about the buzzword effect seems, therefore, useless. If it shall kill, it will, and should you find yourself in the middle of the storm, there is little to be done other than to try to secure funding while you can. With luck, that will give you some experience, publications and even prestige, and will help you move towards the next research topic in your academic career.

Although concern about the buzzed concept's destiny seems futile, one can venture to identify certain attributes that signal a disaster to come. The most powerful but deadly buzzwords are those that are too general to actually mean anything, letting the public imagine what the concept behind them is. Wide, abstract ideas may seem beneficial at first because they increase the interest base with hardly related, imaginative concepts. However, this fuzziness will eventually dilute the original meaning, propelling a devastating wave of public disappointment, either because what people thought it was, it was not, or because what they thought it was could never be realised, weakened by the adulteration of uncontrolled brainstorming. I would therefore advise caution: don't let yourselves be carried away by the excitement of the public using buzzwords related to your research. Maintain your pure vision of what you have imagined, and argue against those who attempt to throw everything into the same sack. Don't succumb to the temptation of adulterating your ideas to gain traction in the community, for in the long run that will bring counter-productivity and eventual apocalypse.

Since I purposely avoided naming specific buzzwords that match my argumentation, I invite the reader to name some. What are the ICT buzzwords that you have come to love and hate?