Can Smart Contracts Provide a Cheaper Security Solution for IoT?

An article this week referred to the correlation between IoT security and the quality of a connected device: low quality correlates with very poor or non-existent security, while high quality correlates with good security.

This is, of course, a truism for most people, but its consequences are rarely made explicit.

Low-security IoT is no longer a risk just for the buyer; it is a risk for everyone else. If I buy a low-quality thermostat, I am no longer just faced with the possibility that it will not work because it is cheap; other people are faced with the danger that it will go rogue and participate in a botnet surge against a security expert.

Now the real question is whether individuals and companies in the future, for the sake of anti-botnet altruism, will choose higher-quality connected devices. No, of course not. (That said, there may be an interesting legal argument against manufacturers who fail to secure their connected devices when those same devices are taken over by botnets.)

The only way to solve this impending security plague caused by low-quality, insecure devices is to find a cheaper way of achieving much higher security.

So the question we ask at Chain of Things is whether smart contracts can offer a lifeline by providing cheaper and better security for IoT.

The main feature of a smart contract is that it is immutable code running on a shared network, and it is widely purported that the transparency and immutability of smart contract code can deliver security. In the next section we look at how valid that claim is, and whether smart contract code can provide a cheaper solution for security.


Is smart contract code really transparent?

A smart contract is code published on a distributed ledger that performs certain functions when prompted. The source code of the smart contract is not, by default, visible to everyone on the distributed ledger network; the network can only see the bytecode compiled from the source code. So it is not completely true that the functions of a smart contract are readable and, therefore, predictable. If you wrote a smart contract to control access to a security camera and published only the compiled contract, the reality is that no one would be able to vet the code.

However, you probably would not trust a smart contract that was not open source. By making the source code open source, you allow a community to verify the functions of the contract before it is published. So you can argue that the benefits of transparency in smart contract coding are only available if the source code is shared in addition to the compiled code being deployed to a shared network. For corporations, however, this level of transparency is possibly unwelcome.

Open-sourcing code lets a community stress-test and vet the functions of the code, but that is not always enough. Formal verification of smart contract code is required to ensure that it behaves in a certain way with no exceptions. A recent paper published by Microsoft demonstrates a formal verification method for Solidity code.


Paying network fees

On the Ethereum network you must pay miners to use the Ethereum Virtual Machine (i.e. the CPU of everyone in the network). Will people interacting with a connected fridge or connected sensor be willing to pay a network fee to interact with the device? That is not yet clear.
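To make the fee question concrete, here is a back-of-the-envelope sketch in Python. The gas usage and gas price figures are illustrative assumptions, not live network values:

```python
# Rough estimate of the fee a user pays per contract call on Ethereum.
# Fee = gas consumed by the call * price bid per unit of gas.
# All numeric figures below are assumptions for illustration only.

WEI_PER_ETHER = 10**18
WEI_PER_GWEI = 10**9

def fee_in_ether(gas_used: int, gas_price_gwei: int) -> float:
    """Convert a call's gas consumption and gas price into a fee in ether."""
    return gas_used * gas_price_gwei * WEI_PER_GWEI / WEI_PER_ETHER

# Suppose unlocking a connected device costs ~50,000 gas at 20 gwei per gas:
fee = fee_in_ether(gas_used=50_000, gas_price_gwei=20)
print(f"{fee:.4f} ETH per interaction")  # 0.0010 ETH per interaction
```

Multiply the result by the ether price of the day to see what each device interaction costs in fiat; whether users will tolerate that per-tap fee is exactly the open question above.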

Pilot Phase

Every smart contract needs a pilot phase in which it is released into the wild with the specific goal of being hacked. Although you may experiment on a testnet to try to determine the behavior of a smart contract, you will never know for sure where the vulnerabilities are until it goes live. One idea from the Chain of Security Forum was a separate smart contract that pays bug bounties when one of its (slave) smart contracts is hacked. The challenge with a bug bounty smart contract is that the value of the bounty usually depends on the severity of the bug uncovered. The Smart Bug Contract would therefore need a supplementary bounty in the beginning, possibly one that pays out a fixed fee automatically when a GitHub pull request has been accepted. In any case, the pilot phase of a smart contract needs to incentivize hackers enough to break it early and identify any 'zero-day' defects (manufacturer defects) in the code.
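A minimal sketch of that fixed-fee bounty idea, written in Python for illustration. The class name, the pot and fee figures, and the mechanism that confirms an accepted pull request are all assumptions, not an existing implementation:

```python
# Toy model of a bounty contract: it escrows a pot of funds and pays a
# fixed fee automatically once a fix (e.g. an accepted pull request) is
# confirmed. Who confirms the acceptance -- the oracle -- is assumed away.

class BountyContract:
    def __init__(self, pot: int, fixed_bounty: int):
        self.pot = pot                  # funds escrowed for bounties
        self.fixed_bounty = fixed_bounty
        self.paid = {}                  # reporter -> total paid out

    def report_accepted(self, reporter: str) -> int:
        """Pay the fixed bounty once a reporter's fix is confirmed."""
        if self.pot < self.fixed_bounty:
            raise RuntimeError("bounty pot exhausted")
        self.pot -= self.fixed_bounty
        self.paid[reporter] = self.paid.get(reporter, 0) + self.fixed_bounty
        return self.fixed_bounty

contract = BountyContract(pot=100, fixed_bounty=10)
contract.report_accepted("alice")
print(contract.pot)  # 90
```

In practice the hard part is the oracle step: something trusted has to tell the contract that the pull request really was accepted, which is exactly where severity-based bounties become difficult to automate.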

No room for error when the blockchain is irreversible

Another challenge with smart contracts acting as controllers for connected devices is that there is generally no room for error. A smart contract programmer cannot write code on the assumption that the contract can simply be shut down and patched if a security flaw is found later. To an extent, the code needs to be close to error-free by the time it is broadcast to the network.

That does not mean you cannot have a 'kill switch' on the contract or, more generally, a design that allows for 'updates' to the smart contract's code. (The updating function is not quite the same as with normal code, because smart contract code is immutable; 'superseding' is a more accurate description, as an entirely new smart contract is deployed with the fixed code.)
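One way to realize that superseding design is a registry pattern: devices look up the address of the current contract rather than hard-coding it. The sketch below models the idea in Python; the class names and the owner check are illustrative assumptions:

```python
# Sketch of the 'superseding' pattern: the flawed contract is never edited
# (deployed code is immutable); it is disabled via a kill switch and a
# registry is repointed at a freshly deployed replacement.

class Contract:
    def __init__(self, address: str, owner: str):
        self.address = address
        self.owner = owner
        self.killed = False

    def kill(self, caller: str) -> None:
        """Kill switch: only the owner may permanently disable the contract."""
        if caller != self.owner:
            raise PermissionError("only the owner may disable the contract")
        self.killed = True

class Registry:
    """Devices resolve the live contract here instead of hard-coding it."""
    def __init__(self, current: Contract):
        self.current = current

    def supersede(self, caller: str, replacement: Contract) -> None:
        self.current.kill(caller)   # retire the flawed version...
        self.current = replacement  # ...and point at the fixed deployment

registry = Registry(Contract("0xAAA", owner="admin"))
registry.supersede("admin", Contract("0xBBB", owner="admin"))
print(registry.current.address)  # 0xBBB
```

The design trade-off is that devices now trust a mutable pointer (the registry) rather than an immutable address, giving up a little of the blockchain's guarantee in exchange for the ability to recover from a flaw.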

However, what is different with a smart contract on a public network is that the consequences of something going wrong can be far more difficult to unwind. Here is an example that relates to our Chain of Lading Case Study:

Take a Smart Bill of Lading that controls tokens representing units of goods being shipped. A buyer of the goods in transit sends a payment to the Smart Bill of Lading and, in return, the Smart Bill of Lading pays a token to the buyer. The buyer's public account (public cryptographic address) is registered with the Smart Bill of Lading, and the buyer is now entitled to that one unit of goods. Now imagine there is a recursive loop, as in the DAO hack, that allows the buyer to keep withdrawing tokens from the Smart Bill of Lading. By the time the contract's administrator disabled it with a forced state change, the buyer would probably already have sold the tokens to other bona fide purchasers.
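The flaw is easier to see in code. Here is a toy model in Python (the class and variable names are illustrative; a real contract would be written in Solidity) of the recursive-withdrawal pattern: the contract makes the external transfer before updating its own state, so a malicious buyer's callback can re-enter withdraw() and drain tokens it is not owed:

```python
# Toy model of the DAO-style recursive withdrawal. The contract pays out
# (an external call) BEFORE updating its bookkeeping, so the attacker's
# receive() callback can re-enter withdraw() while the old balance stands.

class VulnerableLading:
    def __init__(self, tokens: int):
        self.tokens = tokens   # tokens held by the Smart Bill of Lading
        self.owed = {}         # buyer address -> units the buyer is owed

    def withdraw(self, buyer, amount: int):
        if self.owed.get(buyer.name, 0) >= amount:
            buyer.receive(self, amount)      # external call FIRST...
            self.owed[buyer.name] -= amount  # ...state updated LAST (bug)
            self.tokens -= amount

class Attacker:
    def __init__(self, name: str, reentries: int):
        self.name, self.reentries, self.haul = name, reentries, 0

    def receive(self, contract, amount: int):
        self.haul += amount
        if self.reentries > 0:               # re-enter before state update
            self.reentries -= 1
            contract.withdraw(self, amount)

lading = VulnerableLading(tokens=10)
thief = Attacker("mallory", reentries=4)
lading.owed["mallory"] = 1                   # entitled to only 1 token
lading.withdraw(thief, 1)
print(thief.haul)  # 5 -- the original withdrawal plus four re-entries
```

The standard fix is the checks-effects-interactions ordering: decrement the `owed` balance and the token count before making the external call, so any re-entrant withdrawal fails the balance check.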

So the finality (or non-reversibility) of blockchain technology is a challenge for smart contracts. The level of smart contract engineering responsibility is thus not dissimilar to that of NASA programmers or programmers of utilities such as power stations. A slight exaggeration, perhaps, but the main point is that the chain of consequences after a smart contract hack may be incredibly difficult or very expensive to unwind or resolve. That, of course, raises an almost philosophical question, one much discussed at the moment: is irreversibility a feature or a flaw of the blockchain? Brian Kelly believes it is a feature; Accenture believes it is a flaw.

Making Smart Contracts Scalable may be the secret to lowering the cost of securing IoT

It is unlikely that companies will write their own smart contracts for each device they deploy to the blockchain. More likely, smart contracts will become formulaic, and a healthy GitHub community (including all stakeholders) will invest collectively in the effort of making one unbreakable contract that can be used by many, thus keeping down the deployment cost per contract.

In conclusion, it is not yet clear whether smart contracts can offer a cheaper form of security for connected devices; the jury is still out. But one thing is for sure: the more junky IoT devices are distributed around the world, with no in-built security or even a notion that the software will ever be updated, the faster the problem will snowball into a vulnerability nightmare that remains a cheap resource for hackers building zombie-device botnets. To make IoT security a given, we need to make it cheaper, simpler, and more open.