The Koch snowflake

Can you imagine a closed figure that has a finite area but an infinite perimeter? Well, I have the Koch snowflake for you…

The Koch snowflake (also known as the Koch star and Koch island) is a mathematical curve and one of the earliest fractal curves to have been described.

The Koch curve can be constructed by starting with an equilateral triangle, then recursively altering each line segment as follows:

  1. divide the line segment into three segments of equal length.
  2. draw an equilateral triangle that has the middle segment from step 1 as its base and points outward.
  3. remove the line segment that is the base of the triangle from step 2.

After one iteration of this process, the result is a shape similar to the Star of David.

The Koch curve is the limit approached as the above steps are followed over and over again.
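The finite-area, infinite-perimeter claim can be checked numerically. Below is a short sketch (using the standard formulas, not code from this post) that tracks both quantities for a snowflake built on a unit triangle: the perimeter grows by a factor of 4/3 per iteration, while the area converges.

```python
import math

def snowflake_stats(iterations, side=1.0):
    """Perimeter and area of the Koch snowflake after n iterations."""
    num_sides = 3
    length = side
    perimeter = num_sides * length
    area = math.sqrt(3) / 4 * side ** 2   # the starting equilateral triangle
    for _ in range(iterations):
        # every segment sprouts one new equilateral triangle of 1/3 the side
        new_length = length / 3
        area += num_sides * (math.sqrt(3) / 4) * new_length ** 2
        num_sides *= 4                    # each segment becomes 4 segments
        length = new_length
        perimeter = num_sides * length    # grows by 4/3 every iteration
    return perimeter, area

for n in (0, 1, 2, 10):
    p, a = snowflake_stats(n)
    print(f"iteration {n}: perimeter = {p:.3f}, area = {a:.5f}")
```

The perimeter after n iterations is 3·(4/3)^n, which diverges, while the area stays below 2√3/5 ≈ 0.693 for a unit side.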

To implement the Koch snowflake in Python, I first had to implement LOGO, the language famous for teaching children to program. So, what is LOGO?

Ans) The LOGO language, originally developed at MIT, was made to teach children how to program. It has very few instructions and is based on making a “turtle” (a pointer on the screen) move according to the instructions given by children. While the turtle crawls over the screen it can be in one of two modes: pen down (which makes the turtle draw the path it follows) and pen up (which lets the turtle crawl without leaving a trace on the screen).
The instructions for making pictures in this way are very simple to learn. Essentially they are: move forwards (a given number of steps), turn left (a given angle), turn right (a given angle) and finally the useful “repeat” instruction, to create repeated images which include beautiful drawings.
The language neatly teaches some basic operations used in any programming language, in particular the useful “repeat” operation. Hence children pick up the basics of programming while playing. LOGO can be used to draw various geometric figures, and children quickly pick up how to draw with it.
A bonus of learning LOGO is that it has been used in a variety of robot toys. You don’t need to buy these to teach your children to program; the free LOGO version suffices. Of course, the robot toys are great fun as well.
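The turtle model described above is easy to capture in Python. Here is a minimal, hypothetical sketch (not the actual LOGO implementation used in this post) that tracks position, heading and pen state:

```python
import math

class Turtle:
    """Tiny LOGO-style turtle: tracks position, heading and pen state."""
    def __init__(self):
        self.x, self.y = 0.0, 0.0
        self.heading = 0.0          # degrees, anti-clockwise from east
        self.pen = True             # pen down by default
        self.path = []              # line segments drawn so far

    def pen_up(self):
        self.pen = False

    def pen_down(self):
        self.pen = True

    def forward(self, steps):
        rad = math.radians(self.heading)
        nx = self.x + steps * math.cos(rad)
        ny = self.y + steps * math.sin(rad)
        if self.pen:
            self.path.append(((self.x, self.y), (nx, ny)))
        self.x, self.y = nx, ny

    def left(self, angle):
        self.heading += angle

    def right(self, angle):
        self.heading -= angle

t = Turtle()
t.forward(100)      # draws a segment
t.left(90)
t.pen_up()
t.forward(50)       # moves without drawing
print(len(t.path))
```

Pen up/down simply toggles whether `forward` records a segment; a real implementation would hand the recorded segments to a canvas for rendering.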

Once LOGO was implemented, the next step was to use LOGO to draw the required Curve.

The basic idea in drawing a Koch curve is that the same pattern repeats itself. Hence the approach: if I iterate the same pattern again and again, I get a Koch curve of the required complexity. The wiki page for the Koch curve gave me the following:

The Koch Curve can be expressed by a rewrite system

Alphabet : F
Constants : +, −
Axiom : F++F++F
Production rules:
F → F−F++F−F

Here, F means “draw forward”, + means “turn right 60°”, and − means “turn left 60°”
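Such a rewrite system takes only a few lines of Python: repeatedly replace every symbol by its production, leaving the constants + and − alone. This is a generic sketch, not the program from this post:

```python
def expand(axiom, rules, iterations):
    """Apply L-system production rules to the axiom `iterations` times."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# the Koch curve rewrite system from above
rules = {"F": "F-F++F-F"}           # + and - rewrite to themselves
print(expand("F++F++F", rules, 2))  # feed the result to a turtle to draw
```

Each F expands into four F’s per iteration, so the string (and the curve’s perimeter) grows geometrically.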

Here a rewrite system basically means replacing the subterms of a formula with other terms. The axiom tells us to go forward, rotate 120°, go forward again, rotate 120°, and go forward once more. That exact scheme didn’t work for me directly, but it did point me in the right direction, so I created my Koch snowflake using the following program…

The initial 7 lines of commands move the turtle to a spot at the left side of the canvas so that the whole curve fits in the canvas.

The basic commands are:

t.penUp() : moves the turtle without drawing a line.

t.penDown() : moves the turtle, leaving a trail.

t.moveForward(x) : moves the Turtle forward x steps.

t.rotateTurtle(n) : rotates the turtle n° in the anti-clockwise direction.

t.hide() : hides the turtle (a corresponding command unhides the turtle).

The program is very simple. It has a basic pattern of MRMRM to draw a triangle, and a Koch curve is formed for a given number of iterations if I replace each M (move forward) with MLMRMLM, i.e. a command sequence that splits the straight line to make way for two new lines. So after the first iteration, a line like this:

will look like this.

Then for the second iteration, each of the lines will be replaced by a pattern similar to the second figure. After 6 iterations, my snowflake looked like this:
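The M → MLMRMLM replacement described above maps directly onto a recursive function. The sketch below assumes a turtle object exposing the moveForward and rotateTurtle commands listed earlier; the recursion itself is my reconstruction, not the author’s exact program:

```python
def koch_segment(t, length, depth):
    """Draw one side: M expands to M L M R M L M at each level.
    `t` is assumed to expose moveForward(x) and rotateTurtle(n),
    where n degrees is an anti-clockwise rotation."""
    if depth == 0:
        t.moveForward(length)
        return
    koch_segment(t, length / 3, depth - 1)
    t.rotateTurtle(60)            # L: turn left 60 degrees
    koch_segment(t, length / 3, depth - 1)
    t.rotateTurtle(-120)          # R: turn right 120 degrees
    koch_segment(t, length / 3, depth - 1)
    t.rotateTurtle(60)            # L: turn left 60 degrees
    koch_segment(t, length / 3, depth - 1)

def koch_snowflake(t, length, depth):
    """The MRMRM triangle pattern, with each M expanded recursively."""
    for _ in range(3):
        koch_segment(t, length, depth)
        t.rotateTurtle(-120)      # R: turn right 120 degrees
```

With depth 0 this draws the plain triangle; each extra level of depth applies the MLMRMLM split once more to every segment.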

Once I was done, I wanted more, so the wiki page for the Koch snowflake gave me two new ideas. I’m giving those functions here as well, along with the patterns they generate.

As you can see, a small variation to the original function gave two splendid curves, two variants of the Koch snowflake.


Posted on September 18, 2010 in Python



Samsung Google Nexus Prime

    Nexus Prime:

The next version of Google’s Nexus handset is rumoured to be called the Nexus “Prime” and also the “Galaxy Nexus”. Whatever its name, it’ll be made by Samsung and just might officially be announced at the upcoming CTIA event in October. There are a couple of clues pointing towards this possibility. Samsung has scheduled its next “Unpacked” event for October 11th, and under the awards section, in the “Gamer & Entertainment Enthusiast” category, it shows there will be “A product to be announced from Samsung Mobile”.

The 3rd Nexus device will run Android OS 4.0 Ice Cream Sandwich (also rumoured to be released in October) and come with a 4.65-inch display with 720p HD resolution and a 1.5 GHz dual-core processor – which is good for gaming… so all the basic specs are covered, and all that’s missing now are the usual leaks and the official word.

Source: CTIA

Nexus Prime is expected to run Android Ice Cream Sandwich and feature a Super AMOLED HD display with 720p resolution. Prime’s display will include a 4.5-inch panel with a PenTile layout, according to a report from BGR.

Nexus Prime will be powered by a TI OMAP4460 chipset clocking at 1.5GHz. The report added that Nexus Prime won’t feature physical Android menu buttons below the screen anymore as everything will be software-based.

In other hardware, the camera will be 5 megapixels, but the sensor is expected to be improved for higher quality images and better low-light conditions. Like many Android handsets on the market, the Nexus Prime will offer a front-facing 1-megapixel camera for video chat.

The 5-megapixel camera, however, is said to sport an advanced sensor delivering top image quality in addition to superior low-light performance. Moreover, the device would have 4G LTE support and at least 1GB of RAM, the report added.

Google is said to be aiming to launch the device around Thanksgiving.

Samsung Mobile Unpacked Event Ice Cream Sandwich, Nexus Prime announcements on Oct 11?

We are all waiting eagerly for Google’s next Android release – Ice Cream Sandwich – and to our surprise Samsung has just started sending invites for its Unpacked event scheduled for October 11. This particular event is being organized in partnership with Google, aptly called the Google Episode.

As you can see, the invite states “Join us at Samsung Mobile Unpacked 2011 to get a look at what’s new from Android”, which is a clear indication of an ICS announcement.

We will also see a Nexus device (probably the Nexus Prime) announcement from Samsung; otherwise there would be no point in Samsung being present at the event. It is likely that Google and Samsung will announce ICS and the next Nexus phone with pre-orders starting the same day and shipping in November.

To add more credibility to all these theories, this event will be live streamed on the official Android YouTube account. Why would Google stream a Samsung event on the official Android channel unless it is indeed an ICS announcement?


Posted on September 29, 2011 in Tech'KNOW'logy



iPhone 5



Al Gore, former Vice President of the United States and current Apple board member, stated that new Apple iPhones will be released next month. This was said during a speech he gave at the Discovery Invest Leadership Summit. We aren’t sure whether he is referring to one new iPhone model on multiple carriers, or two new iPhone models. Either way, this is more good news that further backs up the October iPhone 5 release rumors.

We’re expecting Apple to hold an event on Tuesday, October 4th to announce the next generation iPhone. Sales are expected to start shortly after the phone is announced, with the most likely date being Friday, October 14th.

Since we have yet to hear any official news from Apple, nobody knows exactly what they are planning. There are too many rumors and reports contradicting each other, but as of now we aren’t expecting the iPhone 5 to have much more than an upgraded camera and an A5 chip. We are still completely in the dark regarding its physical appearance, but we think it’s very unlikely they will keep it the same as the iPhone 4.

Over the past few weeks, Apple has been denying vacation requests from employees for the second week of October. There is much speculation that Apple is doing this in anticipation of a large increase of customers because of the release of iOS5 and the iPhone 5.

A few sources that are familiar with the matter have said that Apple has blacked out vacation time for two date ranges: October 9th to October 12th and October 14th to October 15th.

It is believed that the first date range could be when Apple releases iOS5 for existing iPhone and iPod users. This comes after AppleInsider reported that some sections of AppleCare have already been told to prepare for a large increase in the number of iOS5 inquiries during that range.

Further fueling the fire is Twitter releasing a pair of “Developer Teatimes” on October 10th and October 12th. The main focus is going to be on Twitter’s integration with iOS5.

Finally, Apple has a history of releasing new operating software immediately before a new release of an iPhone. For example, Apple released iOS4 on June 21st, 2010 and then rolled out the iPhone 4 only three days later.

With this information available, it is highly likely that the second date range will be the release of the iPhone 5. Rumors have mentioned that Apple plans to hold a media event announcing the iPhone 5 on October 4th, which would give Apple ten days to fill the large number of preorders that will most likely come in before the official launch.

Apple sent out numerous invitations to members of major media outlets for a media event that is scheduled for October 4th. Tim Cook, the Chief Executive of Apple, is set to lead the meeting. The meeting is currently set for 10AM Pacific Time. The invitations were sent out with the headline “Let’s talk iPhone” so we know there will be some major next-generation iPhone discussion happening.


The invitation itself was composed of icons from the operating system that Apple uses on its products. The invitation included a calendar with the date, a clock displaying the time, the Maps icon which set the location at Apple’s Cupertino headquarters, and the Phone icon which displayed one missed call.


This comes a week after it was first leaked that Apple was blacking out vacation days for its employees during the second week of October. It has been rumored that the new iPhone will come out a week after next week’s meeting.


There are many rumors as to what the iPhone will look like and what functions it will have. There has been some speculation that the iPhone will be completely redesigned but other rumors have the new iPhone being almost identical to the iPhone 4.


However, with the numerous leaks over the past few months, it is more likely that the new iPhone will look much different from the older iPhones. With Apple’s track record, though, you never know what they are going to do. All we can do is wait and see what Apple has in store for us again.




Posted on September 28, 2011 in Tech'KNOW'logy



iPhone 5 vs Nexus Prime Release and Specs..

This is rightfully the clash of the titans…

Apple’s tight-lipped approach towards its iPhone 5 release has fueled rumors on its release date and specs that will define the next step in the iPhone’s evolution.


    iPhone 5:

Apple sent out invitations to the press on Tuesday, announcing that it will hold its next iPhone event on October 4. It is expected the company will launch the iPhone 5 or the iPhone 4S – or both – at the event.

Though there is no official confirmation that two models of iPhone will be launched, analysts believe that to counter adoption of Android, Apple will offer a cheaper version of iPhone for developing markets. Recently, Al Gore, a former vice president of the US and a member of Apple’s board, said that “new iPhones” will launch in October.

It is expected that the iPhone 5 will pack a dual-core processor, an 8-megapixel camera and 1GB of RAM. Keeping in line with Apple’s attempt to slim down devices with each new release, it may eschew the glass and metal frame of the iPhone 4 for a sleeker unibody aluminum shell. Some analysts believe that to take on the big Android smartphones that have become increasingly popular over the last year, Apple may also bump up the screen size in its next iPhone.

Two days ago, reports surfaced that the next iPhone will come with a feature called Assistant that will enable a number of voice-based commands and allow users to “talk” to their phone. Apple has titled the October 4 event “Let’s talk iPhone,” something that hints at the possibility of Assistant.

The next iPhone will be powered by iOS 5 and Apple is expected to announce the availability of the OS on October 4. The new OS also supports iPhone 4 and iPhone 3GS.

The October 4 event will be Apple’s first media event after Steve Jobs resigned from the CEO post. Hence the keynote address, made famous by Jobs, will likely be delivered by current CEO Tim Cook.

The iPhone 5 is expected to run an upgraded operating system, iOS 5, and to be supported by a strong ecosystem including iCloud. iCloud will store photos, apps, calendars and documents without storing them in the phone’s memory. Apple also placed itself in the top position of the cloud war by inking deals with top music labels to license songs for the iCloud service.

On the other hand, Google is readying its next iteration of Android mobile operating system – Android Ice Cream Sandwich. The phones that would feature this version of Android are expected to be a tough competitor to Apple’s iPhone 5.

If the rumor mills are right, Samsung and Google are readying a “killer” device under the name Nexus Prime, which could give the iPhone 5 a run for its money in terms of both OS and specifications.

      Nexus Prime:

The next version of Google’s Nexus handset is rumoured to be called the Nexus “Prime” and also the “Galaxy Nexus”. Whatever its name, it’ll be made by Samsung and just might officially be announced at the upcoming CTIA event in October. There are a couple of clues pointing towards this possibility. Samsung has scheduled its next “Unpacked” event for October 11th, and under the awards section, in the “Gamer & Entertainment Enthusiast” category, it shows there will be “A product to be announced from Samsung Mobile”.

The 3rd Nexus device will run Android OS 4.0 Ice Cream Sandwich (also rumoured to be released in October) and come with a 4.65-inch display with 720p HD resolution and a 1.5 GHz dual-core processor – which is good for gaming… so all the basic specs are covered, and all that’s missing now are the usual leaks and the official word.

Source: CTIA

Since both are expected to arrive in October, an epic battle looms.

Let’s look below at the speculated features of both the iPhone 5 and the Nexus Prime.

iPhone 5

Apple is expected to release the iPhone 5 as a “World Phone” – both GSM and CDMA compatible. It is conjectured that the phone will have a SIM-less design with 3-4 antennas. Another rumor suggested that the iPhone 5 will feature a SIM card slot for countries other than the U.S. This will allow users to insert any SIM card in the iPhone when traveling abroad.

Apple already improved the camera department in the iPhone 4 by fitting a 5-megapixel camera with LED flash; the iPhone 5 is expected to sport an 8-megapixel camera.

There is also near-unanimity in gadget circles that Apple will bring its A5 chip to the iPhone 5. The A5 is the same processor Apple rolled out to power the iPad 2. Apple may boost the speed of the A5 to 1.2 or 1.5 GHz, as several Android smartphones are shipping with 1.2 GHz processors.

Apple may fit a near-field communication chip in the iPhone 5 as some high-end Android devices are expected to sport the NFC chip. NFC allows for simplified transactions, data exchange, and connections with a touch. A smartphone or tablet with an NFC chip could make a credit card payment or serve as keycard or ID card.

Apple is rumored to increase the screen size of iPhone 5 to compete with Android smartphones, probably going for a 4-inch screen. The iPhone 4 has a 3.5-inch screen.

Apple’s iPhone 5 is expected to feature the new iCloud service, giving iTunes wireless remote access to music from all of a user’s computers and mobile devices.

Nexus Prime

Nexus Prime is expected to run Android Ice Cream Sandwich and feature a Super AMOLED HD display with 720p resolution. Prime’s display will include a 4.5-inch panel with a PenTile layout, according to a report from BGR.

Nexus Prime will be powered by a TI OMAP4460 chipset clocking at 1.5GHz. The report added that Nexus Prime won’t feature physical Android menu buttons below the screen anymore as everything will be software-based.

In other hardware, the camera will be 5 megapixels, but the sensor is expected to be improved for higher quality images and better low-light conditions. Like many Android handsets on the market, the Nexus Prime will offer a front-facing 1-megapixel camera for video chat.

The 5-megapixel camera, however, is said to sport an advanced sensor delivering top image quality in addition to superior low-light performance. Moreover, the device would have 4G LTE support and at least 1GB of RAM, the report added.

Google is said to be aiming to launch the device around Thanksgiving.


Posted on September 28, 2011 in Tech'KNOW'logy



CyBernetic ORGanisms : The Future?

• Robot
It’s common to depict a robot in human form, with arms, legs and a head. In real life, though, robots often take more functional forms in order to fulfill their task, which could range from gathering moon rocks to defusing bombs.
A robot is fully mechanized, without any organic parts. It is always controlled by an outside source, like a computer or human.
The word robot is derived from the Czech word “robota”, meaning labor. When the word first appeared in Rossum’s Universal Robots, a robot was a manufactured living thing. While this organic side to robots has disappeared, the idea of them being created for labor purposes is still the most popular.

• Android
An android is sometimes considered to be a robot resembling a human. An android is usually independent, not controlled externally.

• Cyborg
A cyborg is a mix of organic and inorganic parts. As posed in class, most of us could be considered cyborgs because of our reliance on technology.
Traditionally, a cyborg is a human with a robotic implant, like Darth Vader.

A cyborg is a cybernetic organism (i.e., an organism that has both artificial and natural systems). The term was coined in 1960 when Manfred Clynes and Nathan Kline used it in an article about the advantages of self-regulating human-machine systems in outer space. D. S. Halacy’s Cyborg: Evolution of the Superman in 1965 featured an introduction which spoke of a “new frontier” that was “not merely space, but more profoundly the relationship between ‘inner space’ to ‘outer space’ -a bridge…between mind and matter.” The cyborg is often seen today merely as an organism that has enhanced abilities due to technology, but this perhaps oversimplifies the category of feedback.
Fictional cyborgs are portrayed as a synthesis of organic and synthetic parts, and frequently pose the question of difference between human and machine as one concerned with morality, free will, and empathy. Fictional cyborgs may be represented as visibly mechanical (e.g. the Cybermen in the Doctor Who franchise or The Borg from Star Trek); or as almost indistinguishable from humans (e.g. the “Human” Cylons from the re-imagining of Battlestar Galactica). The 1970s television series the Six Million Dollar Man featured one of the most famous fictional cyborgs. Cyborgs in fiction often play up a human contempt for over-dependence on technology, particularly when used for war, and when used in ways that seem to threaten free will. Cyborgs are also often portrayed with physical or mental abilities far exceeding a human counterpart (military forms may have inbuilt weapons, among other things).
Real (as opposed to fictional) cyborgs are more frequently people who use cybernetic technology to repair or overcome the physical and mental constraints of their bodies. While cyborgs are commonly thought of as mammals, they can be any kind of organism.

Kevin Warwick is Professor of Cybernetics at the University of Reading, England, where he carries out research in artificial intelligence, control, robotics and biomedical engineering. He is also Director of the University KTP Centre, which links the University with Small to Medium Enterprises and raises over £2Million each year in research income for the University.
Kevin was born in Coventry, UK and left school to join British Telecom, at the age of 16. At 22 he took his first degree at Aston University, followed by a PhD and a research post at Imperial College, London. He subsequently held positions at Oxford, Newcastle and Warwick universities before being offered the Chair at Reading, at the age of 33.
He has been awarded higher doctorates (DScs) both by Imperial College and the Czech Academy of Sciences, Prague. He was presented with The Future of Health Technology Award from MIT (USA), was made an Honorary Member of the Academy of Sciences, St. Petersburg and received The IEE Achievement Medal in 2004. In 2000 Kevin presented the Royal Institution Christmas Lectures, entitled “The Rise of The Robots”.
Kevin has carried out a series of pioneering experiments involving the neuro-surgical implantation of a device into the median nerves of his left arm in order to link his nervous system directly to a computer in order to assess the latest technology for use with the disabled. He has been successful with the first extra-sensory (ultrasonic) input for a human and with the first purely electronic communication experiment between the nervous systems of two humans. His research has been discussed by the US White House Presidential Council on BioEthics, The European Commission FTP and has led to him being widely referenced and featured in academic circles as well as appearing as cover stories in several magazines – e.g. Wired (USA), The Week (India).
His work is now used as material in several advanced Level Physics courses in the UK and in many University courses including Harvard, Stanford, MIT & Tokyo. His implants are on display in the Science Museums in London and Naples. As a result, Kevin regularly gives invited Keynote presentations around the world at top international conferences.
Kevin’s research involves robotics and he is responsible (with Jim Wyatt) for Cybot, a robot exported around the world as part of a magazine “Real Robots” – this has resulted in royalties totalling over £1M for Reading University. Robots designed and constructed by Kevin’s group (Ian Kelly, Ben Hutt) are on permanent interactive display in the Science Museums in London, Birmingham and Linz.
Kevin is currently working closely with Dr Daniela Cerqui, a social and cultural anthropologist to address the main social, ethical, philosophical and anthropological issues related to his research into robotics and cyborgs.
Kevin regularly makes international presentations for the UK Foreign Office and the British Council, e.g. in 2004/5: India, New Zealand, Singapore, Malaysia, China, Spain, Czech Rep., USA and Hong Kong.
His presentations include The 1998 Robert Boyle Memorial Lecture at Oxford University, The 2000 Royal Institution Christmas Lectures, The 2001 Higginson Lecture at Durham University, The 2003 Royal Academy of Engineering/Royal Society of Edinburgh Joint lecture in Edinburgh, The 2003 IEEE (UK) Annual Lecture in London, The 2004 Woolmer Lecture at York University, the Robert Hooke Lecture (Westminster) in 2005, the 2005 Einstein Lecture in Potsdam, Germany and the 2006 IMechE Mechatronics Prestige Lecture in London.
Kevin was a member of the 2001 HEFCE (unit 29) panel on Electrical & Electronic Engineering, is Deputy Chairman for the same panel in the 2007/8 exercise and is a member of the EPSRC Peer College. He has produced over 400 publications on his research including more than 90 refereed journal articles and 25 books. Kevin received the EPSRC Millennium Award (2000) for his schools robot league project and is the youngest ever Fellow of the City and Guilds of London Institute. Kevin’s research has featured in many TV and film documentaries, e.g. in 2004/5 – Inventions that changed the world (BBC2), Future Scope (RAI 1) and in The Making of I Robot (Twentieth Century Fox/Channel 5). He has appeared 3 times on Tomorrow’s World, 5 times in Time magazine, twice in Newsweek and was selected by Channel 4 as one of the Top 6 UK Scientists for their 2001 series “Living Science”. In 2002 he was chosen by the IEE as one of the top 10 UK Electrical Engineers. Kevin also appeared as one of 30 “great minds on the future” in the THES/Oxford University book – Predictions – with J.K.Galbraith, Umberto Eco and James Watson.
Kevin’s research is frequently referred to by other authors – recent examples being in books by Robert Winston, Peter Cochrane, Jeremy Clarkson and Susan Greenfield. Kevin’s research has also been selected by National Geographic International for a 1 hour documentary, entitled “I,Human” to be screened in 2006 – this will be broadcast in 143 countries and translated into 23 different languages.

What happens when a man is merged with a computer?
This is the question that Professor Kevin Warwick and his team at the Department of Cybernetics, University of Reading intend to answer with ‘Project Cyborg’.

On Monday 24th August 1998, at 4:00pm, Professor Kevin Warwick underwent an operation to surgically implant a silicon chip transponder in his forearm. Dr. George Boulous carried out the operation at Tilehurst Surgery, using local anaesthetic only.

This experiment allowed a computer to monitor Kevin Warwick as he moved through halls and offices of the Department of Cybernetics at the University of Reading, using a unique identifying signal emitted by the implanted chip. He could operate doors, lights, heaters and other computers without lifting a finger. 

The chip implant technology has the capability to impact our lives in ways that have been previously thought possible in only sci-fi movies. The implant could carry all sorts of information about a person, from Access and Visa details to your National Insurance number, blood type, medical records etc., with the data being updated where necessary. 

The second phase of the experiment, Project Cyborg 2.0, got underway in March 2002. This phase will look at how a new implant could send signals back and forth between Warwick’s nervous system and a computer. If this phase succeeds with no complications, a similar chip will be implanted in his wife, Irena. This will allow the investigation of how movement, thought or emotion signals could be transmitted from one person to the other, possibly via the Internet. The question is: how much can the brain process and adapt to unfamiliar information coming in through the nerve branches? Will the brain accept the information? Will it try to stop it or be able to cope? Professor Kevin Warwick’s answer to these questions is quite simply: “We don’t have an idea – yet, but if this experiment has the possibility of helping even one person, it is worth doing just to see what might happen”.

The next step towards true Cyborgs?
On the 14th of March 2002 a one-hundred-electrode array was surgically implanted into the median nerve fibres of the left arm of Professor Kevin Warwick. The operation was carried out at Radcliffe Infirmary, Oxford, by a medical team headed by the neurosurgeons Amjad Shad and Peter Teddy. The procedure, which took a little over two hours, involved inserting a guiding tube into a two-inch incision made above the wrist, inserting the microelectrode array into this tube and firing it into the median nerve fibres below the elbow joint.
A number of experiments have been carried out using the signals detected by the array; most notably, Professor Warwick was able to control an electric wheelchair and an intelligent artificial hand, developed by Dr Peter Kyberd, using this neural interface. In addition to being able to measure the nerve signals transmitted down Professor Warwick’s left arm, the implant was also able to create artificial sensation by stimulating individual electrodes within the array. This was demonstrated with the aid of Kevin’s wife Irena and a second, less complex implant connecting to her nervous system.
Another important aspect of the work undertaken as part of this project has been to monitor the effects of the implant on Professor Warwick’s hand functions. This was carried out by Allesio Murgia, a research student at the department, using the Southampton Hand Assessment Procedure (SHAP) test. By testing hand functionality during the course of the project, the difference between the performance indicators before, during and after the implant was present in Kevin’s arm can be used to give a measure of the risks associated with this and future cyborg experiments.

Electrodes and a control chip are inserted into a moth during its pupal stage. When the moth emerges the electrodes stimulate its muscles to control its flight.
The Pentagon is creating an army of cyber-moths and beetles to spy on their enemies.
They aim to insert micro-systems as the live insects undergo metamorphosis, so that their organs grow around the chips and wires that make up the remote-control devices. US military science bureau DARPA (the Defense Advanced Research Projects Agency) believes it can take advantage of the way insects such as moths are re-built during the pupal stage. The programme is called HI-MEMS (Hybrid Insect Micro-Electro-Mechanical Systems).

Project director Dr Amit Lal got the idea after reading Thomas Easton’s 1990 novel Sparrowhawk, in which animals enlarged by genetic engineering were fitted with implanted control systems. DARPA’s goal is to create cyborg insects that can fly at least 100 metres from their controller and land within 5 metres of a target, then stay put until commanded to buzz off again. If the groups keep making strides, the proverbial fly on the wall may literally become a spy.

In a series of video clips shown at a science conference in Tucson, Arizona, and posted online, a tobacco hawkmoth with wires connected to its back lifts and lowers one wing, then the other, then both, in response to signals delivered to its flight muscles. As the DARPA researchers ramp up the frequency of the muscle stimulation, the moth’s wings beat faster, approaching take-off speed.

In another clip, the moth is flying, tethered from above, when electrical impulses applied to muscles on one side or the other cause the moth to yaw left or right. The clips were filmed at the Boyce Thompson Institute in Ithaca, New York, where a team led by Dr David Stern implanted the flexible plastic probes into tobacco hawkmoth pupae seven days before the moths emerged. They found inserting them any earlier meant the tissue was too fluid to seal around the probe, but any later and development was too advanced and the probes damaged the moths’ muscles. A probe is embedded in each set of flight muscles on either side of the moth and a connection protrudes from the moth’s back. This can be hooked up to the tether wires which also deliver control signals and power.
Meanwhile another DARPA-funded group led by Dr Michel Maharbiz at the University of California, Berkeley implanted electrodes into the brains of adult green June beetles, near brain cells that control flight. When the team delivered pulses of negative voltage to the brain, the beetles’ wing muscles began beating and the bugs took off.
A pulse of positive voltage shut the wings down, stopping flight short, and by rapidly switching between these signals, they controlled the insects’ thrust and lift. Dr Maharbiz’s team found two ways to make tethered beetles turn. In one, they mounted an LED display in front of a beetle’s eyes. Lighting up the left or right portion turned the beetle in the opposite direction.

In a second, more successful, approach they directly stimulated the flight muscles on one side, causing the insect to turn to the other.
Dr Maharbiz’s system uses a battery glued to the outside of the beetle for power, while Dr Stern’s moth-control system relies on power provided through wires plugged into the implant. 
But both would stick out like sore thumbs, and that’s before adding the microphones, environmental sensors and transmitters that they would need to be of any use as spies.

The challenge now is to shrink the components to hide as many of them as possible inside the insect. They are also looking to harness power from the insects themselves. How the insects will be guided to a target is yet another unresolved problem.
“There were a bunch of ideas,” said Dr Charles Higgins at the University of Arizona, who was involved in DARPA’s original brainstorming session for the HI-MEMS project. One was to use radio control to guide the moth although that would mean emitting radio signals, which could be detected by the enemy.
A second was to use GPS signals to guide the insect to its goal, and a third was to point the moth in the right direction and send it off with a pre-programmed series of instructions – for example, fly straight for 50 metres, then circle.
“What you want to avoid is some way of detecting that it’s not a plain old insect, or some situation where its signals could be jammed,” said Dr Higgins. Researchers have already developed remote control systems for rats, pigeons and even sharks but these latest projects are the most audacious yet. The motivation is simple – why labour for years to build robots that imitate the ways animals move when you can just plug into living creatures and hijack systems already optimised by millions of years of evolution?

Attempts by the U.S. Defense Advanced Research Projects Agency (DARPA) to create cybernetic insects (hybrids of biological and electronic bugs) have yielded ultralow-power radios to control the bugs’ flight and a method of powering those circuits by harvesting energy, according to research that will be reported this week at the IEEE International Solid-State Circuits Conference (ISSCC).
Two papers being presented at ISSCC reveal the latest initiatives in the DARPA-sponsored Hybrid Insect Micro-Electro-Mechanical Systems (HI-MEMS) project, which is currently in its third year. The program’s goal is the creation of moths or other insects that have electronic controls implanted inside them, allowing them to be controlled by a remote operator. The animal-machine hybrid will transmit data from mounted sensors, which might include low-grade video and microphones for surveillance or gas sensors for natural-disaster reconnaissance. To get to that end point, HI-MEMS is following three separate tracks: growing MEMS-insect hybrids, developing steering electronics for the insects, and finding ways to harvest energy from them to power the cybernetics.
Researchers at the Boyce Thompson Institute for Plant Research, in Ithaca, N.Y.—which is one of the contractors on the HI-MEMS project—presented progress on the first goal at the IEEE MEMS 2009 conference in Italy two weeks ago, describing silicon neural interfaces for gas sensors that were inserted into insects during the pupal phase. At ISSCC, the HI-MEMS projects focused on new chip technology for the second two goals: Researchers led by DARPA contractor MIT will present a low-power ultrawide-band radio, a digital baseband processor, and a piezoelectric energy-harvesting system that scavenges power from vibrations.
The HI-MEMS project was conceived in 2005 by program manager Amit Lal, an electrical engineering professor on leave from Cornell University while he coordinates the four-year DARPA effort. MIT is one of three major contractors, along with the University of Michigan and Boyce Thompson. The research also draws on the work of entomologists, electrical engineers, and mechanical engineers at the University of California, Berkeley, the University of Arizona, and Washington University in St. Louis, Mo. To be considered successful, the final HI-MEMS cybernetic bug must fly 100 meters from a starting point and then be steered into a controlled landing within 5 meters of a specified end point. On landing, the insect must stay in place.
The electronic and MEMS components of the system must consume little power and be absolutely featherweight. After all, an average hawk moth weighs 2.5 grams; with too much extra weight it would be unable to fly.
Anantha Chandrakasan, an electrical engineering professor at MIT, is a coauthor on each of the ISSCC papers. The first is an ultrawide-band receiver system on chip, a radio that works at extremely low power over a broad swath of spectrum. (Earlier research had created the transmitter.) The device was specifically built for the HI-MEMS project in order to steer the moth. To control the moth’s flight direction, Chandrakasan and MIT graduate student Denis Daly designed a small, lightweight, low-power radio connected to a tungsten 4-electrode neurostimulator. When this radio picks up the right commands, the device stimulates the nervous tissue in the moth’s abdominal nerve cord. The stimulation makes the moth’s abdomen move in a way that alters the direction of its flight. The radio and stimulator are powered by a hearing-aid battery.
The second chip is a low-power digital baseband processor that can very quickly synchronize with wireless signals. That solves a particular problem with wireless communication. “When you send a piece of data through a wireless link, the receiver takes some time to lock to the transmitter,” Chandrakasan says. “Our new algorithms can very quickly synchronize, which means that you can turn on the radio, take the piece of data, and then turn the radio back off very quickly. That saves a lot of power.”
A third chip being presented at ISSCC, which Chandrakasan says is unrelated to the radio chips and not funded under HI-MEMS, could nevertheless be used to meet the DARPA project’s goal of finding ways to efficiently harvest energy from the moth. While a cyborg insect would be fairly autonomous and self-fueling, there would be no way to recharge its equipment payload on missions. Batteries are heavy. So the researchers are seeking a method by which the insect’s flight itself generates the electrical energy the payload electronics require. Harvesting ambient vibration energy through piezoelectric means—in which energy is converted between mechanical and electrical forms—could supply between 10 and several hundred microwatts of power.
The research presented at ISSCC addresses a common problem with energy-harvesting circuits: The power consumed by the harvesters’ control circuits reduces the amount of usable electrical power. The solution, a circuit called a bias-flip rectifier, improves the power-extraction capability by “more than four times,” according to the paper by Chandrakasan and graduate student Yogesh K. Ramdass.

The moth can be made to flap its wings under computer control.
The HI-MEMS project is not the first attempt at creating cyborg animals. The list is long, including pigeons, beetles, cats, and bees. Perhaps the most famous example is the cyborg rat. In 2004, John Chapin, a professor at the State University of New York Health Science Center, in Brooklyn, demonstrated Rescue Rats. These were lab rats with neural implants that encouraged them to steer through rubble piles with a camera and GPS locator to find people. Using a radio remote control, Chapin stimulated a part of the rats’ brains that mimicked the sensation of being touched on the whiskers. In response, the rats turned in the direction of the sensation. When they turned, Chapin rewarded them with a quick jolt of electricity in the pleasure center of their brains.
Jelle Atema, a biologist at Boston University and at the Woods Hole Oceanographic Institute, was also funded by DARPA in 2005 to research steering sharks with similar neural implants. Atema says that while he applauds the HI-MEMS project for its technical ambition and engineering virtuosity, he is concerned about its ultimate biological feasibility: Electronic control would compete with natural brain processes. He cites some limitations for insects, including a tendency for moths to approach light sources (the proverbial flames) and a powerful sex pheromone response that could override attempts at remote electronic control. “Pheromones are incredibly powerful,” he says.
In addition, modifying just one moth would be prohibitively time-consuming and expensive, especially in light of the life span of the animal, says Atema.
Even if HI-MEMS never produces a working cyborg moth, Chandrakasan says that the usefulness of these devices is not limited to the specific DARPA project. You can repurpose the chips for assistive technologies and implantable devices. In particular, he says, the energy-harvesting system would be a promising technology for prosthetic arms, which have a similar problem with weight and battery life.


In medicine
In medicine, there are two important and different types of cyborgs: these are the restorative and the enhanced. Restorative technologies “restore lost function, organs, and limbs”. The key aspect of restorative cyborgization is the repair of broken or missing processes to revert to a healthy or average level of function. There is no enhancement to the original faculties and processes that were lost.
By contrast, the enhanced cyborg “follows a principle, and it is the principle of optimal performance: maximising output (the information or modifications obtained) and minimising input (the energy expended in the process)”. Thus, the enhanced cyborg intends to exceed normal processes or even gain new functions that were not originally present.
Although prostheses in general supplement lost or damaged body parts with the integration of a mechanical artifice, bionic implants in medicine allow model organs or body parts to mimic the original function more closely.

Michael Chorost wrote a memoir of his experience with cochlear implants, or bionic ear, titled “Rebuilt: How Becoming Part Computer Made Me More Human.” Jesse Sullivan became one of the first people to operate a fully robotic limb through a nerve-muscle graft, enabling him a complex range of motions beyond that of previous prosthetics. By 2004, a fully functioning artificial heart was developed. The continued technological development of bionic and nanotechnologies begins to raise the question of enhancement, and of the future possibilities for cyborgs which surpass the original functionality of the biological model.

The ethics and desirability of “enhancement prosthetics” have been debated; their proponents include the transhumanist movement, with its belief that new technologies can assist the human race in developing beyond its present, normative limitations such as aging and disease, as well as other, more general incapacities, such as limitations on speed, strength, endurance, and intelligence. Opponents of the concept describe what they believe to be biases which propel the development and acceptance of such technologies; namely, a bias towards functionality and efficiency that may compel assent to a view of human people which de-emphasises as defining characteristics actual manifestations of humanity and personhood, in favour of definition in terms of upgrades, versions, and utility.
One of the more common and accepted forms of temporary modification occurs as a result of prenatal diagnosis technologies. Some modern parents willingly use testing methods such as ultrasounds and amniocentesis to determine the sex or health of the fetus. The discovery of birth defects or other congenital problems by these procedures may lead to neonatal treatment in the form of open fetal surgery or the less invasive fetal intervention.
A brain-computer interface, or BCI, provides a direct path of communication from the brain to an external device, effectively creating a cyborg. Research on invasive BCIs, which utilize electrodes implanted directly into the grey matter of the brain, has focused on restoring damaged eyesight in the blind and providing functionality to paralysed people, most notably those with severe cases such as locked-in syndrome.
Retinal implants are another form of cyborgization in medicine. The theory behind retinal stimulation to restore vision to people suffering from retinitis pigmentosa and vision loss due to aging (conditions in which people have an abnormally low amount of ganglion cells) is that the retinal implant and electrical stimulation would act as a substitute for the missing ganglion cells (cells which connect the eye to the brain).
While work to perfect this technology is still being done, there have already been major advances in the use of electronic stimulation of the retina to allow the eye to sense patterns of light. A specialized camera is worn by the subject (possibly on the side of their glasses frames); the camera converts the image into a pattern of electrical stimulation. A chip located in the user’s eye would then electrically stimulate the retina with this pattern and the image appears to the user. Current prototypes have the camera being powered by a hand-sized power supply that could be placed in a pocket or on the waist.
Currently the technology has only been tested on human subjects for brief amounts of time, and the amount of light picked up by the subjects has been minimal. However, if technological advances proceed as planned, this technology may be used by thousands of blind people and restore vision to most of them. Robot-assisted surgery is another way cyborgs are being integrated into medicine.
In the military
Military organizations’ research has recently focused on the utilization of cyborg animals for the purpose of a supposed tactical advantage. DARPA has announced its interest in developing “cyborg insects” to transmit data from sensors implanted into the insect during the pupal stage. The insect’s motion would be controlled from a MEMS, or Micro-Electro-Mechanical System, and would conceivably surveil an environment and detect explosives or gas. Similarly, DARPA is developing a neural implant to remotely control the movement of sharks. The shark’s unique senses would be exploited to provide data feedback in relation to enemy ship movement and underwater explosives.
In sports
The cyborgization of sports has come to the forefront of the national consciousness in recent years. Through the media, America has been exposed to the subject both with the BALCO scandal and the accusations of blood doping at the Tour de France levied against Lance Armstrong and Floyd Landis. But there is more to the subject: steroids, blood doping, prostheses, body modification, and perhaps in the future genetic modification are all topics that belong within cyborgs in sports.
As of now, prosthetic legs and feet are not advanced enough to give the athlete an edge, and people with these prosthetics are allowed to compete, possibly only because they are not actually competitive in the Ironman event and other such events. Prosthesis in track and field, however, is a budding issue. Prosthetic legs and feet may soon be better than their human counterparts. Some prosthetic legs and feet allow runners to adjust the length of their stride, which could potentially improve run times and in time actually allow a runner with prosthetic legs to be the fastest in the world. One model used for replacing a leg lost at the knee has actually improved runners’ marathon times by as much as 30 minutes. The leg is shaped out of a long, flat piece of metal that extends backwards and then curves under itself, forming a U shape. This functions as a spring, allowing runners to be propelled forward just by placing their weight on the limb. This is the only form that allows the wearer to sprint.
In art
The concept of the cyborg is often associated with science fiction. However, many artists have tried to create public awareness of cybernetic organisms; these can range from paintings to installations. Some artists who create such works are Neil Harbisson, Isa Gordon, Motohiko Odani, Nick Lampert, Patricia Piccinini, Jenifer Gonzalez, Simbiotica and Oron Catts, Iñigo Manglano-Ovalle, Steve Mann, Orlan and Stelarc.
Machines are becoming more ubiquitous in the artistic process itself, with computerized drawing pads replacing pen and paper, and drum machines becoming nearly as popular as human drummers. This is perhaps most notable in generative art and music. Composers such as Brian Eno have developed and utilized software which can build entire musical scores from a few basic mathematical parameters.
In popular culture
Cyborgs have become a well-known part of science fiction literature and other media. Examples of fictional biomechanical cyborgs include Robocop, Replicants, Star Trek’s Borg and Star Wars’ Darth Vader. Mechanical models include Cylons, and Terminators.



Posted by on June 13, 2011 in Tech'KNOW'logy


Screen | Linux Command

Taken From : TheDanishProject

When you run commands and programs in a command prompt on Linux, they only run while the command prompt session is open; as soon as the session is terminated for whatever reason, the commands or programs running within it are terminated as well.

Using wget to download files from the internet in a PuTTY console is an old-school technique. But imagine downloading a 500 MB file and then PuTTY suddenly crashing, terminating the session along with it. There goes the 500 MB download! “Screen” is a lifesaver in these situations.

The screen program is a magnificent utility. Screen basically starts a session within the session that you logged in with. So, if your PuTTY session suddenly crashes, don’t worry: the screen session will still be running in the background. Log in to the server again using PuTTY and you should be able to retrieve the screen session you initiated earlier.

# screen

This starts a screen session.

# Ctrl + A followed by D

This will detach your screen session and return you to the original session you logged in with. Your screen session will now be running in the background.

# screen -r

This command will resume your previous screen session.

# Ctrl + A followed by Ctrl + \ (backslash)
# exit

You could run either command above to end a screen session.

# Ctrl + A followed by " (double quote)

This lists all the windows inside the current screen session.

# screen -ls

Run from a normal prompt, this lists all the available screen sessions, if there are any.
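Putting the commands above together, a typical rescue workflow for the wget scenario might look like this (the session name dl and the URL are just examples):

```shell
screen -S dl                        # start a new session named "dl"
wget http://example.com/big.iso     # run the long download inside it
# press Ctrl+A then D to detach; the download keeps running
# ...PuTTY crashes, you log in again...
screen -ls                          # shows something like "12345.dl (Detached)"
screen -r dl                        # reattach by name and check on the download
```

Naming the session with -S is optional, but it makes screen -r unambiguous when several sessions are running.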

Useful links:
O’REILLY Linux Command Directory

Inside Open Source



Posted by on February 1, 2011 in Unix / Linux


Factory Method and Automatic Pointers

In general, when a factory method in C++ returns an instance of the created object, it is a pointer to dynamically allocated memory or a resource.

Resource* factory(); // allocates dynamically

The Factory Method pattern does not talk about the lifetime of the object it creates. It is up to the caller of the factory to release the resource. We can do better here: a factory method can act smarter by returning the dynamically allocated pointer wrapped in an automatic pointer (auto_ptr).

auto_ptr <Resource> factory(); // allocates dynamically

Returning an automatic pointer strongly indicates ownership transfer and also takes care of releasing the resource.

{
auto_ptr <Resource> rtemp;
rtemp = factory();
} // rtemp freed here automatically, even in the face of exceptions!



Posted by on December 17, 2010 in C++ Programming


Using copy on STL map

Sometimes it is useful to be able to iterate over all the elements of a std::map using standard algorithms like std::copy(), std::count(), std::min_element(), and std::max_element(). These standard functions do not work out of the box with std::map::iterator. For example, if you want to print all the elements of a map to standard output, you can’t use the following popular copy/ostream_iterator idiom.

std::map <std::string, int> m;
std::copy (m.begin(), m.end(), std::ostream_iterator<int>(std::cout, "\n"));
// does not compile

This is because the value_type of map::iterator is a pair. In other words, if iter is a map<T,U>::iterator, then *iter gives a pair<T,U> and not a U. If we could somehow get hold of pair::second (i.e. type U) instead of pair<T,U>, all the above-mentioned algorithms could be used out of the box.

The approach I took to solve this problem is to write an iterator adaptor that behaves like any general bidirectional_iterator. In general, this approach allows map iterators to be used wherever the Iterator-Pair idiom is useful. The code given below is kind of long but quite straightforward and idiomatic in nature.

#include <map>
#include <iostream>
#include <algorithm>
#include <string>
#include <list>
#include <iterator>

template <class BiDirIter>
class StdMapIteratorAdaptor
/* To make the custom iterator behave like a standard iterator by exposing
the required iterator_traits */
: public
std::iterator <std::bidirectional_iterator_tag,
typename BiDirIter::value_type::second_type>
{
BiDirIter iter_;

public:
explicit StdMapIteratorAdaptor(BiDirIter const & iter = BiDirIter())
: iter_(iter) {}

bool operator == (StdMapIteratorAdaptor const & rhs) const {
return (iter_ == rhs.iter_);
}

bool operator != (StdMapIteratorAdaptor const & rhs) const {
return !(*this == rhs);
}

/* Return type is const to make it work with map::const_iterator */
typename BiDirIter::value_type::second_type const & operator * () {
return iter_->second;
}

typename BiDirIter::value_type::second_type const & operator * () const {
return iter_->second;
}

typename BiDirIter::value_type::second_type const * operator -> () const {
return &(iter_->second);
}

// Pre-increment
StdMapIteratorAdaptor & operator ++ () {
++iter_;
return *this;
}

// Post-increment
const StdMapIteratorAdaptor operator ++ (int) {
StdMapIteratorAdaptor temp (iter_);
++iter_;
return temp;
}

// Pre-decrement
StdMapIteratorAdaptor & operator -- () {
--iter_;
return *this;
}

// Post-decrement
const StdMapIteratorAdaptor operator -- (int) {
StdMapIteratorAdaptor temp (iter_);
--iter_;
return temp;
}
};

/* A helper function to save some typing of tedious nested C++ types.
It is very similar to the std::make_pair function. */
template <class BiDirIter>
StdMapIteratorAdaptor <BiDirIter>
make_map_iterator_adaptor (BiDirIter const & iter)
{
return StdMapIteratorAdaptor<BiDirIter> (iter);
}

int main(void)
{
typedef std::map <std::string, int> StrIntMap;
StrIntMap months;

months["january"] = 31;
months["february"] = 28;
months["march"] = 31;
months["april"] = 30;
months["may"] = 31;
months["june"] = 30;
months["july"] = 31;
months["august"] = 31;
months["september"] = 30;
months["october"] = 31;
months["november"] = 30;
months["december"] = 31;

StrIntMap const & m = months;

StdMapIteratorAdaptor <StrIntMap::const_iterator> begin (m.begin());
StdMapIteratorAdaptor <StrIntMap::const_iterator> end (m.end());
std::copy(begin, end, std::ostream_iterator <int> (std::cout, " "));
std::cout << std::endl;

std::list<int> l(make_map_iterator_adaptor(m.begin()),
                 make_map_iterator_adaptor(m.end()));
std::copy (l.begin(), l.end(), std::ostream_iterator <int> (std::cout, " "));
std::cout << std::endl;

std::copy (make_map_iterator_adaptor(months.begin()),
           make_map_iterator_adaptor(months.end()),
           std::ostream_iterator <int> (std::cout, " "));
std::cout << std::endl;

return 0;
}



Posted by on December 17, 2010 in Uncategorized

