The Real Science (Not Armchair Science) Of Consciousness
https://hackaday.com/2021/12/15/the-real-science-not-armchair-science-of-consciousness/ (Wed, 15 Dec 2021)

Among brain researchers there’s a truism that says the reason people underestimate how much unconscious processing goes on in your brain is because you’re not conscious of it. And while there is a lot of unconscious processing, the truism also points out a duality: your brain does both processing that leads to consciousness and processing that does not. As you’ll see below, this duality has opened up a scientific approach to studying consciousness.

Are Subjective Results Scientific?

[Image: Researcher checking fMRI images.]

In science we’re used to empirical test results: measurements made in a verifiable way, a reading from a calibrated meter that can be made again and again by different people. But what if all you have to go on is what a person says they are experiencing, a subjective observation? That doesn’t sound very scientific.

That lack of non-subjective evidence is a big part of what stalled scientific research into consciousness for many years. But consciousness is unique. While we have measuring tools for observing brain activity, how do you know whether that activity is contributing to a conscious experience or is unconscious? The only way is to ask the person whose brain you’re measuring. Are they conscious of an image being presented to them? If not, then it’s being processed unconsciously. You have to ask them, and their response is, naturally, subjective.

Skepticism about subjective results, along with a lack of tools, held back scientific research into consciousness for many years. It was taboo to even use the C-word until the 1980s, when researchers decided that subjective results were okay after all. Since then there’s been a great deal of scientific research into consciousness, and what follows is a sampling of it. As you’ll see, it’s even saved a life or two.

Measuring Tools

The number of methods and tools for examining the human brain has grown over the years. The first was to learn from neuropsychology patients who had suffered brain damage, correlating which areas were physically damaged with the resulting effects. Then there are the types of experiments often associated with psychologists, where subjects perform tasks and their behavior is monitored to test some hypothesis.

Another early method was the insertion of electrodes into the brain, usually while patients are undergoing surgery. The advantage of electrodes is they can be used to both monitor neuronal activity and to stimulate it.

[Image: EEG example. Credit: Der Lange, CC BY-SA 2.0]

Electroencephalography (EEG) involves the placement of electrodes on the scalp to measure voltage fluctuations resulting from ionic current in the brain’s neurons. It’s an old method that has advanced greatly, sometimes with the placement of as many as 256 electrodes. Magnetoencephalography (MEG) is similar to EEG except that it measures magnetic fields, using SQUIDs (superconducting quantum interference devices) placed on the scalp. EEG and MEG are both particularly useful for following the timing of events since they measure neuronal activity as it’s happening. You’ve probably heard of EEG in the context of observing brain waves.

Positron Emission Tomography (PET) and Magnetic Resonance Imaging (MRI) have also been widely used for a while. Functional MRI (fMRI), invented in 1990, gives a 3D image of brain activity by detecting the small changes in blood flow that follow the onset of that activity. But while fMRI gives a good whole-brain view of where activity happened, it lags neural activity by around one or two seconds, so it doesn’t offer the precise timing you get with EEG or MEG.

Also in the area of brain stimulation, alongside electrodes, are Transcranial Magnetic Stimulation (TMS) and optogenetics. TMS uses electromagnetic induction to drive a current across neuron cell membranes, which can cause them to fire. Optogenetics causes neurons to fire by stimulating them with light, usually from a laser.

Masking and Subliminal Priming

Back to consciousness. Imagine being able to design an experiment where you control what’s processed unconsciously and what’s processed consciously, so that you can then use instruments to determine which neural pathways are used in the two cases. Masking is a tool that allows that level of control. An example of masking is to show an image for 33 milliseconds, but before and after showing it, show another image called a mask. You’ll be conscious of the mask image but not the middle one that was shown for only 33 milliseconds. That length of time keeps the middle image below awareness; the longer it’s shown, the greater the likelihood you’ll become conscious of it.

[Image: Masking and priming experiment.]

One example of such an experiment shows a 71 ms mask, then a numerical digit or the word for a number for 43 ms, then another 71 ms mask and then a second digit, this time for 200 ms. You won’t have processed the first number consciously but you’ll be asked to indicate if the second digit was less than or greater than 5 by raising either your left or right hand respectively. If the value of the first digit was close to the value of the second digit then you’ll be able to move your hand sooner.

Why? Because even though you weren’t conscious of the first digit, unconscious pathways in your brain involving the motor cortex will have been activated by it. And even though you don’t know it’s happening, that processing has been observed using EEG and fMRI. This experiment is also called priming, or subliminal priming, since the first digit primes the response to the second one.
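
As a concrete sketch, here’s that trial schedule in Python. The display and response helpers (show_stimulus, wait_for_hand) are hypothetical stand-ins for whatever frame-accurate toolkit you’d actually use; the durations are the ones from the experiment above:

```python
import time

# Durations from the experiment described above, in milliseconds.
SCHEDULE = [
    ("mask",   71),
    ("prime",  43),   # first digit or number word; masked, never consciously seen
    ("mask",   71),
    ("target", 200),  # second digit; consciously seen
]

def run_trial(show_stimulus, wait_for_hand, prime, target):
    """Run one masked-priming trial; returns (hand, response time in seconds).

    show_stimulus(text, ms) and wait_for_hand() are hypothetical helpers,
    not part of any particular library.
    """
    stimuli = {"mask": "#####", "prime": str(prime), "target": str(target)}
    for label, ms in SCHEDULE:
        show_stimulus(stimuli[label], ms)
    t0 = time.perf_counter()
    hand = wait_for_hand()  # left hand: target < 5, right hand: target > 5
    return hand, time.perf_counter() - t0

# Prediction: the response comes sooner when prime and target are numerically
# close (say prime=4, target=3), even though the prime was never consciously seen.
```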

Attentional Blink

Another technique for creating conscious and unconscious processing in an experiment is to take advantage of the fact that there’s a limit on the number of things that can be attended to at the same time: you saturate consciousness. One way to demonstrate this is to show a sequence of numbers and, in the middle, two letters. You are told to watch for the letters. The first letter is easily remembered. However, if the second letter comes too soon after the first, you will not be aware of it at all. This is called attentional blink. With some tweaking, it allows you to study what happens in the brain when the letter is consciously perceived versus when it’s not.
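
A sketch of how the stimulus stream for such a trial might be generated, with the lag between the two letters as the knob researchers turn (the helper and its parameters are my own illustration, not from any published protocol):

```python
import random
import string

def rsvp_stream(lag, length=20, t1_pos=7):
    """Build a rapid serial stream of digits with two embedded letter targets.

    lag is the number of items between the first letter (T1) and the
    second (T2); at short lags subjects typically fail to report T2,
    which is the attentional blink.
    """
    stream = [random.choice(string.digits) for _ in range(length)]
    t1, t2 = random.sample(string.ascii_uppercase, 2)
    stream[t1_pos] = t1
    stream[t1_pos + lag] = t2
    return stream, t1, t2

# At roughly 100 ms per item, lag=2 puts T2 about 200 ms after T1 (usually
# blinked), while lag=7 puts it about 700 ms later (usually seen).
print(rsvp_stream(lag=2))
```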

These priming, masking, and attentional blink techniques have been so finely tuned that all sorts of experiments can be planned ahead of time where researchers can produce unconscious and conscious activity at will and then observe the resulting brain activity.

Observing Conscious Activity

[Image: EEG of conscious and unconscious brain activity.]

One experiment observing conscious activity involved attentional blink and eventually contributed to the ability to detect consciousness in coma patients. The experiment used EEG so that events could be observed as they were happening. A sequence of images of letters and words was shown to the subjects. They were asked to detect words in the sequence but were also shown images with letters which they were to report on. The letters acted as a distraction, making them miss the word: the attentional blink. The experimenters tuned the parameters so that they could control when the subjects would consciously see the word and when it would go unseen.

The diagram shows the EEG results comparing brain activity when the word was seen versus when it was unseen. The activity at around 96 ms and 180 ms was pretty much the same for both. This is unconscious activity, where early processing of the images was going on. But at around 276 ms a big difference in activity emerged between when the word was seen and when it was unseen, continuing right up to around 576 ms. This difference is the conscious processing.

This timing and activity turn out to be common for conscious activity involving vision. Practically identical processing happens for roughly the first 300 ms whether or not subjects report being conscious of what’s being tested. But in the experiments where subjects do report being conscious of it, an avalanche of activity begins at around 300 ms.
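
Finding where the seen and unseen traces part ways is a simple numerical exercise. A minimal numpy sketch, with synthetic traces standing in for real averaged EEG data:

```python
import numpy as np

def divergence_onset(seen, unseen, times, threshold):
    """Return the first time at which |seen - unseen| exceeds threshold."""
    diff = np.abs(seen - unseen)
    over = np.nonzero(diff > threshold)[0]
    return times[over[0]] if over.size else None

# Fake traces sampled every 4 ms: identical early on, diverging after ~300 ms,
# mimicking the pattern in the experiment above.
times = np.arange(0, 600, 4)
unseen = np.exp(-((times - 150) / 60.0) ** 2)               # early unconscious component
seen = unseen + 2.0 * np.exp(-((times - 400) / 80.0) ** 2)  # late conscious "ignition"
print(divergence_onset(seen, unseen, times, threshold=0.5))  # prints 308, i.e. ~300 ms
```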

In Stanislas Dehaene’s book, Consciousness and the Brain: Deciphering How the Brain Codes Our Thoughts, he describes four signatures of conscious thought, i.e., the activity that is observed during this avalanche:

  1. a sudden ignition of activity in the upper back area of the brain where sensory processing happens (the parietal region) and the front part of the brain’s frontal lobe (the prefrontal cortex) which is implicated in decision making, short-term memory, planning and other high level activity,
  2. a P3 wave observed using EEG that sweeps over the parietal region and the prefrontal cortex,
  3. a late and sudden burst of high-frequency oscillations, and
  4. a massive synchronization of electromagnetic signals across the entire cortex — the wrinkled outer layer of the brain.

These then are signatures of consciousness, and examining what’s going on in the brain during this time may someday lead to understanding exactly how consciousness works. In the meantime, this research has led to a consciousness detector.

Detecting Consciousness In Coma Patients

In his book, Dehaene describes how he and his colleagues made use of this research to detect consciousness, or the lack thereof, in coma patients. To make it cheap, they used EEG, which is available in many intensive care units. They tested for the P3 wave, the second signature of consciousness.

They play four identical sounds followed by a fifth deviant one: beep, beep, beep, beep, boop. The deviant triggers a P3 wave. Unfortunately the auditory cortex also produces an unconscious mismatch response, the mismatch negativity (MMN), which also results in a P3 wave. To get around this, they play the repeating four beeps and the deviant boop for a while and then suddenly play five identical beeps without the deviant. Without the deviant, the unconscious mismatch response doesn’t activate, but conscious processing notices that the expected deviant was missing and the P3 wave still occurs. A patient who wasn’t conscious would not produce the P3 wave.
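
The structure of those two sequences is simple enough to sketch. Here’s the logic, with the actual audio call left as a hypothetical play_tone:

```python
# Habituation phase: many "local deviant" blocks (beep beep beep beep BOOP).
# Test phase: five identical beeps. Every local transition is familiar, but
# the learned global rule ("the fifth tone differs") is violated, and only
# conscious processing flags that, producing the P3 wave.

def habituation_blocks(n=20):
    return [["beep"] * 4 + ["boop"] for _ in range(n)]

def global_deviant_block():
    return ["beep"] * 5

for block in habituation_blocks() + [global_deviant_block()]:
    for tone in block:
        pass  # play_tone(tone, ms=50): hypothetical audio helper
```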

Their test identified patients as unconscious or conscious, and the ones that showed consciousness regained partial or full consciousness within days. Subsequent use of the test even saved a life: doctors had a patient whom they were ready to give up on when this detection technique convinced them to wait a while longer. They did, and the patient eventually recovered fully.

So the next time someone tells you that we don’t know what consciousness is and that it’s some mystical, unknowable thing, tell them that there is actual scientific research into consciousness that has already produced beneficial results, even if the field is still in its infancy.

Death Of The Turing Test In An Age Of Successful AIs
https://hackaday.com/2021/04/06/death-of-the-turing-test-in-an-age-of-successful-ais/ (Tue, 06 Apr 2021)

IBM has come up with an automatic debating system called Project Debater that researches a topic, presents an argument, listens to a human rebuttal and formulates its own rebuttal. But does it pass the Turing test? Or does the Turing test matter anymore?

The Turing test was first introduced in 1950, often cited as year one for AI research. It asks, “Can machines think?”. Today we’re more interested in machines that can intelligently make restaurant recommendations, drive our car along the tedious highway to and from work, or identify the surprising-looking flower we just stumbled upon. These all fit the definition of AI as a machine that can perform a task normally requiring the intelligence of a human. Though as you’ll see below, Turing’s test wasn’t really for intelligence, or even for thinking, but rather for determining a test subject’s sex.

The Imitation Game

[Image: Turing test with machine.]

The Turing test as we know it today is to see if a machine can fool someone into thinking that it’s a human. It involves an interrogator and a machine, with the machine hidden from the interrogator. The interrogator asks questions of the machine using only a keyboard and screen. The purpose of the interrogator’s questions is to help him decide whether he’s talking to a machine or a human. If he can’t tell, then the machine passes the Turing test.

Often the test is done with a number of interrogators and the measure of success is the percentage of interrogators who can’t tell. In one example, to give the machine an advantage, the test was to tell if it was a machine or a 13-year-old Ukrainian boy. The young age excused much of the strangeness in its conversation. It fooled 33% of the interrogators.

[Image: Imitation game with a machine and a man.]

Naturally Turing didn’t call his test “the Turing test”. Instead he called it the imitation game, since the goal was to imitate a human. In Turing’s paper, he gives two versions of the test. The first involves three people, the interrogator, a man and a woman. The man and woman sit in a separate room from the interrogator and the communication at Turing’s time was ideally via teleprinter. The goal is for the interrogator to guess who is male and who is female. The man’s goal is to fool the interrogator into making the wrong decision and the woman’s is to help him make the right one.

The second test in Turing’s paper replaces the woman with a machine but the machine is now the deceiver and the man tries to help the interrogator make the right decision. The interrogator still tries to guess who is male and who is female.

But don’t let that goal fool you. The real purpose of the game was as a replacement for his question of “Can a machine think?”. If the game was successful then Turing figured that his question would have been answered. Today, we’re both more sophisticated about what constitutes “thinking” and “intelligence”, and we’re also content with the machine displaying intelligent behavior, whether or not it’s “thinking”.  To unpack all this, let’s take IBM’s recent Project Debater under the microscope.

The Great Debater

IBM’s Project Debater is an example of what we’d call a composite AI as opposed to a narrow AI. An example of narrow AI would be to present an image to a neural network and the neural network would label objects in that image, a narrowly defined task. A composite AI, however, performs a more complex task requiring a number of steps, much more akin to a human brain.

[Image: Debate format.]

Project Debater is first given the motion to be argued. You can read the paper on IBM’s webpage for the details of what it does next, but basically it spends 15 minutes researching and formulating a 4-minute opening speech supporting one side of the motion, which it renders in natural language and delivers to an audience. During those initial 15 minutes it also compiles leads for the opposing argument and formulates responses in preparation for its later rebuttal. It then listens to its opponent’s rebuttal, converting it to text using IBM’s own Watson speech-to-text. It analyzes the text and, in combination with the responses it had previously formulated, comes up with its own 4-minute rebuttal. It converts that to speech and ends with a 2-minute summary speech.
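
IBM’s paper describes the stages but there’s no public code, so here’s a toy sketch of the composite structure only. Every function is a trivial placeholder of my own (keyword lookup standing in for research, first-word matching standing in for rebuttal analysis); none of this is IBM’s actual method:

```python
# Toy sketch of a composite pipeline; all logic here is placeholder, not IBM's.
def retrieve_arguments(motion, corpus):
    keywords = set(motion.lower().split())
    return [s for s in corpus if keywords & set(s.lower().rstrip(".").split())]

def compose_speech(evidence, minutes):
    return f"({minutes}-min speech) " + " ".join(evidence)

def compose_rebuttal(opponent_text, prepared, minutes):
    hits = [c for c in prepared if c.split()[0] in opponent_text.lower()]
    return f"({minutes}-min rebuttal) " + " ".join(hits or prepared)

def debate(motion, corpus, opponent_speech):
    evidence = retrieve_arguments(motion, corpus)   # the "15-minute" research phase
    opening = compose_speech(evidence, minutes=4)   # delivered via text-to-speech
    prepared = ["subsidies distort markets", "education pays for itself"]
    rebuttal = compose_rebuttal(opponent_speech, prepared, minutes=4)
    summary = compose_speech(evidence, minutes=2)
    return opening, rebuttal, summary

print(debate("we should subsidize preschool",
             ["Preschool subsidies improve early education.", "Weather is nice."],
             "Subsidies are wasteful."))
```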

All of those steps, some of them considered narrow AI, add up to a composite AI. The whole is done with neural networks along with conventional data mining, processing, and analysis.

The following video is of a live debate between Project Debater and Harish Natarajan, world record holder for the number of debate competitions won. Judge for yourself how well it works.

Does Project Debater pass the Turing test? It didn’t take the formal test, but you can judge for yourself by imagining reading a transcript of what Project Debater had to say. Could you tell whether it was produced by a machine or a human? If you could mistake it for a human, then it may pass the Turing test. It also responds to the human debater’s argument, similar to answering questions in the Turing test.

Keep in mind though that Project Debater had 15 minutes to prepare for the opening speech and no numbers are given on how long it took to come up with the other speeches, so if time-to-answer is a factor then it may lose there. But does it matter?

Does The Turing Test Matter?

Does it matter if any of today’s AIs can pass the Turing test? That’s most often not the goal. Most AIs end up as marketed products, even the ones that don’t start out that way. After all, eventually someone has to pay for the research. As long as they do the job then it doesn’t matter.

IBM’s goal for Project Debater is to produce persuasive arguments and make well informed decisions free of personal bias, a useful tool to sell to businesses and governments. Tesla’s goal for its AI is to drive vehicles. Chatbots abound for handling specific phone and online requests. All of them do something normally requiring the intelligence of a human with varying degrees of success. The test that matters then is whether or not they do their tasks well enough for people to pay for them.

Maybe asking if a machine can think, or even if it can pass for a human, isn’t really relevant. The ways we’re using them require only that they can complete their tasks. Sometimes this can require “human-like” behavior, but most often not. If we’re not using AI to trick people anyway, is the Turing test still relevant?

Books You Should Read: The Boy Who Harnessed The Wind
https://hackaday.com/2020/05/14/books-you-should-read-the-boy-who-harnessed-the-wind/ (Thu, 14 May 2020)

For many of us, our passion for electronics and science originated with curiosity about some device, a computer, radio, or even a car. The subject of this book has just such an origin. However, how many of us made this discovery and pursued this path during times of hunger or outright famine?

That’s the remarkable story of William Kamkwamba that’s told in the book, The Boy Who Harnessed the Wind. Remarkable because it culminates with his building a windmill (more correctly called a wind turbine) that powered lights in his family’s house, all by the young age of fifteen. As you’ll see, it’s also the story of an unyielding thirst for knowledge in the face of famine and doubt by others.

Learning By Taking Apart Radios

[Image: Malawi.]

Many things make this hack impressive. One is the hack itself, but we’ll get to that later. Another is that it was made by a boy who was self-taught and only fifteen at the time. Yet another was his circumstances.

William Kamkwamba was born in Malawi, in southeast Africa, on August 5th, 1987, in what most would call poverty. His family grew tobacco as a cash crop and maize, which many would know as corn, for food and for sale. They made just enough cash and maize to live off of, some years being bountiful and some yielding barely enough.

His thirst for knowledge and interest in science and electronics started in a way many readers will find very familiar. The first time he heard a radio he immediately wanted to know how it worked. This type of curiosity is the mark of an engineer and a scientist, and from then on his heart was set on getting an education to become a scientist, breaking out of the pattern of growing up to be a subsistence farmer.

And so at the age of thirteen, William and his friend Geoffrey began taking apart radios. They used trial and error to learn how they worked. For example, by disconnecting a transistor they learned where the amplification happens. To make repairs, in lieu of a soldering iron, they’d heat up a thick wire over the kitchen fire. For a while, they even repaired radios for others.

Bad Weather And a Dynamo

[Image: Bicycle bottle dynamo on wheel.]

December 2000 brought heavy flooding followed by drought but a bit of rain in March saved their crop from total disaster. The events meant the family had less food than normal but just enough.

William was just 13 and during this time he discovered another electrical device, one that would eventually have an even bigger impact on his life than the radio. That was a bicycle dynamo, a small generator whose shaft was turned by contact with one of the bicycle’s wheels. The bicycle powered a light but he wanted to know if it could power a radio. He and Geoffrey connected the dynamo’s wires to where the radio’s battery went but that didn’t work. Pushing the wires into the radio’s AC input socket, however, did work. They took turns spinning the wheel by hand while the other danced to the music.

This started him wondering if there was some way to spin the dynamo automatically to power lights in his family’s home. The answer would come, but only near the end of a famine.

Famine And Discovering Windmills

If the previous season’s crop was bad, by September of 2001 it was clear the next would be worse. This time the drought persisted, plunging Malawi into a famine lasting around seven months and killing many through starvation and cholera. William’s family was among those affected. By early December they were down to one meal a day consisting of around seven mouthfuls. That was reduced to only one mouthful in the lead-up to the time their crop of maize ripened, breaking the famine in March 2002.

William began secondary school a few months before harvest, during Christmas of 2001. But he soon had to drop out as all of the family’s money had to go toward paying for what little food they could afford. That didn’t stop his yearning to learn, though. In February, still in the middle of the famine, he made up for his lack of schooling by spending time in, and borrowing books from, a small library in Wimbe Primary School stocked with books donated by the American government.

He read books titled Explaining Physics and Integrated Science, using an English-Malawi dictionary to look up words. But it was from a textbook called Using Energy that he first discovered windmills. Finally, he’d found a way to keep the bicycle wheel turning to run the dynamo. He decided to build one.

Windmill From Scraps Lighting His House

As any engineer knows, it’s best to start with a prototype. His first turbine used blades carved from a bottle but it was too small.

To get longer blades for his second one, he came up with an ingenious solution which he’d continue to use for later versions. He and his friend Geoffrey dug up a PVC pipe from an aunt’s collapsed house and cut it in half lengthwise. Then to flatten it, he heated it over his mother’s kitchen fire. He cut 20 cm long blades from that. To make holes in the PVC he came up with another clever and simple technique. He took a nail and stuck half a maize cob onto one end to act as a handle. He then heated the nail red hot and poked it into the PVC blades to make holes. For the generator, he took a motor from a junk cassette player. Skipping the details of how he coupled the generator shaft to the wind turbine (tease: this included carving rubber from shoes for a high-friction contact), they managed to power a small Panasonic radio.

[Image: William Kamkwamba’s first big windmill behind his house. Credit: Erik (HASH) Hersman, CC BY 2.0]

The famine ended, and with his windmill successes so far, he started gathering parts for his third windmill, the one that’d power lights in his home. From a scrapyard he salvaged a tractor fan to which he could attach long PVC blades. To make the corresponding holes in the metal fan blades, he got a quick job loading wood, earning enough money to pay a local welder to drill them.

At the same time, he had a shock absorber, also from the scrapyard, welded to the pedal shaft of a broken bicycle that his father let him have. Using nuts and bolts purchased by his friend Gilbert, he bolted the PVC blades to the fan blades. He then attached this to the other end of the shock absorber. Thus, turning the blades turned the bicycle’s central sprocket just as the pedals would. The dynamo (also purchased by Gilbert) was the last piece of the puzzle, turned by the bike’s rear wheel, which was chain-driven as normal from the pedal shaft.

William mounted it to the top of a six-inch diameter bamboo pole. The blades turned in the wind. In the first test powering his father’s radio, two things happened: there was a brief sound from the radio and black smoke began to pour out of the speakers. The problem was that the dynamo put out 12 volts AC while the radio was rated for half that. Referring back to a library book, Explaining Physics, he took wire from an old motor he’d had in his junk pile and wrapped it around a stick, forming a choke. With that in the circuit, the radio played without emitting smoke.
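
The choke works because the inductor’s reactance forms a voltage divider with the radio’s load. Here’s a back-of-the-envelope sketch; the dynamo frequency and radio input resistance are my guesses for illustration, not figures from the book:

```python
import math

f = 100.0      # Hz -- assumed dynamo output frequency (a guess)
V_in = 12.0    # V AC from the dynamo
R_load = 15.0  # ohms -- assumed effective radio input resistance (a guess)

def v_out(L):
    """Output of an L-R divider: V * R / sqrt(R^2 + (2*pi*f*L)^2)."""
    X_L = 2 * math.pi * f * L
    return V_in * R_load / math.hypot(R_load, X_L)

# To halve the voltage you need X_L = sqrt(3) * R_load.
L = math.sqrt(3) * R_load / (2 * math.pi * f)
print(f"L = {L * 1000:.0f} mH gives {v_out(L):.1f} V")  # ~41 mH gives 6.0 V
```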

William, Geoffrey, and Gilbert then cut three trees and dug holes to make a sixteen-foot-tall tower behind his house. In the presence of a skeptical crowd, William removed a spoke that had been keeping the blades from rotating, and with a gust of wind the blades spun up and a light came to life.

In the coming months, William put lights in his home, eliminating the need to burn kerosene, and even created a homemade circuit breaker which we’d previously covered.

Rewarded With More Than Just Light

[Image: William Kamkwamba at TEDGlobal 2007. Credit: Erik (HASH) Hersman, CC BY 2.0]

The towering windmill naturally attracted attention and the word got passed on from there. The final chapters in the book talk about how by November 2006 word reached outside his village resulting in visits from school officials, then reporters and eventually to William being given an all-expenses-paid trip to give a TED talk at TEDGlobal 2007 in Arusha, Tanzania.

This led to funding from wealthy venture capitalists and other individuals for his projects and education, partly to stimulate homegrown leaders who could go on to make positive contributions to Malawi and the rest of Africa. Funding through a non-profit group called the Moving Windmills Project went to improvements for his village and education. And together with buildOn.org, they rebuilt the Wimbe Primary school.

In December 2007 he got to visit Southern California to see the wind farm that he’d seen in the book, Using Energy. In June 2008 he participated in the World Economic Forum in Cape Town, South Africa. He also received a scholarship to attend the African Leadership Academy, a high school in Johannesburg where he met other young people also destined to make a difference in Africa.

In the book’s postscript, we learn how a TV interview on The Daily Show with Jon Stewart led to invitations to visit colleges in the US and he eventually settled on Dartmouth College in Hanover, New Hampshire, from which he graduated in 2014.

Takeaways

It’s difficult in an article to convey every impression and interesting event encountered in reading this book. One thing that surprised me time and again is that William had next to nothing, suffered hunger, had some idea through radio and other means of the abundance and relative ease of parts of the world elsewhere, and yet showed not one inkling of frustration at his life. He showed just the opposite. You may say it was because of his young age, but he exhibits wisdom beyond his years. Throughout the book, his enthusiasm, determination, and hunger for knowledge never falter.

The other pleasure in reading this book was made possible by those same circumstances, his need to make do with what he had. Missing from this article are details of his homemade knives, a simple hack for trapping birds, and many other simple but brilliant and effective techniques for making things, causing this already inspiring tale to be all the more enjoyable a read.

Rise Of The Unionized Robots
https://hackaday.com/2018/12/20/rise-of-the-unionized-robots/ (Thu, 20 Dec 2018)

For the first time, a robot has been unionized. This shouldn’t be too surprising as a European Union resolution has already recommended creating a legal status for robots for purposes of liability and a robot has already been made a citizen of one country. Naturally, these have been done either to stimulate discussion before reality catches up or as publicity stunts.

[Image: Dum-E spraying Tony Stark.]

What would reality have to look like before a robot should be given legal status similar to that of a human? For that, we can look to fiction.

Tony Stark, the fictional lead character in the Iron Man movies, has a robot called Dum-E which is little more than an industrial robot arm. However, Stark interacts with it using natural language, and it clearly has feelings, which it demonstrates through its posture and sounds of sadness when Stark scolds it for needlessly spraying him with a fire extinguisher. In one movie Dum-E saves Stark’s life while making sounds of compassion. And when Stark makes Dum-E wear a dunce cap for some unexplained transgression, Dum-E appears to get even by shooting something at Stark. So while Dum-E is a robot assistant capable of responding to natural language, something we’re sure Hackaday readers would love to have in our workshops, it also has emotions and acts on its own volition.

Here’s an exercise to try to find the boundary between a tool and a robot deserving of personhood.

Bringing Tools To Life

Ideally, the robot would offer something which we don’t already have, for example, precision or superhuman strength. When you think about it, we already have these in the form of the tools we use. We give a CNC machine a DXF file, set it up, and it does the work while we do something else alongside it, perhaps solder a circuit. But the CNC machine doesn’t do anything without our giving it precise steps.

Something closer would be a robotic arm assistant like the Festo robot which does some steps while you do others or you both do some steps together. But again, the actions by these robot arms are either programmed in or learned by your moving it around as it records the motions. Likewise, robots in a factory all follow carefully pre-programmed steps.

[Image: SoftBank’s Pepper.]

But what if the robot elicits compassion, as do anthropomorphic robots? The robot which was unionized does just that. It’s a roughly human-shaped one called Pepper, and is mass produced by SoftBank Robotics for use in retail and finance locations. It has a face with eyes and the appearance of a mouth. It also speaks and responds to natural language. All of this causes many customers to anthropomorphize it, to treat it like a human.

However, as programmers and hardware developers, we know that anthropomorphizing a machine or treating it like a pet are unwarranted responses. It happens with Pepper but we’ve also previously discussed how it happens with Boston Dynamics’ quadrupeds and BB-8 from Star Wars. Even if the robot uses the latest deep neural networks, learning to do things in a way which we can’t explain as completely as we can explain how a for-loop functions, it’s still just a machine. Turning off the robot and even disassembling it causes no ethical dilemma.

But what if the robot earned income from the work and that income was the only way it had for paying for the electricity used to charge its batteries? Of course, we charge up drill batteries all the time and if the drill fails to do the job, there’s no ethical dilemma in no longer charging it up.

[Image: Homunculus: sensory mapping on our brain. Credit: OpenStax College, CC BY 3.0]

What if the robot were self-aware and had an inner monologue? Would the compassion begin to be justified then? Encoders and stretch sensors would tell it the positions of its limbs. Touch and heat sensors would be mapped onto an internal representation of itself, one like the homunculus we have within our cerebral cortex. Touching a surface would redirect its internal monologue, attracting its attention to the new sensation. Too much heat would interrupt the monologue as a pain signal. And a low battery level would give a feeling of hunger and a little panic. Would this self-awareness and continuous inner monologue make it almost indistinguishable from a human being? To paraphrase Descartes, it thinks, therefore it is.
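
That sensor-driven “inner monologue” is essentially a priority loop over interrupts. A toy sketch, entirely hypothetical and not any real robot’s firmware:

```python
import random

def read_sensors():
    # Stand-ins for touch sensors, heat sensors, and a battery gauge.
    return {"touch": random.random() < 0.2,
            "heat": random.uniform(20, 90),     # degrees C
            "battery": random.uniform(0.0, 1.0)}

def inner_loop(steps=10):
    for _ in range(steps):
        s = read_sensors()
        if s["heat"] > 80:           # pain signal interrupts everything
            monologue = "pain: withdraw limb"
        elif s["battery"] < 0.15:    # hunger/panic at low charge
            monologue = "hungry: seek charger"
        elif s["touch"]:             # a novel sensation redirects attention
            monologue = "attend to touch"
        else:
            monologue = "idle"
        print(monologue)

inner_loop()
```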

To many Hackaday readers, the above description will not sound novel at all. Many machines have sensors, feedback loops, alarm states, and internal state machines. The only compassion they ever elicit from their creators is the pride of creation.

Of course, the robot would still be missing the elusive concept of consciousness. Some would argue that once you have the inner monologue then consciousness becomes just something we tell ourselves exists. But that argument is nowhere near to being settled. And if you’re religious, then you can say that the robot lacks a soul, which by definition it always will. Depending on your school of thought, a robot may never warrant compassion or a union card.

But as unliving or undead as robots currently are, that hasn’t stopped a country from granting one citizenship.

Saudi Arabia Grants Citizenship To Sophia

You may have seen Sophia, made by Hanson Robotics. It most famously made an appearance on The Tonight Show where it had a conversation with host Jimmy Fallon and beat him in a game of rock-paper-scissors.

Sophia is a social robot, designed to interact with people using natural language processing, reading emotions (something we’ve seen done in hacks before), and reading hand gestures. The responses are supplied by Hanson Robotics’ character writing team and on the fly by its dialog system. It also has a very expressive face and performs gestures with its hands.

As one of many public appearances, Sophia gave an interview at the 2017 Future Investment Initiative summit held in Riyadh, Saudi Arabia’s capital. Afterward, the interviewer announced that Sophia had just been awarded the first Saudi citizenship for a robot. Clearly, this was a publicity stunt, but it’s unlikely anyone would have thought to award it to Boston Dynamics’ SpotMini, a remote-controlled but semi-autonomous quadruped robot which was also in attendance at the summit. This means that robots have at least left an impression that they are approaching something more humanlike, both in appearance and behavior.

European Civil Law Rules On Robotics

Everyone’s heard of how AI and its physical manifestation, robots, are going to take all our jobs. Whether or not that’ll happen, in 2017 the European Parliament passed a resolution called the Civil Law Rules On Robotics proposing rules governing AI and robotics. Most of the resolution was quite forward-thinking and relevant, covering issues such as data security, safety, and liability. One paragraph, however, read:

f) creating a specific legal status for robots, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons with specific rights and obligations, including that of making good any damage they may cause, and applying electronic personality to cases where robots make smart autonomous decisions or otherwise interact with third parties independently;

To put it into perspective, however, the legal status was intended for a future expectation should robots one day warrant it. The status would make it easier to determine who is liable in the case of some property damage or harm to a person.

Will It Matter In The Long Run?

It’s impossible to say if a robot will ever deserve to be unionized or be granted personhood. If we again draw from fiction, a major theme of the TV series “Terminator: The Sarah Connor Chronicles” was that the question may simply go away when the robots become just like us. On the other hand, Elon Musk’s solution is to gradually make us more like them. In the meantime, we’ll settle for a semi-intelligent workshop robot capable of precision and superhuman strength.

Sci-Hub: Breaking Down The Paywalls
https://hackaday.com/2018/11/27/sci-hub-breaking-down-the-paywalls/ (Tue, 27 Nov 2018)

There’s a battle going on in academia between the scientific journal publishing companies that have long served as the main platform for peer review and spreading information, and scientists themselves who just want to share and have access to the work of their fellows. arxiv.org fired the first salvo, allowing researchers in physics to self-publish their own papers, and it has gained some traction in mathematics and computer science. The Public Library of Science journals focus on biology and medicine and offer peer review services. There are many others, and even the big firms have been forced to recognize the importance of open science publication.

But for many, that’s still not enough. The high prestige journals, and most past works, are stuck behind paywalls. Since 2011, Sci-Hub has taken science publishing open by force, illegally obtaining papers and publishing them in violation of copyright, but at the same time facilitating scientific research and providing researchers in poorer countries with access that their rich-world colleagues take for granted. The big publishing firms naturally fought back in court and won, and with roughly $20 million of damages, drove Sci-Hub’s founder underground.

Making Sci-Hub And A Fugitive

[Image: Alexandra Elbakyan. Credit: Apneet Jolly, CC BY 2.0]

Surprisingly, Sci-Hub is largely the work of a single woman, Alexandra Elbakyan. Elbakyan studied computer science at university in Almaty, Kazakhstan, and it was there that she gained a fondness for computer hacking. While working on her final-year research project at the university in 2009, she began to experience frustration with paywalls because she couldn’t afford the papers she wanted to read. Using her hacking skills she found ways around the paywalls for herself and her colleagues.

She graduated in 2009 and went to work on computer security in Moscow for a year. After earning enough money, she then spent a brief time working in Germany in 2010, followed by a short internship at Georgia Institute of Technology in Atlanta where she studied “Neuroscience and Consciousness”. She returned to Kazakhstan in 2011.

[Image: Who’s downloading papers from Sci-Hub. Source: Science]

During this whole time, she saw scientists on web forums looking for research papers and happily got the papers for them. And so in 2011, she created the Sci-Hub website and automated it. The media soon took an interest and it grew to the point where in 2015 there were 42 million downloads, accounting for an estimated 3% of worldwide downloads from science publishers.

It is perhaps no surprise then that in 2015, Dutch publishing company Elsevier brought a US lawsuit against her for unlawfully accessing and distributing copyrighted papers, which ultimately resulted in a $15 million judgment against her. Sci-Hub’s domain name, sci-hub.org, was suspended, but the site resides on Russian servers and so it quickly reappeared under a new domain name. A quick search shows that one of its current working domains is sci-hub.is. More recently, in 2017, another successful lawsuit, by the American Chemical Society, resulted in a further $4.8 million in damages.

She’s currently studying for a master’s degree in the history of science at an undisclosed location, undisclosed because she otherwise faces the possibility of ruinous fines, extradition to the US, and imprisonment. She remains at large only because Russia has no extradition agreement with the US.

How Paywalls Affect Hackaday Readers

[Image: First successful ion-propelled fixed-wing aircraft. Source: Nature]

How many times have you seen a video or a brief mention on social media of some awesome thing which someone’s made and which you immediately want to reproduce, perhaps even improve on, only to find that the details are in an article hidden behind a paywall? Or perhaps it’s just a tech with which you’re intimately familiar and you’re just curious how they solved some tricky aspect of it.

This happened to me recently when MIT announced that they’d succeeded in producing the first sustained ion-propelled flight of a fixed-wing aircraft. This requires a high voltage power supply (PSU) but the high-level video and summary articles don’t go into any detail about the PSU. However, anyone familiar with the problem knows that the mass and the energy-to-mass ratio of the power supply are limiting factors. Sadly, details of the PSU are in the research paper behind Nature’s paywall. Fortunately, Nature publishes some of the figures from the paper, including the PSU’s schematic. It’s not detailed enough for direct replication, but enough to satisfy idle curiosity. If we could point you to the paper, it would make a great Hackaday article, but without the ability to dig in deeper, it would just be a press release.

Paywalls affect us directly as Hackaday writers too. We need to read widely to discover new research to write about, and this just isn’t possible with for-pay journals. Consequently, we cover less fresh science than we would otherwise due to paywalls.

Breaking Down The Walls

There’s clearly a public value to open science. My own daily breakfast is often eaten while reading the latest papers on arxiv: astrophysics, computer science, mathematics, engineering, and biology. It’s hardly necessary to convince Hackaday readers that the public does, and must, have an interest in scientific progress.

Meanwhile, on the business side, the cost of publishing on the Internet is drastically lower than in the last century, when coordination among editors and reviewers was slow and the layout and physical distribution of pieces of paper was expensive. Less and less value is added by the old-school publishing houses. Maybe they are relics of the past and will fade away? Or maybe they’ll adapt as the music industry did post-Napster.

In the meantime, however, what Sci-Hub is doing is illegal. We’d like to see a day when freely sharing the results of science isn’t. Perhaps one day the walls will come down.

Measuring The Cooling Effect Of Transformer Oil
https://hackaday.com/2018/11/15/measuring-the-cooling-effect-of-transformer-oil/ (Fri, 16 Nov 2018)

[Image: Testing cooling with transformer oil.]

Transformer oil has long served two purposes, cooling and insulating. The large, steel encased transformers we see connected to the electrical grid are filled with transformer oil which is circulated through radiator fins for dumping heat to the surrounding air. In the hacker world, we use transformer oil for cooling RF dummy loads and insulating high voltage components. [GreatScott] decided to do some tests of his own to see just how good it is for cooling circuits.

[Image: Thermal measurement results.]

He started by testing canola oil but found that it breaks down from contact with air and becomes rancid, so he purchased some transformer oil. First, testing its suitability for submerging circuits, he found that he couldn’t see any current above his meter’s 0.0 μA limit when applying 15 V, no matter how close together he brought his contacts. At 1 cm he got around 2 μA with 230 VAC, likely through parasitic capacitance, for a resistance of 115 MΩ/cm.
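
That last figure is just Ohm’s law; a quick sanity check:

```python
# Sanity check on the insulation numbers using Ohm's law.
V_ac = 230.0    # applied AC voltage
I_leak = 2e-6   # measured leakage current in amps across a 1 cm gap
R = V_ac / I_leak
print(f"{R / 1e6:.0f} Mohm over 1 cm")  # -> 115 Mohm, i.e. 115 Mohm/cm
```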

Moving on to thermal testing, he purchased a 4.7 ohm, 100 watt, heatsink-encased resistor and attached a temperature probe to it with Kapton tape. Submerging it in transformer oil and applying 25 watts continuously, he measured a temperature of 46.8°C after seven minutes. The same test with distilled water reached 35.3°C. Water’s heat capacity is 4187 J/kg∙K, not surprisingly much better than transformer oil’s 2090 J/kg∙K, which in turn is about twice as good as air’s 1005 J/kg∙K.
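
Those heat capacities predict the trend he measured. A rough sketch that ignores heat lost to the surroundings and assumes a half-kilogram bath (the actual mass isn’t given):

```python
def temp_rise(power_w, seconds, mass_kg, c_j_per_kg_k):
    """Adiabatic temperature rise: dT = P*t / (m*c)."""
    return power_w * seconds / (mass_kg * c_j_per_kg_k)

P, t, m = 25.0, 7 * 60, 0.5  # 25 W for 7 minutes into an assumed 0.5 kg of liquid
for name, c in [("water", 4187.0), ("transformer oil", 2090.0)]:
    print(f"{name}: +{temp_rise(P, t, m, c):.1f} K")  # water +5.0 K, oil +10.0 K
# For equal mass, oil heats roughly twice as fast as water -- consistent with
# the higher temperature [GreatScott] measured in the oil bath.
```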

He performed a few more experiments but we’ll leave those to his video below.

We’ve run across a number of tests running boards submerged in various oils before. For example, we’ve seen Raspberry Pi’s running in vegetable oil and mineral oil as well as an Arduino running in a non-conductive liquid coolant, all either overclocked or under heavy load.

3D Printering: Blender Tips For Printable Objects
https://hackaday.com/2018/11/14/3d-printering-blender-tips-for-printable-objects/ (Wed, 14 Nov 2018)

3D models drawn in Blender work great in a computer-animated virtual world but don’t always work when brought into a slicer for 3D printing. Slicers require something which makes sense in the real world. And the real world is far less forgiving, as I’ve found out with my own projects which use 3D printed parts.

Our [Brian Benchoff] already talked about making parts in Blender with his two-part series (here and here) so consider this the next step. These are the techniques I’ve come up with for preparing parts for 3D printing before handing them off to a slicer program. Note that the same may apply to other mesh-type modeling programs too, but as Blender is the only one I’ve used, please share your experiences with other programs in the comments below.

I’ll be using the latest version of Blender at this time, version 2.79b. My printer is the Creality CR-10 and my slicer is Cura 3.1.0. Some of these steps may vary depending on your slicer or if you’re using a printing service. For example, Shapeways has instructions for people creating STLs from Blender for uploading to them.

Fixing Shape Issues

[Image: Blender mesh analysis: overhangs.]

Slicers will often highlight a few issues with your part, such as if a section of it hangs out in mid-air with no support under it. But Blender can also show you these things, allowing you to fix them before getting to the slicer and thereby avoiding going back and forth between the two programs.

Some of this is done through the Mesh Analysis panel in the Properties region (the area on the right in the 3D View which you bring up by pressing N).

Here we’re showing it highlighting overhangs, anything in the range of 0° to 45°. The red area is the bottom of the part and will be sitting on the print bed so it won’t be a problem. The areas in blue, however, can be fixed by increasing their angle as measured from the horizontal.

[Images: Blender mesh analysis examples: sharp, distortion, intersection, and thickness.]

Other issues available by clicking on the Type dropdown are:

  • Sharp – These are edges or points where faces meet which may be too sharp to print well.
  • Distortion – A three-sided polygon will always have a flat surface but Blender supports polygons with any number of sides, called n-gons. These can be flat but they can also be distorted such that different sections of their face point in different directions. This may confuse some slicers.
  • Intersect – This is where one section of a part is inside another section. In the example shown, I made a knob by simply jamming a cylinder partway into the side of a vertical face such that one end is inside the mesh but not connected to it.
  • Thickness – This is a section which may be too thin to print, perhaps a thin wall.

Fixing Problems In The Mesh

Other things which may confuse a slicer are usually more problems with the mesh which aren’t visible or are inside the part. Luckily Blender has tools to help with that too.

[Image: Non-manifold geometry in Blender.]

Many of these issues come under the heading, non-manifold geometry, or geometry which cannot exist in the real world. Examples are:

  • vertices or edges which are not connected to any faces, perhaps left over when deleting vertices but missing some
  • duplicate geometry, perhaps left over when pressing E to extrude a face, thereby creating a new face, but then deciding not to pull it away after all and then deleting only parts of it or none at all
  • holes in the side of your object, perhaps where an edge is actually two unconnected edges but looks like a single one
  • geometry left completely enclosed inside an object.

Before exporting anything for 3D printing, you should go into Edit mode, set your Viewport Shading to Wireframe, and make sure none of your geometry is selected. Then do Select > Select all by trait to get a menu of tools which will select types of any problem geometry listed above.

The most likely culprits it’ll show you are unnecessary geometry inside the part; simply delete it, being careful not to delete stuff you still need. For example, when deleting an interior face with some edges still connected to the outer surface, use Only Faces or Only Edges & Faces in the Delete menu.
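
The same selection can be scripted from Blender’s Python console, which makes it easy to fold into a pre-export habit (operator names from 2.79, the version used here):

```python
import bpy

# Select everything non-manifold on the active object, same as
# Select > Select All by Trait > Non Manifold in the menus.
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='DESELECT')
bpy.ops.mesh.select_mode(type='VERT')
bpy.ops.mesh.select_non_manifold()
```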

[Image: Bad normals in Blender.]

Another frequent culprit is incorrect normals. Normals are indicated by lines which point perpendicular to a polygon. The lines are shown emitting from one side of the polygon and the other side has a dot. You want all the normals on the outside of the object to be pointing in the same direction, otherwise slicers may get confused. On the right in the illustration is the object in Cura, and you can see that the hole is missing. A good habit before exporting is to select everything in Edit mode, then go to the Tool Shelf’s Shading/UVs tab and select Recalculate to make all normals point in the same direction.
[Image: N-gons to triangles in Blender.]

N-Gons Are Bad For Slicers

I’ve found that checking for and fixing the above-mentioned problems works, but you can afford to be a little forgetful about doing so if you convert n-gons to triangles before exporting your objects for the slicer. In Blender you do this by selecting everything in Edit mode and then doing Mesh -> Faces -> Triangulate Faces (Ctrl-T).
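
The normals fix and the triangulation are scriptable too, so both can be rolled into a single pre-export cleanup for the active object (again, 2.79 operators; later versions rename some of these):

```python
import bpy

bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.mesh.normals_make_consistent(inside=False)  # recalculate normals to point outward
bpy.ops.mesh.quads_convert_to_tris()                # Mesh -> Faces -> Triangulate Faces
bpy.ops.object.mode_set(mode='OBJECT')
```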

Scaling And Exporting

The odds are good that the scale you’ll be drawing your part in will be different from the scale used by your slicer. Once you’re ready to export the part to an STL, OBJ, or whatever format of file works with your slicer, you have a choice of either scaling the part in Blender and then exporting or exporting it as is and then scaling it in the slicer. I most often do the former.

[Images: Scaling in Blender, before and after.]

The very first part I printed on my CR-10 was a filament guide which I downloaded from Thingiverse as an STL file. When opened in Cura it needed no scaling. So when I was ready to print the first part of my own design, I created a new Blender file from scratch and imported the filament guide. I then appended my part, an eyeball to which a Pi Camera can be attached (File -> Append, drill down to the eyeball Object in the file). The filament guide was around five centimeters long and the eyeball appeared tiny by comparison so it was clear that the eyeball needed to be rescaled. Multiplying all the scale values by ten did the job. I then deleted the filament guide and exported my eyeball as an OBJ (smoothness on rounded shapes seems to be lacking in Blender’s STLs when opened in Cura).

I then saved that Blender file with the properly scaled eyeball. Now whenever I’m ready to slice a part, I open that file, append the new part, do whatever scaling is needed, delete the eyeball, and export the part.

That probably seems a lot clunkier than simply exporting the part directly from the Blender file in which I created it in the first place and then rescaling it in Cura. You’re probably right. However, I almost always create multiple interlocking parts and there doesn’t seem to be any way to tell Blender to export just one of them.
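
For what it’s worth, the OBJ exporter’s Python operator does accept a selection-only flag, so a scripted alternative is to select just the one part and export it. A sketch using the 2.79 API and a hypothetical object name:

```python
import bpy

bpy.ops.object.select_all(action='DESELECT')
bpy.data.objects['Eyeball'].select = True        # hypothetical part name; 2.8+ uses .select_set(True)
bpy.ops.export_scene.obj(filepath='/tmp/eyeball.obj',
                         use_selection=True)     # write only the selected object
```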

A Few Final Tips

Depending on the modifiers, you may want to apply them before some of the steps mentioned above. The Mirror modifier is one good example.

There have also been Blender Add-ons which consolidate many of the above steps into one place. Go to the list of Add-ons by doing File -> User Preferences and search for “3D print” to find any.

Many of the above problems stem from the fact that Blender is a mesh-type modeling program rather than a solids-based one like SolidWorks or FreeCAD. But for free-form or sculpted 3D objects, Blender shines. By keeping the above tips in mind when making your models and checking with Blender’s tools before exporting, printing should go smoothly.
