
Home/ Advanced Concepts Team/ Group items tagged information


Guido de Croon

Will robots be smarter than humans by 2029? - 2 views

  •  
    Nice discussion about the singularity. Made me think of drinking coffee with Luis... It raises some issues such as the necessity of embodiment, etc.
  • ...9 more comments...
  •  
    "Kurzweilians"... LOL. Still not sold on embodiment, btw.
  •  
    The biggest problem with embodiment is that, since the passive walkers (with which it all started), it hasn't delivered anything really interesting...
  •  
    The problem with embodiment is that it's done wrong. Embodiment needs to be treated like big data. More sensors, more data, more processing. Just putting a computer in a robot with a camera and microphone is not embodiment.
  •  
    I like how he attacks Moore's Law. It always looks a bit naive to me if people start to (ab)use it to make their point. No strong opinion about embodiment.
  •  
    @Paul: How would embodiment be done RIGHT?
  •  
    Embodiment has some obvious advantages. For example, in the vision domain many hard problems become easy when you have a body with which you can take actions (like looking at an object you don't immediately recognize from a different angle) - a point already made by researchers such as Aloimonos and Ballard in the late '80s / early '90s. However, embodiment goes further than gathering information and "mental" recognition. In this respect, the evolutionary robotics work by, for example, Beer is interesting, where an agent discriminates between diamonds and circles by avoiding one and catching the other, without there being a clear "moment" in which the recognition takes place. "Recognition" is a behavioral property there, for which embodiment is obviously important. With embodiment, the effort of recognizing an object behaviorally can be divided between the brain and the body, resulting in less computation for the brain. The article "Behavioural Categorisation: Behaviour makes up for bad vision" is also interesting in this respect. In the field of embodied cognitive science, some say that recognition is constituted by the activation of sensorimotor correlations. I wonder to what extent this is true, and whether it holds from extremely simple creatures to more advanced ones, but it is an interesting idea nonetheless. This being said, if "embodiment" implies having a physical body, then I would argue that it is not a necessary requirement for intelligence. "Situatedness", being able to take (virtual or real) "actions" that influence the "inputs", may be.
  •  
    @Paul While I completely agree about the "embodiment done wrong" (or at least "not exactly correct") part, what you say goes exactly against one of the major claims connected with the notion of embodiment (google for "representational bottleneck"). The fact is your brain does *not* have the resources to deal with big data. The idea therefore is that it is the body that helps to deal with what to a computer scientist appears like "big data". Understanding how this happens is key. Whether it is a problem of scale or of actually understanding what happens should be quite conclusively shown by the outcomes of the Blue Brain project.
  •  
    Wouldn't one expect that to produce consciousness (even in a lower form) an approach resembling that of nature would be essential? All animals grow from a very simple initial state (just a few cells) and have only a very limited number of sensors AND processing units. This would allow for a fairly simple way to create simple neural networks and to start up stable neural excitation patterns. Over time as complexity of the body (sensors, processors, actuators) increases the system should be able to adapt in a continuous manner and increase its degree of self-awareness and consciousness. On the other hand, building a simulated brain that resembles (parts of) the human one in its final state seems to me like taking a person who is just dead and trying to restart the brain by means of electric shocks.
  •  
    Actually on a neuronal level all information gets processed. Not all of it makes it into "conscious" processing or attention. Whatever makes it into conscious processing is a highly reduced representation of the data you get. However, the rest doesn't get lost. Basic, minimally processed data forms the basis of proprioception and reflexes. Every step you take is a macro command your brain issues to the intricate sensory-motor system that puts your legs in motion, actuating every muscle and correcting every deviation from the desired trajectory using the complicated system of nerve endings and motor commands. Reflexes which were built over the years, as those massive amounts of data slowly got integrated into the nervous system and the incipient parts of the brain. But without all those sensors scattered throughout the body, all the little inputs in massive amounts that slowly get filtered through, you would not be able to experience your body, and experience the world. Every concept that you conjure up from your mind is a sort of loose association of your sensorimotor input. How can a robot understand the concept of a strawberry if all it can perceive of it is its shape and color and maybe the sound that it makes as it gets squished? How can you understand the "abstract" notion of strawberry without the incredibly sensitive tactile feel, without the act of ripping off the stem, without the motor action of taking it to our mouths, without its texture and taste? When we as humans summon the strawberry thought, all of these concepts and ideas converge (distributed throughout the neurons in our minds) to form this abstract concept built out of all of these many, many correlations. A robot with no touch, no taste, no delicate articulate motions, no "serious" way to interact with and perceive its environment, no massive flow of information from which to choose and reduce, will never attain human-level intelligence. That's point 1. Point 2 is that mere pattern recogn
  •  
    All information *that gets processed* gets processed - but now we have arrived at a tautology. The whole problem is that ultimately nobody knows what gets processed (not to mention how). In fact, the absolute statement that "all information" gets processed is very easy to dismiss, because the characteristics of our sensors are such that a lot of information is filtered out already at the input level (e.g. the eyes). I'm not saying it's not a valid and even interesting assumption, but it's still just an assumption, and the next step is to explore scientifically where it leads you. And until you show its superiority experimentally, it's as good as all the other alternative assumptions you could make. I only wanted to point out that "more processing" is not exactly compatible with some of the fundamental assumptions of embodiment. I recommend Wilson, 2002 as a crash course.
  •  
    These deal with different things in human intelligence. One is the depth of the intelligence (how much of the bigger picture you can see, how abstractly you can form concepts and ideas), another is the breadth of the intelligence (how well you can actually generalize, how encompassing those concepts are, and at what level of detail you perceive all the information you have), and another is the relevance of the information (this is where embodiment comes in: what you do serves a purpose, tied into the environment and ultimately linked to survival). As far as I see it, these form the pillars of human intelligence, and of the intelligence of biological beings. They are quite contradictory to each other, mainly due to physical constraints (such as energy usage and training time). "More processing" is not exactly compatible with some aspects of embodiment, but it is important for human-level intelligence. Embodiment is necessary for establishing an environmental context for actions - a constraint space, if you will; failure of human minds (e.g. schizophrenia) is ultimately a failure of perceived embodiment. What we do know is that we perform a lot of compression and a lot of integration on a lot of data, in an environmental coupling. Imo, take any of these parts out and you cannot attain human+ intelligence. Vary the quantities and you'll obtain different manifestations of intelligence, from cockroach to cat to google to random quake bot. Increase them all beyond human levels and you're on your way towards the singularity.
LeopoldS

Google Says the FBI Is Secretly Spying on Some of Its Customers | Threat Level | Wired.com - 3 views

  •  
    not a surprise though still bad to read ....
  •  
    On a side note, it's hilarious to read an article on something repeatedly referred to as being secret...
  •  
    described quite self-explanatorily, though: "The terrorists apparently would win if Google told you the exact number of times the Federal Bureau of Investigation invoked a secret process to extract data about the media giant's customers. That's why it is unlawful for any record-keeper to disclose it has received a so-called National Security Letter. But under a deal brokered with the President Barack Obama administration, Google on Tuesday published a "range" of times it received National Security Letters demanding it divulge account information to the authorities without warrants. It was the first time a company has ever released data chronicling the volume of National Security Letter requests. National Security Letters allow the government to get detailed information on Americans' finances and communications without oversight from a judge. The FBI has issued hundreds of thousands of NSLs and has even been reprimanded for abusing them. The NSLs are written demands from the FBI that compel internet service providers, credit companies, financial institutions and businesses like Google to hand over confidential records about their customers, such as subscriber information, phone numbers and e-mail addresses, websites visited and more as long as the FBI says the information is "relevant" to an investigation." and ""You'll notice that we're reporting numerical ranges rather than exact numbers. This is to address concerns raised by the FBI, Justice Department and other agencies that releasing exact numbers might reveal information about investigations. We plan to update these figures annually," Richard Salgado, a Google legal director, wrote in a blog post. Salgado was not available for comment. What makes the government's position questionable is that it is required by Congress to disclose the number of times the bureau issues National Security Letters. In 2011, the year with the latest available figures, the FBI issued 16,511 National Sec
pacome delva

Ants Take a Cue From Facebook - ScienceNOW - 2 views

  • This pattern of interactions matches how humans share information on social networking sites like Facebook, says the study's lead author, biologist Noa Pinter-Wollman. Most Facebook users are connected to a relatively small number of friends. A handful of users, however, have thousands of friends and act as information hubs.
  • computer simulations of the ants' social networks showed that information flows fastest when a small number of individuals act as information hubs. Fast-flowing information allows ant colonies to respond faster to threats such as predators and weather hazards, Pinter-Wollman says.
  • These well-connected ants might have an advantage in responding to threats, but they are also more vulnerable to infectious diseases, which can spread quickly through the colony.
  •  
    for Tobi! Nice analogy between the threat and the fast response in human networks.
  •  
    Yet another example of "because scientifically accurate title would sound sooo boring".
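The "information hubs speed up the flow" claim above is easy to sketch in a few lines. The toy networks below (one colony with a single all-connected hub ant, one where each ant only meets its ring neighbours) are my own illustrative assumptions, not the model from the study:

```python
from collections import deque

def spread_rounds(adjacency, start):
    """Rounds of synchronous spreading until every node is informed:
    each round, every informed node informs all of its neighbours
    (i.e. the BFS eccentricity of the start node)."""
    dist = {start: 0}
    frontier = deque([start])
    while frontier:
        node = frontier.popleft()
        for nb in adjacency[node]:
            if nb not in dist:
                dist[nb] = dist[node] + 1
                frontier.append(nb)
    return max(dist.values())

n = 20
# "hub" colony: ant 0 interacts with everyone (the Facebook-style hub)
hub = {i: [0] for i in range(n)}
hub[0] = list(range(1, n))

# "egalitarian" colony: each ant only meets its two neighbours on a ring
ring = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

print(spread_rounds(hub, 1), spread_rounds(ring, 1))  # prints: 2 10
```

Even at this toy scale the hub topology informs everyone in two rounds, while the egalitarian ring needs ten; the gap grows with colony size.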
Thijs Versloot

Lasers May Solve the Black Hole Information Paradox - 0 views

  •  
    "In an effort to help solve the black hole information paradox that has immersed theoretical physics in an ocean of soul searching for the past two years, two researchers have thrown their hats into the ring with a novel solution: Lasers. Technically, we're not talking about the little flashy devices you use to keep your cat entertained, we're talking about the underlying physics that produces laser light and applying it to information that falls into a black hole. According to the researchers, who published a paper earlier this month to the journal Classical and Quantum Gravity (abstract), the secret to sidestepping the black hole information paradox (and, by extension, the 'firewall' hypothesis that was recently argued against by Stephen Hawking) lies in stimulated emission of radiation (the underlying physics that generates laser light) at the event horizon that is distinct from Hawking radiation, but preserves information as matter falls into a black hole."
Annalisa Riccardi

The Computer That Stores and Processes Information At the Same Time | MIT Technology Re... - 3 views

  •  
    The human brain both stores and processes information at the same time. Now computer scientists say they can do the same thing. The human brain is an extraordinary computing machine. Nobody understands exactly how it works its magic, but part of the trick is the ability to store and process information at the same time.
johannessimon81

A Different Form of Color Vision in Mantis Shrimp - 4 views

  •  
    Mantis shrimp seem to have 12 types of photo-receptive sensors - but this does not really improve their ability to discriminate between colors. Speculation is that they serve as a form of pre-processing for visual information: the brain does not need to decode full color information from just a few channels, which would allow for a smaller brain. I guess technologically the two extremes of light detection would be RGB cameras, which are like our eyes and offer good spatial resolution, and spectrometers, which have a large number of color channels but at the cost of spatial resolution. It seems the mantis shrimp uses something that is somewhere between RGB cameras and spectrometers. Could there be a use for this in space?
  •  
    > RGB cameras which are like our eyes ...apart from the fact that the spectral response of the eyes is completely different from "RGB" cameras (http://en.wikipedia.org/wiki/File:Cones_SMJ2_E.svg) ... and that the eyes have 4 types of light-sensitive cells, not three (http://en.wikipedia.org/wiki/File:Cone-response.svg) ... and that, unlike cameras, the human eye is precise only in a very narrow centre region (http://en.wikipedia.org/wiki/Fovea) ...hmm, apart from relying on tri-stimulus colour perception it seems human eyes are in fact completely different from "RGB cameras" :-) OK sorry for picking on this - that's just the colour science geek in me :-) Now seriously, on one hand the article abstract sounds very interesting, but on the other the statement "Why use 12 color channels when three or four are sufficient for fine color discrimination?" reveals so much ignorance of the very basics of colour science that I'm completely puzzled - in the end, it's a Science article so it should be reasonably scientifically sound, right? Pity I can't access the full text... The interesting thing is that more channels mean more information and therefore should require *more* power to process - which is exactly the opposite of their theory (as far as I can tell from the abstract...). So the key is to understand *what* information about light these mantises are collecting and why - definitely it's not "colour" in the sense of human perceptual experience. But in any case - yes, spectrometry has its uses in space :-)
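The pre-processing hypothesis in the thread above can be sketched numerically: with many narrow spectral channels, estimating the dominant wavelength reduces to a single argmax over receptor activations, with no tri-stimulus decoding step. The 12 evenly spaced Gaussian channels below are invented for illustration, not the shrimp's actual receptor curves:

```python
import math

def gaussian(x, mu, sigma):
    """Response of a receptor centred at mu to monochromatic light at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2)

# 12 narrow receptor channels, evenly spaced over 400-700 nm (spacing assumed)
centres = [400 + 300 * k / 11 for k in range(12)]

def dominant_wavelength(true_wl, sigma=15):
    """Estimate the dominant wavelength as the centre of the most active
    receptor - cheap "recognition" with no decoding arithmetic."""
    responses = [gaussian(true_wl, mu, sigma) for mu in centres]
    return centres[max(range(len(centres)), key=responses.__getitem__)]

for wl in (430, 540, 660):
    print(wl, round(dominant_wavelength(wl)))
```

With ~27 nm channel spacing the argmax estimate is always within ~14 nm of the true wavelength - coarse, but obtained with essentially no computation, which is the trade-off the article speculates about.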
pacome delva

Information converted to energy - physicsworld.com - 4 views

  • By tracking the particle's motion using a video camera and then using image-analysis software to identify when the particle had rotated against the field, the researchers were able to raise the metaphorical barrier behind it by inverting the field's phase. In this way they could gradually raise the potential of the particle even though they had not imparted any energy to it directly.
  • "Nobody thinks of using bits to boil water," he says, "but that would in principle be possible at nanometre scales." And he speculates that molecular processes occurring in nature might already be converting information to energy in some way. "The message is that processes taking place on the nanoscale are completely different from those we are familiar with, and that information is part of that picture."
  •  
    crazy, the Maxwell's demon at work !
  •  
    crazy indeed
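The "using bits to boil water" remark has a concrete scale behind it: by Landauer's principle, one bit of information is worth at most k_B * T * ln(2) of energy, which is the scale the experiment operates at. The water comparison below is my own back-of-envelope illustration, not from the article:

```python
import math

k_B = 1.380649e-23        # Boltzmann constant, J/K
T = 300.0                 # room temperature, K

# Landauer bound: maximum work extractable per bit of information
energy_per_bit = k_B * T * math.log(2)   # ~2.9e-21 J
print(f"{energy_per_bit:.2e} J per bit")

# Heating 1 g of water by 1 K takes ~4.184 J, so:
bits_needed = 4.184 / energy_per_bit
print(f"{bits_needed:.1e} bits to warm 1 g of water by 1 K")
```

Roughly 1.5e21 bits per gram-kelvin: hence nobody boils water with bits, but at nanometre scales, where the energies involved are themselves of order k_B*T, the conversion becomes meaningful.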
ESA ACT

Scaling theory for information networks - Journal Article - 0 views

  •  
    This is an article that examines information networks, both engineered and evolved ones. They find striking similarities and examine the differences.
Dario Izzo

Ushahidi :: Crowdsourcing Crisis Information (FOSS) - 3 views

shared by Dario Izzo on 08 Jul 10
  •  
    This platform could be used to collect information from astronomers? Or, can we come up with a cool space app for it?
ESA ACT

STIX Fonts - General Information - 0 views

  •  
    First time I heard about this relevant project. In brief: the mission of the Scientific and Technical Information Exchange (STIX) font creation project is the preparation of a comprehensive set of fonts that serve the scientific and engineering community.
jcunha

'Superman memory crystal' that could store 360TB of data forever | ExtremeTech - 0 views

  •  
    A new so-called 5D data storage technique that could potentially survive for billions of years. The research consists of nanostructured glass that can record digital data in five dimensions using femtosecond laser writing.
  • ...2 more comments...
  •  
    Very scarce scientific info available... I'm very curious to see a bit more in the future. From https://spie.org/PWL/conferencedetails/laser-micro-nanoprocessing I made a back-of-envelope calc: with 20 nm spacing, each laser spot in the 5D encoding encodes 3 bits (it seemed to me), written in 3 planes; to obtain the claimed 360 TB disk one needs very roughly 6000 mm^2, which does not comply with the dimensions shown in the video. Only with a larger number of planes (an order of magnitude more) could it work. Also, current commercial NAND flash and HDD trends allow for 1000 Gb/in^2. This means 360 TB could hypothetically fit in 1800 mm^2.
  •  
    I had the same issue with the numbers when I saw the announcement a few days back (https://www.southampton.ac.uk/news/2016/02/5d-data-storage-update.page). It doesn't seem to add up. Plus, the examples they show are super low amounts of data (the bible probably fits on a few 1.44 MB floppy disks). As for the comparison with NAND and HDD, I think the main argument for their crystal is that it is supposedly more durable. HDDs are chronically bad at long-term storage, and NAND, as far as I know, needs to be refreshed frequently.
  •  
    Yes Alex, indeed, the durability is the point I think they highlight and focus on (besides the fact that the abstract says something like the extrapolated decay time being comparable to the age of the Universe..). Indeed, memories face problems with retention time. Most disks retain the information for up to 10 years. When enterprises want to store data for longer than this they use... yeah, magnetic tapes :-). Check an interesting article about the magnetic tape market revival here: http://www.information-age.com/technology/data-centre-and-it-infrastructure/123458854/rise-fall-and-re-rise-magnetic-tape I compared for fun, to have an idea of what we were talking about. I am also very curious to see the writing and reading times of this new memory :)
  •  
    But how can glass store the information so long? Glass is not even solid?!
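Since the back-of-envelope calc in this thread is hard to reproduce without the paper, here is one explicit version of it with every assumption spelled out. The spot pitch, bits per spot and plane count below are guesses matching the comment above, not values confirmed by the publication:

```python
SPOT_PITCH_NM = 20        # assumed lateral spacing between laser spots
BITS_PER_SPOT = 3         # assumed bits encoded per spot ("5D" encoding)
PLANES = 3                # assumed number of writing planes

spots_per_mm2 = (1e6 / SPOT_PITCH_NM) ** 2        # 1 mm = 1e6 nm
bits_per_mm2 = spots_per_mm2 * BITS_PER_SPOT * PLANES

target_bits = 360e12 * 8                          # 360 TB expressed in bits
area_mm2 = target_bits / bits_per_mm2
print(f"{area_mm2:.0f} mm^2")                     # prints: 128000 mm^2
```

With these guesses the required area comes out even larger than the 6000 mm^2 quoted above, which only underlines the thread's point: the published numbers are hard to reconcile without more detail on the encoding.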
jaihobah

Does the brain store information in discrete or analog form? - 1 views

  •  
    "...measured the way people make certain types of decisions and say that their statistical analysis of the results strongly suggests that the brain must store information in discrete form. Their conclusion has significant implications for neuroscientists and other researchers building devices to connect to the brain."
Luís F. Simões

Billion-euro brain simulation and graphene projects win European funds - 1 views

  •  
    winners of the Future and Emerging Technologies (FET) Flagship competition (informally) announced
  •  
    Hopefully the money wasted on the brain project will be offset by the gains on graphene... When I heard the proposal presentations at the fet11 conference back in 2011, the graphene project was my bet. Although its motivations were mostly political ("everyone else is working on graphene, so if Europe won't do something, we'll soon be far behind"), in contrast to the other projects it appeared to have well-defined, tangible objectives and gave hope of actually delivering something.
jaihobah

The Cure For Fear | New Republic - 2 views

  •  
    A long read but very interesting and well written.
  •  
    PS: Does this quote from the article not sound a lot like Inception? 'In any given situation, the brain will retrieve old memories to inform an organism's behavior. If the memory is relevant to the situation, the organism can act on the information; if it is not relevant, then the organism can learn from the situation and create a new memory. With reconsolidation, researchers argued, there seemed to be a brief window in between the retrieval of an old memory and the creation of a new memory in which the old memory is vulnerable to manipulation.'
LeopoldS

self archiving open access - 4 views

  •  
    interesting information - this was not that clear to me. Instead of doing this via Mendeley, we should go over our ACT publication page and upload all those PDFs that fall into this category ...
Thijs Versloot

Putting 1.6TB on a DVD sized disk using muliplexed optical recording @Nature - 0 views

  •  
    Multiplexed optical recording provides an unparalleled approach to increasing the information density beyond 10^12 bits per cm^3 (1 Tbit cm^-3) by storing multiple, individually addressable patterns within the same recording volume. Although wavelength, polarization and spatial dimension have all been exploited for multiplexing, these approaches have never been integrated into a single technique that could ultimately increase the information capacity by orders of magnitude.
LeopoldS

An optical lattice clock with accuracy and stability at the 10^-18 level : Nature : Natu... - 0 views

  •  
    Progress in atomic, optical and quantum science [1, 2] has led to rapid improvements in atomic clocks. At the same time, atomic clock research has helped to advance the frontiers of science, affecting both fundamental and applied research. The ability to control quantum states of individual atoms and photons is central to quantum information science and precision measurement, and optical clocks based on single ions have achieved the lowest systematic uncertainty of any frequency standard [3-5]. Although many-atom lattice clocks have shown advantages in measurement precision over trapped-ion clocks [6, 7], their accuracy has remained 16 times worse [8-10]. Here we demonstrate a many-atom system that achieves an accuracy of 6.4 × 10^-18, which is not only better than a single-ion-based clock, but also reduces the required measurement time by two orders of magnitude. By systematically evaluating all known sources of uncertainty, including in situ monitoring of the blackbody radiation environment, we improve the accuracy of optical lattice clocks by a factor of 22. This single clock has simultaneously achieved the best known performance in the key characteristics necessary for consideration as a primary standard: stability and accuracy. More stable and accurate atomic clocks will benefit a wide range of fields, such as the realization and distribution of SI units [11], the search for time variation of fundamental constants [12], clock-based geodesy [13] and other precision tests of the fundamental laws of nature. This work also connects to the development of quantum sensors and many-body quantum state engineering [14] (such as spin squeezing) to advance measurement precision beyond the standard quantum limit.
Beniamino Abis

The Wisdom of (Little) Crowds - 1 views

  •  
    What is the best (wisest) size for a group of individuals? Couzin and Kao put together a series of mathematical models that included correlation and several cues. In one model, for example, a group of animals had to choose between two options - think of two places to find food. But the cues for each choice were not equally reliable, nor were they equally correlated. The scientists found that in these models, a group was more likely to choose the superior option than an individual. Common experience would lead us to expect that the bigger the group got, the wiser it would become. But they found something very different. Small groups did better than individuals. But bigger groups did not do better than small groups. In fact, they did worse. A group of 5 to 20 individuals made better decisions than an infinitely large crowd. The problem with big groups is this: a faction of the group will follow correlated cues - in other words, the cues that look the same to many individuals. If a correlated cue is misleading, it may cause the whole faction to cast the wrong vote. Couzin and Kao found that this faction can drown out the diversity of information coming from the uncorrelated cue. And this problem only gets worse as the group gets bigger.
  •  
    Couzin's research was the starting point that co-inspired PaGMO from the very beginning. We invited him (and he came) to a formation-flying conference for a plenary here at ESTEC. You can see PaGMO as a collective problem-solving simulation. In that respect, we have already learned that the size of the group and its internal structure (topology) matter, and cannot be too large or too random. One of the projects the ACT is running (and for which it is currently seeking new ideas/actors) is briefly described here (http://esa.github.io/pygmo/examples/example2.html) and attempts to answer the question "How is collective decision making influenced by the information flow through the group?" by looking at complex simulations of large 'archipelagos'.
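The correlated-cue mechanism described above is easy to reproduce in a minimal Monte Carlo sketch. All probabilities below are illustrative inventions, not the parameters of the Couzin/Kao models: each individual either follows a single shared (correlated) low-reliability cue or a private, more reliable cue, and the group decides by majority vote:

```python
import random

def group_accuracy(n, trials=20000, rng=random.Random(42)):
    """Fraction of trials in which a majority vote picks the better option."""
    correct = 0
    for _ in range(trials):
        shared_cue_right = rng.random() < 0.55   # one low-reliability cue, seen by all
        votes = 0
        for _ in range(n):
            if rng.random() < 0.6:               # this individual follows the shared cue
                votes += shared_cue_right
            else:                                # private cue, higher reliability
                votes += rng.random() < 0.75
        correct += votes * 2 > n
    return correct / trials

small, large = group_accuracy(5), group_accuracy(201)
print(small, large)   # the small group outperforms the large one
```

In the large group, the faction following the shared cue almost always forms the majority, so group accuracy collapses toward the shared cue's reliability (0.55); the small group still benefits from the diversity of the private cues.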
Luís F. Simões

Singularity University, class of 2010: projects that aim to impact a billion people wit... - 8 views

  •  
    At the link below you find additional information about the projects: Education: Ten weeks to save the world http://www.nature.com/news/2010/100915/full/467266a.html
  • ...8 more comments...
  •  
    this is the podcast I was listening to ...
  •  
    We can do it in nine :)
  •  
    why wait then?
  •  
    hmm, wonder how easy it is to get funding for that, 25k is a bit steep for 10 weeks :)
  •  
    well, we wait for the same funding they get and then we will do it in nine.... as we say in Rome, "a mettece un cartello so bboni tutti" (roughly: "anyone can manage to put up a sign"). (Italian check for Juxi)
  •  
    and what you think about the project subjects?
  •  
    I like the fact that there are quite a lot of space projects .... and these are not even bad in my view: The space project teams have developed imaginative new solutions for space and spinoffs for Earth. The AISynBio project team is working with leading NASA scientists to design bioengineered organisms that can use available resources to mitigate harsh living environments (such as lack of air, water, food, energy, atmosphere, and gravity) - on an asteroid, for example, and also on Earth. The SpaceBio Labs team plans to develop methods for doing low-cost biological research in space, such as 3D tissue engineering and protein crystallization. The Made in Space team plans to bring 3D printing to space to make space exploration cheaper, more reliable, and fail-safe ("send the bits, not the atoms"). For example, they hope to replace some of the $1 billion worth of spare parts and tools that are on the International Space Station.
  •  
    and all in only a three months summer graduate program!! that is impressive. God I feel so stupid!!!
  •  
    well, most good ideas probably take only a second to be formulated, it's the details that take years :-)
  •  
    I do not think the point of the SU is to formulate new ideas (in fact there is nothing new in the projects chosen). Their mission is to build and maintain a network of contacts among those they believe will be the 'future leaders' of space ... very similar to our beloved ISU.