
Advanced Concepts Team: Group items tagged "compression"


Dario Izzo

File Compression: New Tool for Life Detection? - 4 views

  •  
    As mentioned today during coffee... we could think of linking this to source localization
  • ...3 more comments...
  •  
    Not sure what you mean by source localisation, but this use of gzip to discern "biological" from "non-biological" images seems to me *very* tricky... I mean, there are a lot of other factors that may affect the compressibility of an image besides the mere "regularity" of the pattern, and if they haven't controlled for these, this is just bullsh1t... (For instance, did they use the same imaging device to take those images? What about lighting conditions and exposure? etc.) The apostle of sometimes surprising uses of compression is Prof. Schmidhuber from IDSIA...
  •  
    I completely agree with you... still, if you have one instrument on board the spacecraft and your picture compressibility is a noisy indicator of some interesting source... we could try to perform some probabilistic reasoning
  •  
    I think they (IDSIA-Schmidhuber) are planning on putting something about that also inside the Acta Futura paper...
  •  
    Really, you think they'd target such a low impact factor publication? ;-P
  •  
    you will all soon be begging to publish in Acta Futura! We will be bigger than Nature.
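The gzip-as-life-detector idea from the thread can be sketched in a few lines of Python. This is purely illustrative, not the paper's pipeline: synthetic byte strings stand in for images, and the compression ratio serves as a crude "regularity" score.

```python
import random
import zlib

def compressibility(data: bytes) -> float:
    """Compressed size / original size; lower means more regular/redundant."""
    return len(zlib.compress(data, 9)) / len(data)

random.seed(0)
# A highly regular byte pattern vs. (nearly) incompressible random noise.
regular = bytes(i % 16 for i in range(10_000))
noisy = bytes(random.randrange(256) for _ in range(10_000))

# The regular pattern compresses to a small fraction of its size,
# while the noise stays close to (or slightly above) its original size.
print(compressibility(regular))
print(compressibility(noisy))
```

As the thread points out, imaging conditions also affect this score, which is exactly why it would only be a noisy indicator to feed into some probabilistic reasoning.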
Alexander Wittig

On the extraordinary strength of Prince Rupert's drops - 1 views

  •  
    Prince Rupert's drops (PRDs), also known as Batavian tears, have been in existence since the early 17th century. They are made of a silicate glass of a high thermal expansion coefficient and have the shape of a tadpole. Typically, the diameter of the head of a PRD is in the range of 5-15 mm and that of the tail is 0.5 to 3.0 mm. PRDs have exceptional strength properties: the head of a PRD can withstand impact with a small hammer, or compression between tungsten carbide platens to high loads of ∼15 000 N, but the tail can be broken with just finger pressure leading to catastrophic disintegration of the PRD. We show here that the high strength of a PRD comes from large surface compressive stresses in the range of 400-700 MPa, determined using techniques of integrated photoelasticity. The surface compressive stresses can suppress Hertzian cone cracking during impact with a small hammer or compression between platens. Finally, it is argued that when the compressive force on a PRD is very high, plasticity in the PRD occurs, which leads to its eventual destruction with increasing load.
ESA ACT

Slashdot | Text Compressor 1% Away From AI Threshold - 0 views

  •  
    "Alexander Ratushnyak compressed the first 100,000,000 bytes of Wikipedia to a record-small 16,481,655 bytes (including decompression program), thereby not only winning the second payout of The Hutter Prize for Compression of Human Knowledge, but also bri
Francesco Biscani

NASA Will Crowdsource Its Photos of Mars | Motherboard - 4 views

  • Researchers hope that crowdsourcing imaging targets will increase the camera’s already bountiful science return.
  •  
    Here we go, material for curiosity cloning, life detection via image compression, etc. etc.
  •  
    tar cvfz compressed.tgz MarsImages/ Love it!
Thijs Versloot

#LEGO car running on compressed air - 0 views

  •  
    500000 lego bricks, a tank of compressed air, some mechanical engineering and a lot of time later...
Guido de Croon

Will robots be smarter than humans by 2029? - 2 views

  •  
    Nice discussion about the singularity. Made me think of drinking coffee with Luis... It raises some issues such as the necessity of embodiment, etc.
  • ...9 more comments...
  •  
    "Kurzweilians"... LOL. Still not sold on embodiment, btw.
  •  
    The biggest problem with embodiment is that, since the passive walkers (with which it all started), it hasn't delivered anything really interesting...
  •  
    The problem with embodiment is that it's done wrong. Embodiment needs to be treated like big data. More sensors, more data, more processing. Just putting a computer in a robot with a camera and microphone is not embodiment.
  •  
    I like how he attacks Moore's Law. It always looks a bit naive to me if people start to (ab)use it to make their point. No strong opinion about embodiment.
  •  
    @Paul: How would embodiment be done RIGHT?
  •  
    Embodiment has some obvious advantages. For example, in the vision domain many hard problems become easy when you have a body with which you can take actions (like looking at an object you don't immediately recognize from a different angle) - a point already made by researchers such as Aloimonos and Ballard in the late '80s / early '90s. However, embodiment goes further than gathering information and "mental" recognition. In this respect, the evolutionary robotics work by, for example, Beer is interesting, where an agent discriminates between diamonds and circles by avoiding one and catching the other, without there being a clear "moment" in which the recognition takes place. "Recognition" is a behavioral property there, for which embodiment is obviously important. With embodiment the effort for recognizing an object behaviorally can be divided between the brain and the body, resulting in less computation for the brain. The article "Behavioural Categorisation: Behaviour makes up for bad vision" is also interesting in this respect. In the field of embodied cognitive science, some say that recognition is constituted by the activation of sensorimotor correlations. I wonder to what extent this is true, and whether it is valid from extremely simple creatures to more advanced ones, but it is an interesting idea nonetheless. This being said, if "embodiment" implies having a physical body, then I would argue that it is not a necessary requirement for intelligence. "Situatedness", being able to take (virtual or real) "actions" that influence the "inputs", may be.
  •  
    @Paul While I completely agree about the "embodiment done wrong" (or at least "not exactly correct") part, what you say goes exactly against one of the major claims connected with the notion of embodiment (google for "representational bottleneck"). The fact is your brain does *not* have the resources to deal with big data. The idea therefore is that it is the body that helps to deal with what to a computer scientist appears like "big data". Understanding how this happens is key. Whether it is a problem of scale or of actually understanding what happens should be quite conclusively shown by the outcomes of the Blue Brain project.
  •  
    Wouldn't one expect that to produce consciousness (even in a lower form) an approach resembling that of nature would be essential? All animals grow from a very simple initial state (just a few cells) and have only a very limited number of sensors AND processing units. This would allow for a fairly simple way to create simple neural networks and to start up stable neural excitation patterns. Over time as complexity of the body (sensors, processors, actuators) increases the system should be able to adapt in a continuous manner and increase its degree of self-awareness and consciousness. On the other hand, building a simulated brain that resembles (parts of) the human one in its final state seems to me like taking a person who is just dead and trying to restart the brain by means of electric shocks.
  •  
    Actually, on a neuronal level all information gets processed. Not all of it makes it into "conscious" processing or attention. Whatever makes it into conscious processing is a highly reduced representation of the data you get. However, that doesn't get lost. Basic, minimally processed data forms the basis of proprioception and reflexes. Every step you take is a macro command your brain issues to the intricate sensory-motor system that puts your legs in motion by actuating every muscle and correcting every deviation from the desired trajectory using the complicated system of nerve endings and motor commands. Reflexes which were built over the years, as those massive amounts of data slowly get integrated into the nervous system and the incipient parts of the brain. But without all those sensors scattered throughout the body, all the little inputs in massive amounts that slowly get filtered through, you would not be able to experience your body, and experience the world. Every concept that you conjure up from your mind is a sort of loose association of your sensorimotor input. How can a robot understand the concept of a strawberry if all it can perceive of it is its shape and color and maybe the sound that it makes as it gets squished? How can you understand the "abstract" notion of strawberry without the incredibly sensitive tactile feel, without the act of ripping off the stem, without the motor action of taking it to our mouths, without its texture and taste? When we as humans summon the strawberry thought, all of these concepts and ideas converge (distributed throughout the neurons in our minds) to form this abstract concept formed out of all of these many many correlations. A robot with no touch, no taste, no delicate articulate motions, no "serious" way to interact with and perceive its environment, no massive flow of information from which to choose and reduce, will never attain human level intelligence. That's point 1. Point 2 is that mere pattern recogn
  •  
    All information *that gets processed* gets processed, but now we have arrived at a tautology. The whole problem is ultimately that nobody knows what gets processed (not to mention how). In fact, the absolute statement "all information gets processed" is very easy to dismiss, because the characteristics of our sensors are such that a lot of information is filtered out already at the input level (e.g. the eyes). I'm not saying it's not a valid and even interesting assumption, but it's still just an assumption, and the next step is to explore scientifically where it leads you. And until you show its superiority experimentally, it's as good as all the other alternative assumptions you can make. I only wanted to point out that "more processing" is not exactly compatible with some of the fundamental assumptions of embodiment. I recommend Wilson, 2002 as a crash course.
  •  
    These deal with different things in human intelligence. One is the depth of the intelligence (how much of the bigger picture you can see, how abstract the concepts and ideas you form can be), another is the breadth of the intelligence (how well you can actually generalize, how encompassing those concepts are, and at what level of detail you perceive all the information you have), and another is the relevance of the information (this is where embodiment comes in: what you do serves a purpose, is tied into the environment and ultimately linked to survival). As far as I see it, these form the pillars of human intelligence, and of the intelligence of biological beings. They are quite contradictory to each other, mainly due to physical constraints (such as, for example, energy usage and training time). "More processing" is not exactly compatible with some aspects of embodiment, but it is important for human level intelligence. Embodiment is necessary for establishing an environmental context of actions, a constraint space if you will; failure of human minds (e.g. schizophrenia) is ultimately a failure of perceived embodiment. What we do know is that we perform a lot of compression and a lot of integration on a lot of data in an environmental coupling. Imo, take any of these parts out, and you cannot attain human+ intelligence. Vary the quantities and you'll obtain different manifestations of intelligence, from cockroach to cat to google to random quake bot. Increase them all beyond human levels and you're on your way towards the singularity.
Christos Ampatzis

Evidence of life on Mars lurks beneath surface of meteorite, Nasa experts claim - Times... - 1 views

  •  
    Is there life on Mars? - We should try to compress those images!
  •  
    "Is there life on Mars?" Yes, mostly on Saturday night...
nikolas smyrlakis

Air powered motorbike - Reuters - Truveo Video Search - 0 views

  •  
    well, it depends on how the air is compressed, but if clean electricity is used for it, it is kind of interesting
Ma Ru

Dark Matter or Black Hole Propulsion? - 1 views

  •  
    Anyone out there still doing propulsion stuff? Two more papers just waiting to get busted... http://arxiv.org/abs/0908.1429v1 http://arxiv.org/abs/0908.1803
  • ...5 more comments...
  •  
    What an awful bunch of complete nonsense!!! But I don't think anybody wants to hear MY opinion on this...
  •  
    wow, is this serious at all...!?
  •  
    Are you joking?? The BH drive proposes a BH with a lifetime of about a year, just 10^7 tons, peanuts!! Then you have to produce it, better not on Earth, so you do this in space, with a laser that produces an equivalent of 10^9 tons, highly focussed, even more peanuts!! Reasonable losses in the production process (probably 99.999%) are not yet taken into account. Engineering problems... :-) The DM drive is even better, they want to collect DM and compress it in a propulsion chamber. Very easy to collect and compress a gas of particles that traverse the Earth without any interaction. Perhaps if the walls of the chamber are made of artificial BHs?? Who knows??
  •  
    WRONG!!! we are all just WAITING for your opinion on this ....!!!
  •  
    well, yes, my remark was ironic... I'm surprised they did a magazine article on these concepts...! But the press is always waiting for the sensational. They don't even wait for the work to be peer-reviewed now to make an article on it! This is one of the bad sides of arXiv in my opinion. It's like a journalist making an article by copy-pasting from Wikipedia! Anyway, this is of course complete bullsh..., and I would have laughed if I had read this in a sci-fi book... but in a "serious" article I'm crying... For the DM I do not agree with your remark, Luzi. It's not dark energy they want to use. The DM is baryonic; it's dark just because it's cold, so we don't see it by the usual means. If you believe in the standard model of cosmology, then the DM should be somewhere around the galaxies. But it's of course not uniformly distributed, so a DM engine would work (if at all...) only in the periphery of galaxies. It's already impossible to get there...
  •  
    One reply to Pacome, though the discussion already far exceeds the relevance of the topic. Baryonic DM is strictly limited by cosmology, if one believes in these models, of course. Anyway, even though most DM is cold, we are constantly bombarded by some DM particles that come together with cosmic radiation, the solar wind, etc. If DM easily interacted with normal matter, we would have found it long ago. In the paper they consider DM as neutralinos, which are neither baryonic nor strongly or electromagnetically interacting.
  •  
    well then I agree, how the fu.. do they want to collect them!!!
annaheffernan

How to make tiny 3D flowers and peacocks from silicon - 1 views

  •  
    "Tilted table", "peacock" and "triple-floor building" are just three of many fantastical 3D structures that have been created by compressing simple 2D patterns. The new technique for creating these objects is called compressive buckling, and has been developed by researchers in the US, China and South Korea.
LeopoldS

physicists explain what AI researchers are actually doing - 5 views

  •  
    love this one ... it seems to take physicists to explain to the AI crowd what they are actually doing ... Deep learning is a broad set of techniques that uses multiple layers of representation to automatically learn relevant features directly from structured data. Recently, such techniques have yielded record-breaking results on a diverse set of difficult machine learning tasks in computer vision, speech recognition, and natural language processing. Despite the enormous success of deep learning, relatively little is understood theoretically about why these techniques are so successful at feature learning and compression. Here, we show that deep learning is intimately related to one of the most important and successful techniques in theoretical physics, the renormalization group (RG). RG is an iterative coarse-graining scheme that allows for the extraction of relevant features (i.e. operators) as a physical system is examined at different length scales. We construct an exact mapping between the variational renormalization group, first introduced by Kadanoff, and deep learning architectures based on Restricted Boltzmann Machines (RBMs). We illustrate these ideas using the nearest-neighbor Ising Model in one and two dimensions. Our results suggest that deep learning algorithms may be employing a generalized RG-like scheme to learn relevant features from data.
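The coarse-graining idea in the abstract can be illustrated with the crudest possible RG scheme, a majority-rule block-spin transformation on a 1D spin chain. This is a toy stand-in only: the paper's exact mapping uses variational RG and RBMs, not this scheme.

```python
import random

def block_spin(spins, b=3):
    """Majority-rule coarse-graining: replace each block of b spins by the
    sign of its sum. Each application extracts features at a larger length
    scale, loosely analogous to one layer of a deep architecture."""
    out = []
    for i in range(0, len(spins) - b + 1, b):
        s = sum(spins[i:i + b])
        out.append(1 if s > 0 else -1)
    return out

random.seed(1)
chain = [random.choice([-1, 1]) for _ in range(27)]
level1 = block_spin(chain)   # 9 coarse-grained spins
level2 = block_spin(level1)  # 3 spins: features at a still larger scale
```

Stacking `block_spin` calls is the hand-crafted analogue of what, per the abstract, the stacked RBM layers learn automatically.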
pandomilla

Not a scratch - 7 views

shared by pandomilla on 12 Apr 12
  •  
    I hate scorpions, but this could be a nice subject for a future Ariadna study! This North African desert scorpion doesn't dig burrows to protect itself from the sand-laden wind (as other scorpions do). When the sand whips by at speeds that would strip paint away from steel, the scorpion is able to scurry off without apparent damage.
  •  
    Nice research, though they have done almost all the work that we could do in an Ariadna, didn't they? "To check, they took further photographs. In particular, they used a laser scanning system to make a three-dimensional map of the armour and then plugged the result into a computer program that blasted the virtual armour with virtual sand grains at various angles of attack. This process revealed that the granules were disturbing the air flow near the skeleton's surface in ways that appeared to be reducing the erosion rate. Their model suggested that if scorpion exoskeletons were smooth, they would experience almost twice the erosion rate that they actually do. Having tried things out in a computer, the team then tried them for real. They placed samples of steel in a wind tunnel and fired grains of sand at them using compressed air. One piece of steel was smooth, but the others had grooves of different heights, widths and separations, inspired by scorpion exoskeleton, etched onto their surfaces. Each sample was exposed to the lab-generated sandstorm for five minutes and then weighed to find out how badly it had been eroded. The upshot was that the pattern most resembling scorpion armour, with grooves that were 2mm apart, 5mm wide and 4mm high, proved best able to withstand the assault. Though not as good as the computer model suggested real scorpion geometry is, such grooving nevertheless cut erosion by a fifth, compared with a smooth steel surface. The lesson for aircraft makers, Dr Han suggests, is that a little surface irregularity might help to prolong the active lives of planes and helicopters, as well as those of scorpions."
  •  
    What bugs me (pardon the pun) is that the dimensions of the pattern they used were scaled up by many orders of magnitude, while the "grains of sand" with which the surface was bombarded apparently were not... Not being a specialist in the field, I would nevertheless expect that the size of the surface pattern *in relation to* the size of the particles used for bombarding would be crucial.
Thijs Versloot

Vibrational free cooling systems for sensors - 1 views

  •  
    The system is based on two liquids which are adsorbed. As the sensor generates heat, the liquids desorb and the pressure builds up; the vapour can then move to an expansion vessel which is held at a cooler temperature, where the liquids adsorb together again. This technique requires no mechanical compression and there is less vibration, leading to less wear and tear of components. It is being developed in a joint collaboration between UTwente and Dutch Space.
pacome delva

Spin-out puts new spin on wind energy - 0 views

  • The future of wind energy could involve huge blades spanning half a kilometre that generate compressed air – which is then piped into giant, underwater balloons.
Luís F. Simões

Lockheed Martin buys first D-Wave quantum computing system - 1 views

  • D-Wave develops computing systems that leverage the physics of quantum mechanics in order to address problems that are hard for traditional methods to solve in a cost-effective amount of time. Examples of such problems include software verification and validation, financial risk analysis, affinity mapping and sentiment analysis, object recognition in images, medical imaging classification, compressed sensing and bioinformatics.
  •  
    According to the company's wikipedia page, the computer costs $ 10 million. Can we then declare Quantum Computing has officially arrived?! quotes from elsewhere in the site: "first commercial quantum computing system on the market"; "our current superconducting 128-qubit processor chip is housed inside a cryogenics system within a 10 square meter shielded room" Link to the company's scientific publications. Interestingly, this company seems to have been running a BOINC project, AQUA@home, to "predict the performance of superconducting adiabatic quantum computers on a variety of hard problems arising in fields ranging from materials science to machine learning. AQUA@home uses Internet-connected computers to help design and analyze quantum computing algorithms, using Quantum Monte Carlo techniques". List of papers coming out of it.
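D-Wave's machines target Ising/QUBO-type optimization problems. For intuition only, here is a classical simulated-annealing toy on a 3-spin ferromagnetic Ising instance. This is an assumption-laden stand-in: it is not D-Wave's adiabatic quantum algorithm, just the kind of energy function such hardware minimizes.

```python
import math
import random

def anneal(J, h, steps=20_000, t0=2.0):
    """Classical simulated annealing on the Ising energy
    E(s) = sum_{i<j} J[i][j] s_i s_j + sum_i h[i] s_i
    (a toy stand-in for the adiabatic optimization done by the hardware)."""
    n = len(h)
    s = [random.choice([-1, 1]) for _ in range(n)]

    def energy(s):
        e = sum(h[i] * s[i] for i in range(n))
        e += sum(J[i][j] * s[i] * s[j]
                 for i in range(n) for j in range(i + 1, n))
        return e

    e = energy(s)
    for k in range(steps):
        t = t0 * (1 - k / steps) + 1e-9  # linear cooling schedule
        i = random.randrange(n)
        s[i] *= -1                        # propose a single-spin flip
        de = energy(s) - e
        if de <= 0 or random.random() < math.exp(-de / t):
            e += de                       # accept the move
        else:
            s[i] *= -1                    # reject: flip back
    return s, e

random.seed(42)
# 3-spin ferromagnet: ground states are all-up / all-down, energy -3.
J = [[0, -1, -1], [0, 0, -1], [0, 0, 0]]
h = [0, 0, 0]
state, e = anneal(J, h)
```

On an instance this small the annealer finds the aligned ground state essentially every time; the point of the (claimed) quantum hardware is scaling this kind of search to problems where classical annealing struggles.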
Luís F. Simões

Wind Power Without the Blades: Big Pics : Discovery News - 4 views

  • The carbon-fiber stalks, reinforced with resin, are about a foot wide at the base tapering to about 2 inches at the top. Each stalk will contain alternating layers of electrodes and ceramic discs made from piezoelectric material, which generates a current when put under pressure. In the case of the stalks, the discs will compress as they sway in the wind, creating a charge.
  • Based on rough estimates, said Núñez-Ameni the output would be comparable to that of a conventional wind farm covering the same area
  • After completion, a Windstalk should be able to produce as much electricity as a single wind turbine, with the advantage that output could be increased with a denser array of stalks. Density is not possible with conventional turbines, which need to be spaced about three times the rotor's diameter in order to avoid air turbulence. But Windstalks work on chaos and turbulence so they can be installed much closer together, said Núñez-Ameni.
  • ...1 more annotation...
  • Núñez-Ameni also reports that the firm is currently working on taking the Windstalk idea underwater. Called Wavestalk, the whole system would be inverted to harness energy from the flow of ocean currents and waves.
  •  
    additional information: http://atelierdna.com/?p=144
  •  
    isn't this a bit of a contradiction: on the one hand: "Based on rough estimates, said Núñez-Ameni the output would be comparable to that of a conventional wind farm covering the same area" and on the other: "After completion, a Windstalk should be able to produce as much electricity as a single wind turbine, with the advantage that output could be increased with a denser array of stalks. Density is not possible with conventional turbines, which need to be spaced about three times the rotor's diameter in order to avoid air turbulence. " still, very interesting concept!
Thijs Versloot

Hypersonic Successor to Legendary SR-71 Blackbird Spy Plane Unveiled - 1 views

  •  
    The new SR-72 will use a turbine-based combined cycle (TBCC) that will employ the turbine engine at lower speeds and use a scramjet at higher speeds. A scramjet engine is designed to operate at hypersonic velocities by compressing the air through a carefully designed inlet, but it needs to be traveling supersonic before it is practical to begin with. So far, research projects from NASA, the Air Force and other Pentagon entities have not been able to solve the problem of transitioning from the subsonic flight regime through hypersonic flight with a single aircraft. Same problem as Reaction Engines is trying to solve, so I am not sure whether they actually cracked it. In any case, nice pictures. Not sure why the exhaust is purple in color. It's not running on argon, I believe.
  •  
    Weird article. The intermediate thruster stage (ramjet) is missing. A scramjet has supersonic combustion, while a normal turbine delivers subsonic flows. Even with an afterburner, the scramjet inlet would have to decelerate the flow down to subsonic velocity, giving "normal" subsonic combustion. The only thing I can imagine is that the scramjet stage is bi-functional and covers both subsonic and supersonic combustion. But the article doesn't say anything about it.
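Why a scramjet keeps the combustion flow supersonic instead of decelerating it, as a ramjet does, drops out of the isentropic stagnation-temperature relation T0/T = 1 + (gamma - 1)/2 * M^2. A quick back-of-the-envelope sketch (gamma and the static temperature are assumed round numbers, not values from the article):

```python
GAMMA = 1.4       # ratio of specific heats for air
T_STATIC = 220.0  # K, rough stratospheric static temperature (assumption)

def stagnation_temperature(mach, t=T_STATIC, gamma=GAMMA):
    """Temperature the air reaches if decelerated to rest isentropically."""
    return t * (1 + (gamma - 1) / 2 * mach ** 2)

for m in (0.8, 3.0, 6.0):
    print(f"M = {m}: T0 ~ {stagnation_temperature(m):.0f} K")
```

At Mach 6 the stagnation temperature is already around 1800 K; slowing hypersonic air to subsonic speed before the combustor, as a ramjet does, would cook the engine, which is the basic reason for the ramjet/scramjet split the comment above points out.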