
Advanced Concepts Team / Group items tagged: modelling


Dario Izzo

Miguel Nicolelis Says the Brain Is Not Computable, Bashes Kurzweil's Singularity | MIT ... - 9 views

  •  
As I said ten years ago, and psychoanalysts said 100 years ago. Luis, I am so sorry :) Also ... now that the Commission has funded it, the Blue Brain project is a rather big hit. Btw, Nicolelis is a rather credited neuroscientist.
  • ...14 more comments...
  •  
    nice article; Luzi would agree as well I assume; one aspect not clear to me is the causal relationship it seems to imply between consciousness and randomness ... anybody?
  •  
This is the same thing Penrose has been saying for ages (and yes, I read the book). IF the human brain proves to be the only conceivable system capable of consciousness/intelligence AND IF we'll forever be limited to the Turing machine type of computation (which is what the "Not Computable" in the article refers to) AND IF the brain indeed is not computable, THEN AI people might need to worry... Because I seriously doubt the first condition will prove to be true, same with the second one, and because I don't really care about the third (brains are not my thing)... I'm not worried.
  •  
In any case, all AI research is going in the wrong direction: the mainstream is not about how to go beyond Turing machines, but rather about how to program them well enough ...... and that's not bringing us anywhere near the singularity.
  •  
It has not been shown that intelligence is not computable (only some people saying the human brain isn't, which is something different), so I wouldn't go so far as saying the mainstream is going in the wrong direction. But even if that indeed was the case, would it be a problem? If so, well, then someone should quickly go and tell all the people trading in financial markets that they should stop using computers... after all, they're dealing with uncomputable, undecidable problems. :) (and research on how to go beyond Turing computation does exist, but how much would you want to devote your research to a non-existent machine?)
  •  
    [warning: troll] If you are happy with developing algorithms that serve the financial market ... good for you :) After all they have been proved to be useful for humankind beyond any reasonable doubt.
  •  
Two comments from me: 1) an apparently credible scientist takes Kurzweil seriously enough to engage with him in polemics... oops 2) what worries me most is that I didn't get the retail store pun at the end of the article...
  •  
    True, but after Google hired Kurzweil he is de facto being taken seriously ... so I guess Nicolelis reacted to this.
  •  
    Crazy scientist in residence... interesting marketing move, I suppose.
  •  
Unfortunately, I can't upload my two kids to the cloud to make them sleep, that's why I comment only now :-). But, of course, I MUST add my comment to this discussion. I don't really get what Nicolelis' point is, the article is just too short and at a too popular level. But please realize that the question is not just "computable" vs. "non-computable". A system may be computable (we have a collection of rules called a "theory" that we can put on a computer and run in a finite time) and still it need not be predictable. Since the lack of predictability pretty obviously applies to the human brain (as it does to any sufficiently complex and nonlinear system), the question whether it is computable or not becomes rather academic. Markram and his fellows may come up with an incredible simulation program of the human brain, but this will be rather useless since they cannot solve the initial value problem, and even if they could, they would be lost in randomness after a short simulation time due to horrible non-linearities... Btw: this is not my idea, it was pointed out by Bohr more than 100 years ago...
  •  
I guess chaos is what you are referring to. Stuff like the Lorenz attractor. In which case I would say that the point is not to predict one particular brain (in which case you would be right): any initial conditions would be fine as long as a brain gets started :) that is the goal :)
  •  
Kurzweil talks about downloading your brain to a computer, so he has a specific brain in mind; Markram talks about identifying the neural basis of mental diseases, so he has at least pretty specific situations in mind. Chaos is not the only problem: even a perfectly linear brain (which is not a biological brain) is not predictable, since one cannot determine a complete set of initial conditions of a working (viz. living) brain (after having determined about 10%, the brain is dead and the data useless). But the situation is even worse: from all we know, a brain will only work with a suitable interaction with its environment. So these boundary conditions one has to determine as well. This is already twice impossible. But the situation is worse again: from all we know, the way the brain interacts with its environment at a neural level depends on its history (how this brain learned). So your boundary conditions (that are impossible to determine) depend on your initial conditions (that are impossible to determine). Thus the situation is rather impossible squared than twice impossible. I'm sure Markram will simulate something, but this will rather be the famous Boltzmann brain than a biological one. Boltzmann brains work with any initial conditions and any boundary conditions... and are pretty dead!
  •  
Say one has an accurate model of a brain. It may be the case that the initial and boundary conditions do not matter that much in order for the brain to function and exhibit macro-characteristics useful to make science. Again, if it is not one particular brain you are targeting, but the 'brain' as a general entity, this would make sense if one has an accurate model (also to identify the neural basis of mental diseases). But in my opinion, the construction of such a model of the brain is impossible using a reductionist approach (that is, taking the naive approach of putting together some artificial neurons and connecting them in a huge net). That is why both Kurzweil and Markram are doomed to fail.
  •  
I think that in principle some kind of artificial brain should be feasible. But making a brain by just throwing together a myriad of neurons is probably as promising as throwing together some copper pipes and a heap of silica and expecting it to make calculations for you. Like in the biological system, I suspect, an artificial brain would have to grow from a small tiny functional unit by adding neurons and complexity slowly and in a way that stably increases the "usefulness"/fitness. Apparently our brain's usefulness has to do with interpreting inputs of our sensors to the world and steering the body, making sure that those sensors, the brain and the rest of the body are still alive 10 seconds from now (thereby changing the world -> sensor inputs -> ...). So the artificial brain might need sensors and a body to affect the "world", creating a much larger feedback loop than the brain itself. One might argue that the complexity of the sensor inputs is the reason why the brain needs to be so complex in the first place. I never quite see from these "artificial brain" proposals in how far they are trying to simulate the whole system and not just the brain. Anyone? Or are they trying to simulate the human brain after it has been removed from the body? That might be somewhat easier I guess...
  •  
    Johannes: "I never quite see from these "artificial brain" proposals in how far they are trying to simulate the whole system and not just the brain." In Artificial Life the whole environment+bodies&brains is simulated. You have also the whole embodied cognition movement that basically advocates for just that: no true intelligence until you model the system in its entirety. And from that you then have people building robotic bodies, and getting their "brains" to learn from scratch how to control them, and through the bodies, the environment. Right now, this is obviously closer to the complexity of insect brains, than human ones. (my take on this is: yes, go ahead and build robots, if the intelligence you want to get in the end is to be displayed in interactions with the real physical world...) It's easy to dismiss Markram's Blue Brain for all their clever marketing pronouncements that they're building a human-level consciousness on a computer, but from what I read of the project, they seem to be developing a platfrom onto which any scientist can plug in their model of a detail of a detail of .... of the human brain, and get it to run together with everyone else's models of other tiny parts of the brain. This is not the same as getting the artificial brain to interact with the real world, but it's a big step in enabling scientists to study their own models on more realistic settings, in which the models' outputs get to effect many other systems, and throuh them feed back into its future inputs. So Blue Brain's biggest contribution might be in making model evaluation in neuroscience less wrong, and that doesn't seem like a bad thing. At some point the reductionist approach needs to start moving in the other direction.
  •  
@ Dario: absolutely agree, the reductionist approach is the main mistake. My point: if you take the reductionist approach, then you will face the initial and boundary value problem. If one tries a non-reductionist approach, this problem may be much weaker. But off the record: there exists a non-reductionist theory of the brain, it's called psychology... @ Johannes: also agree, the only way the reductionist approach could eventually be successful is to actually grow the brain. Start with essentially one neuron and grow the whole complexity. But if you want to do this, bring up a kid! A brain without a body might be easier? Why do you expect that a brain detached from its complete input/output system would actually still work? I'm pretty sure it does not!
  •  
    @Luzi: That was exactly my point :-)
santecarloni

Engineers enlist weather model to optimize offshore wind plan | Stanford School of Engi... - 0 views

  •  
    Using a sophisticated weather model, environmental engineers at Stanford have defined optimal placement of a grid of four wind farms off the U.S. East Coast. The model successfully balances production at times of peak demand and significantly reduces costly spikes and zero-power events.
Guido de Croon

Will robots be smarter than humans by 2029? - 2 views

  •  
    Nice discussion about the singularity. Made me think of drinking coffee with Luis... It raises some issues such as the necessity of embodiment, etc.
  • ...9 more comments...
  •  
    "Kurzweilians"... LOL. Still not sold on embodiment, btw.
  •  
    The biggest problem with embodiment is that, since the passive walkers (with which it all started), it hasn't delivered anything really interesting...
  •  
    The problem with embodiment is that it's done wrong. Embodiment needs to be treated like big data. More sensors, more data, more processing. Just putting a computer in a robot with a camera and microphone is not embodiment.
  •  
    I like how he attacks Moore's Law. It always looks a bit naive to me if people start to (ab)use it to make their point. No strong opinion about embodiment.
  •  
    @Paul: How would embodiment be done RIGHT?
  •  
Embodiment has some obvious advantages. For example, in the vision domain many hard problems become easy when you have a body with which you can take actions (like looking at an object you don't immediately recognize from a different angle) - a point already made by researchers such as Aloimonos and Ballard in the late 80s / early 90s. However, embodiment goes further than gathering information and "mental" recognition. In this respect, the evolutionary robotics work by, for example, Beer is interesting, where an agent discriminates between diamonds and circles by avoiding one and catching the other, without there being a clear "moment" in which the recognition takes place. "Recognition" is a behavioral property there, for which embodiment is obviously important. With embodiment the effort for recognizing an object behaviorally can be divided between the brain and the body, resulting in less computation for the brain. Also the article "Behavioural Categorisation: Behaviour makes up for bad vision" is interesting in this respect. In the field of embodied cognitive science, some say that recognition is constituted by the activation of sensorimotor correlations. I wonder to what extent this is true, and whether it holds from extremely simple creatures up to more advanced ones, but it is an interesting idea nonetheless. This being said, if "embodiment" implies having a physical body, then I would argue that it is not a necessary requirement for intelligence. "Situatedness", being able to take (virtual or real) "actions" that influence the "inputs", may be.
  •  
@Paul While I completely agree about the "embodiment done wrong" (or at least "not exactly correct") part, what you say goes exactly against one of the major claims connected with the notion of embodiment (google for "representational bottleneck"). The fact is your brain does *not* have the resources to deal with big data. The idea therefore is that it is the body that helps to deal with what to a computer scientist appears like "big data". Understanding how this happens is key. Whether it is a problem of scale or of actually understanding what happens should be quite conclusively shown by the outcomes of the Blue Brain project.
  •  
    Wouldn't one expect that to produce consciousness (even in a lower form) an approach resembling that of nature would be essential? All animals grow from a very simple initial state (just a few cells) and have only a very limited number of sensors AND processing units. This would allow for a fairly simple way to create simple neural networks and to start up stable neural excitation patterns. Over time as complexity of the body (sensors, processors, actuators) increases the system should be able to adapt in a continuous manner and increase its degree of self-awareness and consciousness. On the other hand, building a simulated brain that resembles (parts of) the human one in its final state seems to me like taking a person who is just dead and trying to restart the brain by means of electric shocks.
  •  
Actually on a neuronal level all information gets processed. Not all of it makes it into "conscious" processing or attention. Whatever makes it into conscious processing is a highly reduced representation of the data you get. However, that doesn't get lost. Basic, minimally processed data forms the basis of proprioception and reflexes. Every step you take is a macro command your brain issues to the intricate sensory-motor system that puts your legs in motion by actuating every muscle and correcting every step deviation from its desired trajectory using the complicated system of nerve endings and motor commands. Reflexes which were built over the years, as those massive amounts of data slowly got integrated into the nervous system and the incipient parts of the brain. But without all those sensors scattered throughout the body, all the little inputs in massive amounts that slowly get filtered through, you would not be able to experience your body, and experience the world. Every concept that you conjure up from your mind is a sort of loose association of your sensorimotor input. How can a robot understand the concept of a strawberry if all it can perceive of it is its shape and color and maybe the sound that it makes as it gets squished? How can you understand the "abstract" notion of strawberry without the incredibly sensitive tactile feel, without the act of ripping off the stem, without the motor action of taking it to our mouths, without its texture and taste? When we as humans summon the strawberry thought, all of these concepts and ideas converge (distributed throughout the neurons in our minds) to form this abstract concept formed out of all of these many many correlations. A robot with no touch, no taste, no delicate articulate motions, no "serious" way to interact with and perceive its environment, no massive flow of information from which to choose and reduce, will never attain human level intelligence. That's point 1. Point 2 is that mere pattern recogn
  •  
All information *that gets processed* gets processed, but now we have arrived at a tautology. The whole problem is that ultimately nobody knows what gets processed (not to mention how). In fact, the absolute statement that "all information" gets processed is very easy to dismiss, because the characteristics of our sensors are such that a lot of information is filtered out already at the input level (e.g. the eyes). I'm not saying it's not a valid and even interesting assumption, but it's still just an assumption, and the next step is to explore scientifically where it leads you. And until you show its superiority experimentally, it's as good as all the other alternative assumptions you can make. I only wanted to point out that "more processing" is not exactly compatible with some of the fundamental assumptions of embodiment. I recommend Wilson, 2002 as a crash course.
  •  
These deal with different things in human intelligence. One is the depth of the intelligence (how much of the bigger picture you can see, how abstract the concepts and ideas you can form are), another is the breadth of the intelligence (how well you can actually generalize, how encompassing those concepts are and at what level of detail you perceive all the information you have) and another is the relevance of the information (this is where the embodiment comes in: what you do is to a purpose, tied into the environment and ultimately linked to survival). As far as I see it, these form the pillars of human intelligence, and of the intelligence of biological beings. They are quite contradictory to each other, mainly due to physical constraints (such as, for example, energy usage and training time). "More processing" is not exactly compatible with some aspects of embodiment, but it is important for human-level intelligence. Embodiment is necessary for establishing an environmental context for actions, a constraint space if you will; failure of human minds (e.g. schizophrenia) is ultimately a failure of perceived embodiment. What we do know is that we perform a lot of compression and a lot of integration on a lot of data in an environmental coupling. Imo, take any of these parts out, and you cannot attain human+ intelligence. Vary the quantities and you'll obtain different manifestations of intelligence, from cockroach to cat to google to random quake bot. Increase them all beyond human levels and you're on your way towards the singularity.
LeopoldS

PLOS ONE: Galactic Cosmic Radiation Leads to Cognitive Impairment and Increas... - 1 views

  •  
    Galactic Cosmic Radiation consisting of high-energy, high-charged (HZE) particles poses a significant threat to future astronauts in deep space. Aside from cancer, concerns have been raised about late degenerative risks, including effects on the brain. In this study we examined the effects of 56Fe particle irradiation in an APP/PS1 mouse model of Alzheimer's disease (AD). We demonstrated 6 months after exposure to 10 and 100 cGy 56Fe radiation at 1 GeV/µ, that APP/PS1 mice show decreased cognitive abilities measured by contextual fear conditioning and novel object recognition tests. Furthermore, in male mice we saw acceleration of Aβ plaque pathology using Congo red and 6E10 staining, which was further confirmed by ELISA measures of Aβ isoforms. Increases were not due to higher levels of amyloid precursor protein (APP) or increased cleavage as measured by levels of the β C-terminal fragment of APP. Additionally, we saw no change in microglial activation levels judging by CD68 and Iba-1 immunoreactivities in and around Aβ plaques or insulin degrading enzyme, which has been shown to degrade Aβ. However, immunohistochemical analysis of ICAM-1 showed evidence of endothelial activation after 100 cGy irradiation in male mice, suggesting possible alterations in Aβ trafficking through the blood brain barrier as a possible cause of plaque increase. Overall, our results show for the first time that HZE particle radiation can increase Aβ plaque pathology in an APP/PS1 mouse model of AD.
Dario Izzo

IPCC models getting mushy | Financial Post - 2 views

  •  
    why am I not surprised .....
  •  
http://www.academia.edu/4210419/Can_climate_models_explain_the_recent_stagnation_in_global_warming A view of well-respected scientists on how to proceed from here, that was rejected by Nature. In any case, a long way to go...
  •  
unfortunately it's too early to cheer and burn more coal ... there is also a nice podcast associated with this paper from Nature: Recent global-warming hiatus tied to equatorial Pacific surface cooling, Yu Kosaka & Shang-Ping Xie, Nature 501, 403-407 (19 September 2013), doi:10.1038/nature12534. "Despite the continued increase in atmospheric greenhouse gas concentrations, the annual-mean global temperature has not risen in the twenty-first century [1, 2], challenging the prevailing view that anthropogenic forcing causes climate warming. Various mechanisms have been proposed for this hiatus in global warming [3-6], but their relative importance has not been quantified, hampering observational estimates of climate sensitivity. Here we show that accounting for recent cooling in the eastern equatorial Pacific reconciles climate simulations and observations. We present a novel method of uncovering mechanisms for global temperature change by prescribing, in addition to radiative forcing, the observed history of sea surface temperature over the central to eastern tropical Pacific in a climate model. Although the surface temperature prescription is limited to only 8.2% of the global surface, our model reproduces the annual-mean global temperature remarkably well with correlation coefficient r = 0.97 for 1970-2012 (which includes the current hiatus and a period of accelerated global warming). Moreover, our simulation captures major seasonal and regional characteristics of the hiatus, including the intensified Walker circulation, the winter cooling in northwestern North America and the prolonged drought in the southern USA. Our results show that the current hiatus is part of natural climate variability, tied specifically to a La-Niña-like decadal cooling. Although similar decadal hiatus events may occur in the future, the multi-decadal warming trend is very likely to continue with greenhouse gas increase."
LeopoldS

Common ecology quantifies human insurgency : Article : Nature - 0 views

  •  
nice paper; I especially like: "To our knowledge, our model provides the first unified explanation of high-frequency, intra-conflict data across human insurgencies. Other explanations of human insurgency are possible, though any competing theory would also need to replicate the results of Figs 1, 2, 3. Our model's specific mechanisms challenge traditional ideas of insurgency based on rigid hierarchies and networks, whereas its striking similarity to multi-agent financial market models [24-26] hints at a possible link between collective human dynamics in violent and non-violent settings [1-19]."
  •  
    There was also this paper ... Power Law Explains Insurgent Violence (http://sciencenow.sciencemag.org/cgi/content/full/2009/1216/1?rss=1)
johannessimon81

Is It Foolish to Model Nature's Complexity With Equations? - 1 views

  •  
They use a technique they call "Empirical Dynamic Modeling" to find correlations between different variables of chaotic systems. Might be interesting for things like climate modeling and similar chaotic systems. The idea seems pretty straightforward, but I never encountered it before - so I'm not sure if this is really a new development (curious if anybody else knows); a toy sketch of the core idea is included at the end of this thread. If you are short on time, just watch the video embedded in the article.
  •  
    Just by reading the page and the related material I didn't really get much, but I think it could be worth investing some time in reading more about it. But I'm interested in this, so I'll try to dig deeper!
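  •  
    A minimal sketch of the core step behind Empirical Dynamic Modeling - time-delay embedding followed by nearest-neighbour ("simplex") forecasting, in the spirit of Sugihara's work - is below. All names, parameters and the logistic-map toy series are illustrative choices, not taken from the article:

    # Toy simplex-projection forecast on a delay-embedded scalar time series.
    import numpy as np

    def delay_embed(x, E, tau=1):
        """Stack E lagged copies of a scalar series into E-dimensional state vectors."""
        n = len(x) - (E - 1) * tau
        return np.column_stack([x[i * tau : i * tau + n] for i in range(E)])

    def simplex_forecast(train, test, E=3, tau=1):
        """One-step-ahead prediction for each test state from its E+1 nearest
        neighbours in the embedded training set (weights decay with distance)."""
        emb = delay_embed(train, E, tau)
        X, y = emb[:-1], train[(E - 1) * tau + 1 :]   # states and their successors
        preds = []
        for point in delay_embed(test, E, tau)[:-1]:
            d = np.linalg.norm(X - point, axis=1)
            idx = np.argsort(d)[: E + 1]              # E+1 nearest neighbours
            w = np.exp(-d[idx] / max(d[idx][0], 1e-12))
            preds.append(np.sum(w * y[idx]) / np.sum(w))
        return np.array(preds)

    # Example: a chaotic logistic map. Forecast skill as a function of the
    # embedding dimension is the kind of diagnostic EDM uses to detect
    # low-dimensional deterministic structure in apparently noisy data.
    x = np.empty(1000)
    x[0] = 0.4
    for i in range(999):
        x[i + 1] = 3.9 * x[i] * (1.0 - x[i])
    pred = simplex_forecast(x[:500], x[500:], E=3)
    true = x[503 : 503 + len(pred)]
    print("forecast correlation:", np.corrcoef(pred, true)[0, 1])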
jaihobah

The Network Behind the Cosmic Web - 1 views

shared by jaihobah on 18 Apr 16
  •  
    "The concept of the cosmic web-viewing the universe as a set of discrete galaxies held together by gravity-is deeply ingrained in cosmology. Yet, little is known about architecture of this network or its characteristics. Our research used data from 24,000 galaxies to construct multiple models of the cosmic web, offering complex blueprints for how galaxies fit together. These three interactive visualizations help us imagine the cosmic web, show us differences between the models, and give us insight into the fundamental structure of the universe."
Lionel Jacques

String-theory calculations describe 'birth of the universe' - 0 views

  •  
    Researchers in Japan have developed what may be the first string-theory model with a natural mechanism for explaining why our universe would seem to exist in three spatial dimensions if it actually has six more. According to their model, only three of the nine dimensions started to grow at the beginning of the universe, accounting both for the universe's continuing expansion and for its apparently three-dimensional nature. ... The team has yet to prove that the Standard Model of particle physics will show up in its model,...
Friederike Sontag

Retooling the ocean conveyor belt - 1 views

  • In a paper in the June 18 issue of Science, a Duke University oceanographer reviews the growing body of evidence that suggests it's time to rethink the conveyor belt model.
  •  
    "The old model is no longer valid for the ocean's overturning, not because it's a gross simplification, but because it ignores crucial elements such as eddies and the wind field. The concept of a conveyor belt for the overturning was developed decades ago, before oceanographers had measured the eddy field of the ocean and before they understood how energy from the wind impacts the overturning,"
Nina Nadine Ridder

'This Planet Tastes Funny,' According to Spitzer - 2 views

  •  
Spectra of the atmosphere of the planet GJ436b in the constellation Leo show evidence for carbon monoxide. Theoretical studies using numerical models, however, predicted that this planet's carbon inventory should be stored in the form of methane rather than CO, as its temperature is estimated to be 800 K. Where this inaccuracy of the atmospheric models comes from is not known and has to be investigated further.
Ma Ru

IEEE Trans. Evolutionary Computation - Special Issue on Differential Evolution - 3 views

  •  
    Dario - perhaps worth giving a look to be up-to-date... There's even an article "Improving Classical and Decentralized Differential Evolution with New Mutation Operator and Population Topologies". They quote our CEC paper, but not the ParCo.
  • ...1 more comment...
  •  
    Don't know if you have full text access, so here goes the quote: "Recently, Izzo et al. designed in [27] a heterogeneous asynchronous island model for DE. They considered five islands and five DE strategies (DE/best/1/exp, DE/rand/1/exp, DE/rand-to-best/1/exp, DE/best/2/exp, and DE/rand/2/exp), and studied five distributed DEs using the same DE strategy in all the islands, and a heterogeneous model with one different DE strategy in every island. As a result, the heterogeneous model is not outstanding, but performs as well as the others."
  •  
Isn't it a bit of a paper-killing quote?
  •  
:) It's in the context of a review of the work that's been done on DE with island models in general; they don't evaluate it. Pity they didn't refer to the ParCo article on topologies, as it was a bit more extensive and more focused on the method (as they do in the article) rather than on the problem (as was our CEC paper, if I recall correctly).
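  •  
    The heterogeneous island-model DE described in the quote above can be sketched roughly as follows with the pygmo library. This is only a rough illustration written against the pygmo 2.x API (exact names and keywords may differ by version); the test problem, population sizes, generation counts and the absence of a migration topology are my own arbitrary choices, not those of the quoted study:

    # Five islands, each running a different DE strategy on the same problem.
    import pygmo as pg

    prob = pg.problem(pg.rosenbrock(dim=10))   # placeholder test problem

    # pagmo's DE variants 1..5 correspond to the /exp strategies mentioned in
    # the quote (best/1, rand/1, rand-to-best/1, best/2, rand/2).
    archi = pg.archipelago()
    for variant in range(1, 6):
        algo = pg.algorithm(pg.de(gen=100, variant=variant))
        archi.push_back(algo=algo, prob=prob, size=20)

    archi.evolve(10)     # a few rounds of asynchronous evolution
    archi.wait_check()
    print(min(f[0] for f in archi.get_champions_f()))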
Joris _

Global warming: Our best guess is likely wrong - 0 views

  • theoretical models cannot explain what we observe in the geological record
  • There appears to be something fundamentally wrong
  • something other than carbon dioxide caused much of the heating during the PETM
  •  
I find the title of the article misleading at best, and probably plainly wrong, since they seem to talk about conditions way back, and I am not sure how well our current models have been designed to work in these very different conditions - but this should probably rather be another good reason to put more effort into improving the models!
annaheffernan

Plasmons excite hot carriers - 1 views

  •  
    The first complete theory of how plasmons produce "hot carriers" has been developed by researchers in the US. The new model could help make this process of producing carriers more efficient, which would be good news for enhancing solar-energy conversion in photovoltaic devices.
  •  
I did not read the paper, but what is written further down in the article does not give much hope that this actually gives much more insight than what we had, nor that it could be used in any way to improve current PV cells soon. E.g.: "To fully exploit these carriers for such applications, researchers need to understand the physical processes behind plasmon-induced hot-carrier generation. Nordlander's team has now developed a simple model that describes how plasmons produce hot carriers in spherical silver nanoparticles and nanoshells. The model describes the conduction electrons in the metal as free particles and then analyses how plasmons excite hot carriers using Fermi's golden rule - a way to calculate how a quantum system transitions from one state into another following a perturbation. The model allows the researchers to calculate how many hot carriers are produced as a function of the light frequency used to excite the metal, as well as the rate at which they are produced. The spectral profile obtained is, to all intents and purposes, the "plasmonic spectrum" of the material. [Particle size and hot-carrier lifetimes] "Our analyses reveal that particle size and hot-carrier lifetimes are central for determining both the production rate and the energies of the hot carriers," says Nordlander. "Larger particles and shorter lifetimes produce more carriers with lower energies and smaller particles produce fewer carriers, but with higher energies."
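  •  
    For reference, the "Fermi's golden rule" mentioned in the quote is the standard first-order perturbation-theory expression for the rate of transitions from an initial state i to final states f (textbook notation, not taken from the paper):

        \Gamma_{i \to f} = \frac{2\pi}{\hbar} \, \left| \langle f | H' | i \rangle \right|^2 \, \rho(E_f)

    where H' is the perturbation and \rho(E_f) is the density of final states at the final-state energy.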
mkisantal

Better Language Models and Their Implications - 1 views

  •  
    Just read some of the samples of text generated with their neural networks, insane.
  • ...3 more comments...
  •  
    "Pérez and his friends were astonished to see the unicorn herd. These creatures could be seen from the air without having to move too much to see them - they were so close they could touch their horns. While examining these bizarre creatures the scientists discovered that the creatures also spoke some fairly regular English. Pérez stated, "We can see, for example, that they have a common 'language,' something like a dialect or dialectic."
  •  
    Shocking. I assume that this could indeed have severe implications if it gets in the "wrong hands".
  •  
    "Feed it the first few paragraphs of a Guardian story about Brexit, and its output is plausible newspaper prose, replete with "quotes" from Jeremy Corbyn, mentions of the Irish border, and answers from the prime minister's spokesman." https://www.youtube.com/watch?time_continue=37&v=XMJ8VxgUzTc "Feed it the opening line of George Orwell's Nineteen Eighty-Four - "It was a bright cold day in April, and the clocks were striking thirteen" - and the system recognises the vaguely futuristic tone and the novelistic style, and continues with: "I was in my car on my way to a new job in Seattle. I put the gas in, put the key in, and then I let it run. I just imagined what the day would be like. A hundred years from now. In 2045, I was a teacher in some school in a poor part of rural China. I started with Chinese history and history of science." (https://www.theguardian.com/technology/2019/feb/14/elon-musk-backed-ai-writes-convincing-news-fiction)
  •  
    It's really lucky that it was OpenAI who made that development and Elon Musk is so worried about AI. This way at least they try to assess the whole spectrum of abilities and applications of this model before releasing the full research to the public.
  •  
    They released a smaller model, I got it running on Sandy. It's fairly straight forward: https://github.com/openai/gpt-2
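  •  
    The repository above ships its own sampling scripts; as an alternative illustration of how little code it takes to sample from the released small model, here is a rough sketch using the Hugging Face transformers library (the "gpt2" checkpoint name and the sampling parameters are my own choices, not part of the original release):

    # Sample a continuation from the publicly released small GPT-2 model.
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")

    prompt = "It was a bright cold day in April, and the clocks were striking thirteen."
    input_ids = tokenizer.encode(prompt, return_tensors="pt")

    # top-k sampling, loosely mirroring the settings used for the published samples
    output = model.generate(input_ids, max_length=120, do_sample=True, top_k=40)
    print(tokenizer.decode(output[0], skip_special_tokens=True))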
Marcus Maertens

AI competitions don't produce useful models - Luke Oakden-Rayner - 2 views

  •  
    This is an interesting viewpoint on the applicability (usefulness) of AI models devised by competitions, backed up by easy statistics. Worth a read!
santecarloni

Computer Model Replays Europe's Cultural History  - Technology Review - 2 views

  •  
    A simple mathematical model of the way cultures spread reproduces some aspects of European history, say complexity scientists
Thijs Versloot

The big data brain drain - 3 views

  •  
    Echoing this, in 2009 Google researchers Alon Halevy, Peter Norvig, and Fernando Pereira penned an article under the title The Unreasonable Effectiveness of Data. In it, they describe the surprising insight that given enough data, often the choice of mathematical model stops being as important - that particularly for their task of automated language translation, "simple models and a lot of data trump more elaborate models based on less data." If we make the leap and assume that this insight can be at least partially extended to fields beyond natural language processing, what we can expect is a situation in which domain knowledge is increasingly trumped by "mere" data-mining skills. I would argue that this prediction has already begun to pan-out: in a wide array of academic fields, the ability to effectively process data is superseding other more classical modes of research.
Dario Izzo

Climate scientists told to 'cover up' the fact that the Earth's temperature hasn't rise... - 5 views

  •  
    This is becoming a mess :)
  • ...2 more comments...
  •  
I would avoid reading climate science from political journals, for a less selective / dramatic picture :-). Here is a good start: http://www.realclimate.org/ And an article on why climate understanding should be approached hierarchically (which is not the way it is done in the IPCC), a view with insight, from 8 years ago: http://www.princeton.edu/aos/people/graduate_students/hill/files/held2005.pdf
  •  
True, but funding is allocated to climate modelling 'science' on the basis of political decisions, not solid and boring scientific truisms such as 'all models are wrong'. The reason so many people got trained in this area in the past years is that resources were allocated to climate science on the basis of the dramatic picture depicted by some scientists when it was indeed convenient for them to be dramatic.
  •  
I see your point, and I agree that funding was also promoted through the energy players and their political influence. A coincident parallel interest, which is irrelevant to the fact that the question remains vital: how do we affect climate and how does it respond? It is a huge, complex system to analyse, which responds on various time scales that could obscure the trend. What if we made a conceptual parallelism with the L'Aquila case: is the scientific method guilty, or the interpretation of uncertainty in terms of societal mobilization? Should we leave the humanitarian aspect outside any scientific activity?
  •  
I do not think there is anyone arguing that the question is not interesting and complex. The debate, instead, addresses the predictive value of the models produced so far. Are they good enough to be used outside of the scientific process aimed at improving them? Or should one wait for "the scientific method" to bring forth substantial improvements to the current understanding and only then start using its results? One can take both standpoints, but some recent developments will bring many towards the second approach.
Beniamino Abis

The Wisdom of (Little) Crowds - 1 views

  •  
What is the best (wisest) size for a group of individuals? Couzin and Kao put together a series of mathematical models that included correlation and several cues. In one model, for example, a group of animals had to choose between two options - think of two places to find food. But the cues for each choice were not equally reliable, nor were they equally correlated. The scientists found that in these models, a group was more likely to choose the superior option than an individual. Common experience would make us expect that the bigger the group got, the wiser it would become. But they found something very different. Small groups did better than individuals. But bigger groups did not do better than small groups. In fact, they did worse. A group of 5 to 20 individuals made better decisions than an infinitely large crowd. The problem with big groups is this: a faction of the group will follow correlated cues - in other words, the cues that look the same to many individuals. If a correlated cue is misleading, it may cause the whole faction to cast the wrong vote. Couzin and Kao found that this faction can drown out the diversity of information coming from the uncorrelated cue. And this problem only gets worse as the group gets bigger (a toy simulation of this effect is sketched at the end of this thread).
  •  
Couzin's research was the starting point that co-inspired PaGMO from the very beginning. We invited him (and he came) to a formation flying conference for a plenary here at ESTEC. You can see PaGMO as a collective problem solving simulation. In that respect, we learned already that the size of the group and its internal structure (topology) count and cannot be too large or too random. One of the projects the ACT is running (and currently seeking new ideas/actors for) is briefly described here (http://esa.github.io/pygmo/examples/example2.html) and attempts to answer the question: "How is collective decision making influenced by the information flow through the group?" by looking at complex simulations of large 'archipelagos'.
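  •  
    A toy Monte Carlo of the drowning-out effect described above might look as follows. This is not a reimplementation of Couzin and Kao's models; all probabilities, the fraction of individuals following the shared cue, and the group sizes are arbitrary illustrative choices:

    # Each individual follows either a shared ("correlated") cue or a private
    # ("uncorrelated") cue; the group decides by majority vote. As the group
    # grows, accuracy converges to the reliability of the shared cue, because
    # the faction following it dominates the vote.
    import numpy as np

    rng = np.random.default_rng(0)

    def group_accuracy(n, trials=20_000, p_shared=0.55, p_private=0.75, frac_shared=0.7):
        correct = 0
        for _ in range(trials):
            shared_cue_right = rng.random() < p_shared      # one draw per trial
            uses_shared = rng.random(n) < frac_shared
            private_right = rng.random(n) < p_private       # one draw per individual
            votes_right = np.where(uses_shared, shared_cue_right, private_right)
            if 2 * votes_right.sum() > n:                   # strict majority correct
                correct += 1
            elif 2 * votes_right.sum() == n:                # break ties at random
                correct += rng.random() < 0.5
        return correct / trials

    for n in (1, 3, 5, 11, 25, 101):
        print(n, round(group_accuracy(n), 3))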