Advanced Concepts Team / Group items tagged Sleep


Francesco Biscani

STLport: An Interview with A. Stepanov - 2 views

  • Generic programming is a programming method that is based in finding the most abstract representations of efficient algorithms.
  • I spent several months programming in Java.
  • for the first time in my life programming in a new language did not bring me new insights
  • it has no intellectual value whatsoever
  • Java is clearly an example of a money oriented programming (MOP).
  •  
    One of the authors of the STL (C++'s Standard Template Library) explains generic programming and slams Java. (A minimal code sketch of generic programming follows this thread.)
  •  
    "Java is clearly an example of a money oriented programming (MOP)." Exactly. And for the industry it's the money that matters. Whatever mathematicians think about it.
  •  
    It is actually a good thing that it is "MOP" (even though I do not agree with this term): that is what makes it interoperable, light and easy to learn. There is no point in writing fancy code if it does not bring anything to the end-user, but only gives geeks incomprehensible things to discuss in forums. Anyway, I am pretty sure we can find a Java guy slamming C++ ;)
  •  
    Personally, I never understood what the point of Java is, given that: 1) I do not know of any developer (maybe Marek?) who uses it for intellectual pleasure/curiosity/fun, given the possibility of choice - this to me speaks more loudly about the objective qualities of the language than any industrial-corporate marketing bullshit (for the record, I argue that Python is more interoperable, lighter and easier to learn than Java - which is why, e.g., Google is using it heavily); 2) I have used software developed in Java maybe a total of 5 times on any computer/laptop I have owned over 15 years. I cannot name one single Java project that I find necessary or even useful; for my usage of computers, Java could disappear overnight without my even noticing. Then of course one can argue as much as one wants about the "industry choosing Java", to which I would counter with examples of industry doing stupid things and making absurd choices. But I suppose it would be a kind of pointless discussion, so I'll just stop here :)
  •  
    "At Google, python is one of the 3 "official languages" alongside with C++ and Java". Java runs everywhere (the byte code itself) that is I think the only reason it became famous. Python, I guess, is more heavy if it were to run on your web browser! I think every language has its pros and cons, but I agree Java is not the answer to everything... Java is used in MATLAB, some web applications, mobile phones apps, ... I would be a bit in trouble if it were to disappear today :(
  •  
    I personally do not believe in interoperability :)
  •  
    Well, I bet you'd notice an overnight disappearance of Java, because half of the internet would vanish... J2EE technologies are just omnipresent there... I'd rather not even *think* about developing a web application/webservice/web-whatever in standard C++... is it actually possible?? Perhaps with some weird Microsoft solutions... I bet your bank's online services are written in Java. Certainly not in PHP+MySQL :) Industry has chosen Java not because of industrial-corporate marketing bullshit, but because of economics... it enables you to develop robust, reliable, error-free, modular, well-integrated, etc. software. And the costs? Well, using Java technologies you can set up enterprise-quality web application servers and get a fully featured development environment (which is better than ANY C/C++/whatever development environment I've EVER seen) at the cost of exactly 0 (zero!) USD/GBP/EUR... For many years now, the central issue in software development has not been implementing algorithms but building applications. And that's where Java outperforms many other technologies. The final remark, because I may be mistakenly taken for an apostle of Java or something... I love the idea of generic programming, C++ is my favourite programming language (and I used to read Stroustrup before sleep), and in my leisure time I write programs in Python... But if I were to start a software development company then, apart from some very niche applications like computer games, it would most probably use Java as its main technology.
  •  
    "I'd rather not even *think* about developing a web application/webservice/web-whatever in standard C++... is it actually possible?? Perhaps with some weird Microsoft solutions... I bet your bank online services are written in Java. Certainly not in PHP+MySQL :)" Doing in C++ would be awesomely crazy, I agree :) But as I see it there are lots of huge websites that operate on PHP, see for instance Facebook. For the banks and the enterprise market, as a general rule I tend to take with a grain of salt whatever spin comes out from them; in the end behind every corporate IT decision there is a little smurf just trying to survive and have the back covered :) As they used to say in the old times, "No one ever got fired for buying IBM". "Industry has chosen Java not because of industrial-corporate marketing bullshit, but because of economics... it enables you develop robustly, reliably, error-prone, modular, well integrated etc... software. And the costs? Well, using java technologies you can set-up enterprise-quality web application servers, get a fully featured development environment (which is better than ANY C/C++/whatever development environment I've EVER seen) at the cost of exactly 0 (zero!) USD/GBP/EUR... Since many years now, the central issue in software development is not implementing algorithms, it's building applications. And that's where Java outperforms many other technologies." Apart from the IDE considerations (on which I cannot comment, since I'm not a IDE user myself), I do not see how Java beats the competition in this regard (again, Python and the huge software ecosystem surrounding it). My impression is that Java's success is mostly due to Sun pushing it like there is no tomorrow and bundling it with their hardware business.
  •  
    OK, I think there is a bit of everything, wrong and right, but you have to acknowledge that Python is not always the simplest. For info, Facebook uses Java (when you upload a picture, for instance), and PHP is very limited. So definitely, in companies, engineers like you and me select the language; it is not a marketing or political thing. And in the case of fb, they came to the conclusion that PHP and Java each don't do everything but complement each other. As you say, Python has many things around it, but it might be too much for simple applications. Otherwise, I would seriously be interested in a study of how to implement a Python-like system on board spacecraft and what the advantages are over mixing C, Ada and Java.
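A minimal sketch of what Stepanov means by generic programming, referenced in the thread above: one abstract algorithm, written once, that works unchanged on any type supporting the required operations. The reduce function below is our own illustrative example, not code from the interview or from the STL itself.

    #include <iostream>
    #include <list>
    #include <string>
    #include <vector>

    // A generic reduction: works on any iterator range whose value type
    // supports operator+. The algorithm is written once, at the most
    // abstract level, yet compiles to efficient type-specific code.
    template <typename Iterator, typename T>
    T reduce(Iterator first, Iterator last, T init) {
        for (; first != last; ++first)
            init = init + *first;
        return init;
    }

    int main() {
        std::vector<int> numbers{1, 2, 3, 4};
        std::list<std::string> words{"generic", " ", "programming"};

        // One abstract algorithm, two unrelated containers and element types.
        std::cout << reduce(numbers.begin(), numbers.end(), 0) << "\n";         // prints 10
        std::cout << reduce(words.begin(), words.end(), std::string{}) << "\n"; // prints "generic programming"
    }

This is essentially std::accumulate from the STL; the point is that the abstraction costs nothing at run time, since the compiler instantiates efficient type-specific code for each use.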
Tobias Seidl

Airport Reviews | Budget Traveller's Guide to Sleeping in Airports - 0 views

  •  
    How to reduce mission costs.
nikolas smyrlakis

BBC NEWS | Technology | 'Road trains' get ready to roll - 3 views

  •  
    sleeping and driving on the highway
  •  
    "The lead vehicle would be handled by a professional driver who would monitor the status of the road train. Those in following vehicles could take their hands off the wheel, read a book or watch TV, while they travel along the motorway. Their vehicle would be controlled by the lead vehicle." .... what or who is defining a professional driver? one of the always overly tired truck drivers?
Marion Nachon

Galaxy collisions not the only source of monster black hole activity | Space | EarthSky - 1 views

  •  
    In a surprise announcement earlier today (July 13), the European Southern Observatory said that monster black holes - those giants of millions or billions of solar masses, thought to lurk at the hearts of most galaxies - have a mechanism to become active other than galaxy collisions.
  •  
    "A new study combining data from ESO's Very Large Telescope and ESA's XMM-Newton X-ray space observatory has turned up a surprise. Most of the huge black holes in the centers of galaxies in the past 11 billion years were not turned on by mergers between galaxies, as had been previously thought." and "The process that activates a sleeping black hole - turning its galaxy from quiet to active - has been a mystery in astronomy. What triggers the violent outbursts at a galaxy's center, which then becomes an active galactic nucleus? Up to now, many astronomers thought that most of these active nuclei were turned on when two galaxies merged, or when they passed close to each other and the disrupted material became fuel for the central black hole. Results of the new study indicate this idea may be wrong for many active galaxies." very interesting indeed
Thijs Versloot

Coffee Naps Better For Alertness Than Coffee Or Naps Alone - Slashdot - 2 views

  •  
    Scientific proof, shotgun on the red couch!
  •  
    This really only works if you can fall asleep instantly. If it takes you 10 minutes or more to fall asleep, the whole procedure fails miserably.
Thijs Versloot

Is increased light exposure from screens and phones bad for your health? @Wired - 1 views

  •  
    As Stevens says in the new article, researchers now know that increased nighttime light exposure tracks with increased rates of breast cancer, obesity and depression. Correlation isn't causation, of course, and it's easy to imagine all the ways researchers might mistake those findings. The easy availability of electric lighting almost certainly tracks with various disease-causing factors: bad diets, sedentary lifestyles, exposure to the array of chemicals that come along with modernity. Very difficult to prove causation, I would think, but there are known relationships between hormone levels and light.
  •  
    There is actually a Windows program called f.lux that changes the colour temperature of your screen to match natural light cycles. When the sun sets, it switches to a "warmer", more reddish tint on your screen to promote sleepiness. The typical bright blue/neon white tint of most PC screens is quite "awakening" and keeps your brain running for longer, which impacts your sleeping patterns and brings all the consequences of that. Amazingly, this f.lux thing does have an effect (a toy sketch of the underlying idea follows this thread). That being said, I wouldn't be too quick to blame it all on PC/artificial lighting time. Sedentary lifestyles, etc. can very well place one in a position of long-term PC/phone usage, so it's quite hard to draw a causal link.
  •  
    Nice - it also exists for Mac, btw: https://justgetflux.com/news/pages/mac/
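The general idea behind f.lux, as described above, is simple to sketch: pick a screen tint from a daylight-like curve based on the local time, neutral white by day and warm/reddish by night. We do not know f.lux's actual algorithm, so the RGB endpoints and ramp hours below are guessed toy values, not the real program's.

    #include <array>
    #include <cstdio>

    // Toy sketch of the f.lux idea (NOT f.lux's actual algorithm):
    // blend the screen tint from neutral white during the day towards
    // a warm, reddish white at night. RGB endpoints are rough guesses.
    std::array<double, 3> tint_for_hour(int hour) {
        const std::array<double, 3> day{1.00, 1.00, 1.00};   // neutral white
        const std::array<double, 3> night{1.00, 0.60, 0.35}; // warm/reddish

        // Fully "day" between 08:00 and 18:00, fully "night" otherwise,
        // with a crude 2-hour linear ramp at dawn and dusk.
        double w; // weight of the daytime tint, in [0, 1]
        if (hour >= 8 && hour < 18)       w = 1.0;
        else if (hour >= 6 && hour < 8)   w = (hour - 6) / 2.0;
        else if (hour >= 18 && hour < 20) w = (20 - hour) / 2.0;
        else                              w = 0.0;

        return {w * day[0] + (1 - w) * night[0],
                w * day[1] + (1 - w) * night[1],
                w * day[2] + (1 - w) * night[2]};
    }

    int main() {
        for (int hour : {3, 7, 12, 19, 23}) {
            std::array<double, 3> t = tint_for_hour(hour);
            std::printf("%02d:00 -> R=%.2f G=%.2f B=%.2f\n", hour, t[0], t[1], t[2]);
        }
    }

A real implementation would also apply the tint to the display's gamma ramp and follow actual sunrise/sunset times for the user's location, which is what makes programs like f.lux convenient.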
Dario Izzo

Miguel Nicolelis Says the Brain Is Not Computable, Bashes Kurzweil's Singularity | MIT ... - 9 views

  •  
    As I said ten years ago, and psychoanalysts said 100 years ago. Luis, I am so sorry :) Also... now that the Commission has funded the project, Blue Brain is a rather big hit. Btw, Nicolelis is a rather credited neuroscientist.
  •  
    Nice article; Luzi would agree as well, I assume. One aspect that is not clear to me is the causal relationship it seems to imply between consciousness and randomness... anybody?
  •  
    This is the same thing Penrose has been saying for ages (and yes, I read the book). IF the human brain proves to be the only conceivable system capable of consciousness/intelligence, AND IF we'll forever be limited to the Turing-machine type of computation (which is what the "Not Computable" in the article refers to), AND IF the brain indeed is not computable, THEN AI people might need to worry... Because I seriously doubt the first condition will prove to be true, same with the second one, and because I don't really care about the third (brains are not my thing)... I'm not worried.
  •  
    In any case, all AI research is going in the wrong direction: the mainstream is not about how to go beyond Turing machines, but rather how to program them well enough... and that's not bringing us anywhere near the singularity.
  •  
    It has not been shown that intelligence is not computable (only some people saying the human brain isn't, which is something different), so I wouldn't go so far as saying the mainstream is going in the wrong direction. But even if that indeed was the case, would it be a problem? If so, well, then someone should quickly go and tell all the people trading in financial markets that they should stop using computers... after all, they're dealing with uncomputable, undecidable problems. :) (And research on how to go beyond Turing computation does exist, but how much would you want to devote your research to a non-existent machine?)
  •  
    [warning: troll] If you are happy developing algorithms that serve the financial market... good for you :) After all, they have been proven useful to humankind beyond any reasonable doubt.
  •  
    Two comments from me: 1) an apparently credible scientist takes Kurzweil seriously enough to engage with him in polemics... oops; 2) what worries me most is that I didn't get the retail-store pun at the end of the article...
  •  
    True, but after Google hired Kurzweil he is de facto being taken seriously ... so I guess Nicolelis reacted to this.
  •  
    Crazy scientist in residence... interesting marketing move, I suppose.
  •  
    Unfortunately, I can't upload my two kids to the cloud to make them sleep, that's why I comment only now :-). But, of course, I MUST add my comment to this discussion. I don't really get what Nicolelis' point is; the article is just too short and at too popular a level. But please realize that the question is not just "computable" vs. "non-computable". A system may be computable (we have a collection of rules called a "theory" that we can put on a computer and run in finite time) and still it need not be predictable. Since the lack of predictability pretty obviously applies to the human brain (as it does to any sufficiently complex and nonlinear system), the question of whether it is computable or not becomes rather academic. Markram and his fellows may come up with an incredible simulation program of the human brain; it will be rather useless, since they cannot solve the initial value problem, and even if they could, they would be lost in randomness after a short simulation time due to horrible non-linearities... Btw: this is not my idea, it was pointed out by Bohr more than 100 years ago...
  •  
    I guess chaos is what you are referring to, stuff like the Lorenz attractor (a small numerical sketch of this sensitivity follows this thread). In which case I would say that the point is not to predict one particular brain (in which case you would be right): any initial conditions would be fine, as long as some brain gets started :) that is the goal :)
  •  
    Kurzweil talks about downloading your brain to a computer, so he has a specific brain in mind; Markram talks about identifying the neural basis of mental diseases, so he has at least pretty specific situations in mind. Chaos is not the only problem; even a perfectly linear brain (which a biological brain is not) is not predictable, since one cannot determine a complete set of initial conditions of a working (viz. living) brain (after you have determined about 10%, the brain is dead and the data useless). But the situation is even worse: from all we know, a brain will only work with a suitable interaction with its environment, so these boundary conditions one has to determine as well. This is already twice impossible. But the situation is worse again: from all we know, the way the brain interacts with its environment at a neural level depends on its history (how this brain learned). So your boundary conditions (that are impossible to determine) depend on your initial conditions (that are impossible to determine). Thus the situation is rather impossible squared than twice impossible. I'm sure Markram will simulate something, but it will be the famous Boltzmann brain rather than a biological one. Boltzmann brains work with any initial conditions and any boundary conditions... and are pretty dead!
  •  
    Say one has an accurate model of a brain. It may be the case that the initial and boundary conditions do not matter that much in order for the brain to function and exhibit macro-characteristics useful for doing science. Again, if it is not one particular brain you are targeting, but the 'brain' as a general entity, this would make sense if one has an accurate model (also to identify the neural basis of mental diseases). But in my opinion, the construction of such a model of the brain is impossible using a reductionist approach (that is, the naive approach of putting together some artificial neurons and connecting them in a huge net). That is why both Kurzweil and Markram are doomed to fail.
  •  
    I think that in principle some kind of artificial brain should be feasible. But making a brain by just throwing together a myriad of neurons is probably as promising as throwing together some copper pipes and a heap of silica and expecting it to make calculations for you. Like in the biological system, I suspect, an artificial brain would have to grow from a tiny functional unit by adding neurons and complexity slowly, in a way that stably increases its "usefulness"/fitness. Apparently our brain's usefulness has to do with interpreting the inputs of our sensors to the world and steering the body, making sure that those sensors, the brain and the rest of the body are still alive 10 seconds from now (thereby changing the world -> sensor inputs -> ...). So the artificial brain might need sensors and a body to affect the "world", creating a much larger feedback loop than the brain itself. One might argue that the complexity of the sensor inputs is the reason why the brain needs to be so complex in the first place. I never quite see, from these "artificial brain" proposals, to what extent they are trying to simulate the whole system and not just the brain. Anyone? Or are they trying to simulate the human brain after it has been removed from the body? That might be somewhat easier, I guess...
  •  
    Johannes: "I never quite see from these "artificial brain" proposals in how far they are trying to simulate the whole system and not just the brain." In Artificial Life the whole environment+bodies&brains is simulated. You have also the whole embodied cognition movement that basically advocates for just that: no true intelligence until you model the system in its entirety. And from that you then have people building robotic bodies, and getting their "brains" to learn from scratch how to control them, and through the bodies, the environment. Right now, this is obviously closer to the complexity of insect brains, than human ones. (my take on this is: yes, go ahead and build robots, if the intelligence you want to get in the end is to be displayed in interactions with the real physical world...) It's easy to dismiss Markram's Blue Brain for all their clever marketing pronouncements that they're building a human-level consciousness on a computer, but from what I read of the project, they seem to be developing a platfrom onto which any scientist can plug in their model of a detail of a detail of .... of the human brain, and get it to run together with everyone else's models of other tiny parts of the brain. This is not the same as getting the artificial brain to interact with the real world, but it's a big step in enabling scientists to study their own models on more realistic settings, in which the models' outputs get to effect many other systems, and throuh them feed back into its future inputs. So Blue Brain's biggest contribution might be in making model evaluation in neuroscience less wrong, and that doesn't seem like a bad thing. At some point the reductionist approach needs to start moving in the other direction.
  •  
    @Dario: absolutely agree, the reductionist approach is the main mistake. My point: if you take the reductionist approach, then you will face the initial and boundary value problem. If one tries a non-reductionist approach, this problem may be much weaker. But off the record: there exists a non-reductionist theory of the brain, it's called psychology... @Johannes: also agree, the only way the reductionist approach could eventually be successful is to actually grow the brain. Start with essentially one neuron and grow the whole complexity. But if you want to do this, bring up a kid! A brain without a body might be easier? Why would you expect that a brain detached from its complete input/output system still works? I'm pretty sure it does not!
  •  
    @Luzi: That was exactly my point :-)
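The chaos point raised in this thread (the Lorenz attractor, the practical impossibility of solving the initial value problem) is easy to make concrete: integrate the Lorenz system twice from initial conditions differing by one part in a billion and watch the trajectories diverge. This is our own minimal sketch with the classic parameter values; a fixed-step Euler integrator is used purely for brevity, not accuracy.

    #include <cmath>
    #include <cstdio>

    struct State { double x, y, z; };

    // One forward-Euler step of the Lorenz system with the classic
    // chaotic parameters (sigma = 10, rho = 28, beta = 8/3).
    State step(State s, double dt) {
        const double sigma = 10.0, rho = 28.0, beta = 8.0 / 3.0;
        return {s.x + dt * sigma * (s.y - s.x),
                s.y + dt * (s.x * (rho - s.z) - s.y),
                s.z + dt * (s.x * s.y - beta * s.z)};
    }

    int main() {
        State a{1.0, 1.0, 1.0};
        State b{1.0 + 1e-9, 1.0, 1.0}; // perturbed by one part in a billion
        const double dt = 0.001;

        for (int i = 0; i <= 40000; ++i) { // 40 time units
            if (i % 5000 == 0) {
                double d = std::sqrt((a.x - b.x) * (a.x - b.x) +
                                     (a.y - b.y) * (a.y - b.y) +
                                     (a.z - b.z) * (a.z - b.z));
                std::printf("t = %5.1f   separation = %.3e\n", i * dt, d);
            }
            a = step(a, dt);
            b = step(b, dt);
        }
    }

The separation grows by many orders of magnitude within a few tens of time units, which is the sense in which even a perfectly known, computable system can be unpredictable: any uncertainty in the initial state is amplified exponentially.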
gpetit

Rocking synchronizes brain waves during a short nap - 0 views

  •  
    A rocking bed might help you fall asleep and keep you in deep sleep!