PluUltraTech Programming Languages
spiritandfire

The Xcode cliff: is Apple teaching kids to code, or just about code? - The Verge - 0 views

  • Swift Playgrounds is a wonderful introduction to programming. It introduces imperative logic, functions, methods, loops, and many of the marvelous APIs that are available to iOS developers. But it’s called a “playground” for a reason: you can’t make an app with Swift Playgrounds. You play with code, you learn about code, and you do indeed code. But if you want to build something useful and distributable, you need to look elsewhere. Some popular options are Codea, which allows you to build full apps in Lua on your iPad; Pythonista, which offers a Python IDE and a number of popular libraries to work with; and of course there’s always the cloud. But inside the Apple ecosystem, this “elsewhere” is called Xcode. It’s a huge and complicated application that runs only on Macs, and requires an Apple Developer account to effectively distribute the software you build.
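    A minimal sketch, in Swift, of the flavor of code a Playgrounds lesson has you write: imperative logic, a function, and a loop. The task and names here are invented for illustration, not taken from Swift Playgrounds or the article:

        // Walk a number of steps, picking up a gem on every third one.
        // (Illustrative toy exercise, not an actual Playgrounds lesson.)
        func collectGems(steps: Int) -> Int {
            var gems = 0
            for step in 1...steps {
                if step % 3 == 0 {
                    gems += 1
                }
            }
            return gems
        }

        print(collectGems(steps: 12))  // prints "4"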
spiritandfire

The Coming Software Apocalypse - The Atlantic - 0 views

  • It’s been said that software is “eating the world.” More and more, critical systems that were once controlled mechanically, or by people, are coming to depend on code. This was perhaps never clearer than in the summer of 2015, when on a single day, United Airlines grounded its fleet because of a problem with its departure-management system; trading was suspended on the New York Stock Exchange after an upgrade; the front page of The Wall Street Journal’s website crashed; and Seattle’s 911 system went down again, this time because a different router failed. The simultaneous failure of so many software systems smelled at first of a coordinated cyberattack. Almost more frightening was the realization, late in the day, that it was just a coincidence.
  • “When we had electromechanical systems, we used to be able to test them exhaustively,” says Nancy Leveson, a professor of aeronautics and astronautics at the Massachusetts Institute of Technology who has been studying software safety for 35 years. She became known for her report on the Therac-25, a radiation-therapy machine that killed six patients because of a software error. “We used to be able to think through all the things it could do, all the states it could get into.” The electromechanical interlockings that controlled train movements at railroad crossings, for instance, only had so many configurations; a few sheets of paper could describe the whole system, and you could run physical trains against each configuration to see how it would behave. Once you’d built and tested it, you knew exactly what you were dealing with.
  • Software is different. Just by editing the text in a file somewhere, the same hunk of silicon can become an autopilot or an inventory-control system. This flexibility is software’s miracle, and its curse. Because it can be changed cheaply, software is constantly changed; and because it’s unmoored from anything physical—a program that is a thousand times more complex than another takes up the same actual space—it tends to grow without bound. “The problem,” Leveson wrote in a book, “is that we are attempting to build systems that are beyond our ability to intellectually manage.”
  • ...11 more annotations...
  • Our standard framework for thinking about engineering failures—reflected, for instance, in regulations for medical devices—was developed shortly after World War II, before the advent of software, for electromechanical systems. The idea was that you make something reliable by making its parts reliable (say, you build your engine to withstand 40,000 takeoff-and-landing cycles) and by planning for the breakdown of those parts (you have two engines). But software doesn’t break. Intrado’s faulty threshold is not like the faulty rivet that leads to the crash of an airliner. The software did exactly what it was told to do. In fact it did it perfectly. The reason it failed is that it was told to do the wrong thing. Software failures are failures of understanding, and of imagination. Intrado actually had a backup router, which, had it been switched to automatically, would have restored 911 service almost immediately. But, as described in a report to the FCC, “the situation occurred at a point in the application logic that was not designed to perform any automated corrective actions.”
    This is the trouble with making things out of code, as opposed to something physical. “The complexity,” as Leveson puts it, “is invisible to the eye.”
  • Software has enabled us to make the most intricate machines that have ever existed. And yet we have hardly noticed, because all of that complexity is packed into tiny silicon chips as millions and millions of lines of code. But just because we can’t see the complexity doesn’t mean that it has gone away.
  • “Spaghetti code” is programmer lingo for software that has become a tangled mess. Code turns to spaghetti when it accretes over many years, with feature after feature piling on top of, and being woven around, what’s already there; eventually the code becomes impossible to follow, let alone to test exhaustively for flaws.
  • The bias against model-based design, sometimes known as model-driven engineering, or MDE, is in fact so ingrained that according to a recent paper, “Some even argue that there is a stronger need to investigate people’s perception of MDE than to research new MDE technologies.”
  • The surprising part was that this description was said to be mathematically precise: An algorithm written in TLA+ could in principle be proven correct. In practice, it allowed you to create a realistic model of your problem and test it not just thoroughly, but exhaustively. This was exactly what he’d been looking for: a language for writing perfect algorithms.
    TLA+, which stands for “Temporal Logic of Actions,” is similar in spirit to model-based design: It’s a language for writing down the requirements—TLA+ calls them “specifications”—of computer programs. These specifications can then be completely verified by a computer. That is, before you write any code, you write a concise outline of your program’s logic, along with the constraints you need it to satisfy (say, if you were programming an ATM, a constraint might be that you can never withdraw the same money twice from your checking account). TLA+ then exhaustively checks that your logic does, in fact, satisfy those constraints. If not, it will show you exactly how they could be violated. (A toy Swift sketch of this style of exhaustive checking appears after these excerpts.)
  • The language was invented by Leslie Lamport, a Turing Award–winning computer scientist. With a big white beard and scruffy white hair, and kind eyes behind large glasses, Lamport looks like he might be one of the friendlier professors at the American Hogwarts. Now at Microsoft Research, he is known as one of the pioneers of the theory of “distributed systems,” which describes any computer system made of multiple parts that communicate with each other.
  • For Lamport, a major reason today’s software is so full of bugs is that programmers jump straight into writing code. “Architects draw detailed plans before a brick is laid or a nail is hammered,” he wrote in an article. “But few programmers write even a rough sketch of what their programs will do before they start coding.” Programmers are drawn to the nitty-gritty of coding because code is what makes programs go; spending time on anything else can seem like a distraction. And there is a patient joy, a meditative kind of satisfaction, to be had from puzzling out the micro-mechanics of code. But code, Lamport argues, was never meant to be a medium for thought. “It really does constrain your ability to think when you’re thinking in terms of a programming language,” he says. Code makes you miss the forest for the trees: It draws your attention to the working of individual pieces, rather than to the bigger picture of how your program fits together, or what it’s supposed to do—and whether it actually does what you think.
  • TLA+ occupies just a small, far corner of the mainstream, if it can be said to take up any space there at all. Even to a seasoned engineer like Newcombe, the language read at first as bizarre and esoteric—a zoo of symbols. For Lamport, this is a failure of education. Though programming was born in mathematics, it has since largely been divorced from it. Most programmers aren’t very fluent in the kind of math—logic and set theory, mostly—that you need to work with TLA+. “Very few programmers—including very few teachers of programming—understand the very basic concepts and how they’re applied in practice. And they seem to think that all they need is code,” Lamport says. “The idea that there’s some higher level than the code in which you need to be able to think precisely, and that mathematics actually allows you to think precisely about it, is just completely foreign. Because they never learned it.”
  • Lamport sees this failure to think mathematically about what they’re doing as the problem of modern software development in a nutshell: The stakes keep rising, but programmers aren’t stepping up—they haven’t developed the chops required to handle increasingly complex problems. “In the 15th century,” he said, “people used to build cathedrals without knowing calculus, and nowadays I don’t think you’d allow anyone to build a cathedral without knowing calculus. And I would hope that after some suitably long period of time, people won’t be allowed to write programs if they don’t understand these simple things.”
  • “There are lots of bugs in cars,” Gérard Berry, the French researcher behind Esterel, said in a talk. “It’s not like avionics—in avionics it’s taken very seriously. And it’s admitted that software is different from mechanics.” The automotive industry is perhaps among those that haven’t yet realized they are actually in the software business.
  • The same regulatory pressures that have made model-based design and code generation attractive to the aviation industry have been slower to come to car manufacturing. Emmanuel Ledinot, of Dassault Aviation, speculates that there might be economic reasons for the difference, too. Automakers simply can’t afford to increase the price of a component by even a few cents, since it is multiplied so many millionfold; the computers embedded in cars therefore have to be slimmed down to the bare minimum, with little room to run code that hasn’t been hand-tuned to be as lean as possible. “Introducing model-based software development was, I think, for the last decade, too costly for them.”
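    The excerpts above describe, but don't show, what exhaustive checking looks like. Below is a toy Swift sketch of the core idea: enumerate every reachable state of a small model and test an invariant in each one. This is not TLA+ and not code from the article; the ATM model is hypothetical, loosely echoing the article's ATM constraint, and it deliberately omits a sufficient-funds guard so the checker has a bug to find:

        // Toy model checking in Swift: exhaustively explore all reachable
        // states of a small ATM model and check an invariant in each one.
        // Hypothetical model; NOT TLA+ and not from the article.

        struct ATMState: Hashable {
            var balance: Int      // money left in the account
            var dispensed: Int    // total cash handed out so far
        }

        // Next-state relation: from any state the ATM may dispense 20 or 50.
        // Bug on purpose: nothing checks that the balance covers the amount.
        func successors(of s: ATMState) -> [ATMState] {
            [20, 50].map { amount in
                ATMState(balance: s.balance - amount,
                         dispensed: s.dispensed + amount)
            }
        }

        // The invariant we want to hold in every reachable state.
        func invariant(_ s: ATMState) -> Bool {
            s.balance >= 0
        }

        // Exhaustive exploration of the state space (visit order doesn't
        // matter for invariant checking).
        var frontier = [ATMState(balance: 100, dispensed: 0)]
        var seen = Set(frontier)
        while let state = frontier.popLast() {
            guard invariant(state) else {
                print("Invariant violated in state: \(state)")
                break  // a real checker would also print the trace leading here
            }
            for next in successors(of: state) where !seen.contains(next) {
                seen.insert(next)
                frontier.append(next)
            }
        }

    Run as written, the sketch quickly reports a reachable state with a negative balance, the toy analogue of TLA+ showing exactly how a constraint can be violated.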
spiritandfire

Transcript of "Inventing on Principle" : Inside 245-5D - 0 views

  • Doug Engelbart. Doug Engelbart basically invented interactive computing. The concept of putting information on a screen. Navigating through it. Looking at information in different ways. Pointing at things and manipulating them. He came up with all this at a time when real-time interaction with a computer was almost unheard of. Today he is best known as the inventor of the mouse, but what he really invented is this entirely new way of working with knowledge. His explicit goal from the beginning was to enable mankind to solve the world's urgent problems. And his vision, he had this vision of what he called knowledge workers using complex powerful information tools to harness their collective intelligence. And he only got into computers because he had a hunch that these new things called computers could help him realize that vision. Everything that he did was almost single-mindedly driven by pursuing this vision.
spiritandfire

How to set up an iPad for web development - The Verge - 0 views

  • A few months ago, I detailed my process for setting up a Pixelbook to code on. It wasn’t easy or simple or straightforward, but it worked in the end: I had all the power and flexibility of Linux, access to my favorite code editor (VS Code), and, of course, the slick web browsing experience that Chromebooks are known for. Around that same time, I attempted to set up an iPad for coding. I failed miserably. I love using the iPad for writing and other creative work. It’s super portable, it has great battery life and an excellent screen, and the limited multitasking keeps me focused. Unfortunately, it’s very bad for complex tasks and intentionally crippled for software development. But I’m older and wiser now, and after an entire Saturday spent bashing my head against a wall, I’m happy to report that I can use a $799 tablet computer to write software. Will I ever actually use it for this purpose? Maybe! But we’ll get to that.
spiritandfire

In Search of God's Mathematical Perfect Proofs | WIRED - 0 views

  • Paul Erdős, the famously eccentric, peripatetic and prolific 20th-century mathematician, was fond of the idea that God has a celestial volume containing the perfect proof of every mathematical theorem. “This one is from The Book,” he would declare when he wanted to bestow his highest praise on a beautiful proof.
spiritandfire

The Hilarious (and Terrifying?) Ways Algorithms Have Outsmarted Their Creators - 0 views

  • Flying saucers have yet to land—at least, not that we've confirmed—but alien intelligence is already here. As research into AI grows ever more ambitious and complex, these robot brains will challenge the fundamental assumptions of how we humans do things. And, as ever, the only true law of robotics is that computers will always do literally, exactly what you tell them to.
spiritandfire

Developers love trendy new languages, but earn more with functional programming | Ars T... - 0 views

  • Developer Q&A site Stack Overflow performs an annual survey to find out more about the programmer community, and the latest set of results has just been published. JavaScript remains the most widely used programming language among professional developers, making that six years at the top for the lingua franca of Web development. Other Web tech was also in wide use, with HTML at #2 in the ranking, CSS at #3, and PHP at #9. Business-oriented languages showed up as well: SQL at #4, Java at #5, and C# at #8. Shell scripting made a surprising showing at #6 (having not shown up at all in past years, which suggests that the questions have changed year-to-year), Python appeared at #7, and systems programming stalwart C++ rounded out the top 10.
    These aren't, however, the languages that developers necessarily want to use. Only three languages from the most-used top ten were in the most-loved list: Python (#3), JavaScript (#7), and C# (#8). For the third year running, that list was topped by Rust, the new systems programming language developed by Mozilla. Second on the list was Kotlin, which wasn't even in the top 20 last year; this new interest is likely due to Google's decision last year to bless the language as an official development language for Android. TypeScript, Microsoft's better-JavaScript-than-JavaScript, comes in at fourth, with Google's Go language coming in at fifth. Smalltalk, last year's second-most loved, is nowhere to be seen this time around.
    These languages may be well-liked, but it looks as if the big money is elsewhere. Globally, F# and OCaml are the top average earners, and in the US, Erlang, Scala, and OCaml are the ones to aim for. Visual Basic 6, Cobol, and CoffeeScript were the top three most-dreaded, which is news that will surprise nobody who is still maintaining Visual Basic 6 applications thousands of years after they were originally written.
    Stack Overflow also asked devs about one of today's hot-button issues: artificial intelligence. Only 20 percent of devs were worried about AI taking jobs (compared to 41 percent excited by that possibility—no doubt the Visual Basic 6 devs hope that one day computers will be able to do their jobs for them), but a remarkable 28 percent were concerned by AI intelligence surpassing human intelligence, and 29 percent were concerned about algorithms making important decisions more generally. Among developers who actually know what they're talking about, however, the concerns seemed to shift: data scientists and machine-learning specialists were 1.5 times more likely to be concerned about the algorithmic fairness of AI systems than about any singularity.
    Even if AI is evil, most developers don't think it's the fault of the programmers. Fifty-eight percent say that ethics are the responsibility of upper management, 23 percent say the inventor of the unethical idea, and just 20 percent think that they're the responsibility of the developer who actually wrote the code. If the Volkswagen emissions scandal is anything to judge by, the developers may not be completely off the mark; thus far, arrests appear to have been restricted to executives and engineers who designed the emissions test-defeating software, leaving the people who wrote the code unscathed.
spiritandfire

Apple's Swift Programming Language Is Now Top Tier | WIRED - 0 views

  • Apple's programming language Swift is less than four years old, but a new report finds that it's already as popular as its predecessor, Apple's more established Objective-C language. Swift is now tied with Objective-C at number 10 in the rankings conducted by analyst firm RedMonk. It's hardly a surprise that programmers are interested in Apple's language, which can be used to build applications for the iPhone, Apple Watch, Macintosh computers, and even web applications. But the speed at which it jumped in the ranks is astonishing. Swift is the fastest growing language RedMonk has seen since it started compiling these rankings in 2011. Even Go, a programming language that Google released in 2009, hasn't been able to break into the top 10.
    The second fastest grower is Kotlin, which Google now officially supports on Android. It leaped from number 46 in the third quarter of 2017 to number 27 in January.
    RedMonk's rankings don't necessarily reflect whether companies are using these languages for real-world projects, or how many jobs are available for developers who know them. Instead, the firm tries to gauge how interested programmers are in these languages. Popularity among programmers could influence business decisions such as what languages to use for new projects.
    RedMonk compiles its rankings by looking at the number of questions people ask about each language on the question-and-answer site Stack Overflow, as well as the number of projects using particular languages on the code hosting and collaboration site GitHub. The methodology was originally created by data scientists Drew Conway and John Myles White in 2010.
    Apple first released Swift in 2014. The idea was not just to make it easier for new developers to learn to program, but to simplify life for experienced coders as well. Many languages over the years have aimed to smooth the programming process by offering syntax that's easier to read or building in features that programmers otherwise commonly write from scratch. But these sorts of languages often produced applications that ran more slowly than ones written in more difficult programming languages. Swift aimed to combine programmer-friendly features with performance.
    Kotlin, which was created by the company JetBrains and officially released in 2016, has similar goals. What sets Kotlin apart is that it's compatible with the widely used Java programming language, which means programmers can include Java code in their Kotlin programs, or even write new features for Java applications using Kotlin. Kotlin had already received widespread attention from Java developers, but once Google announced full support for the language on Android, interest skyrocketed. RedMonk analyst Stephen O'Grady pointed out in the report that Kotlin’s Java roots could help it find its way into more places than Swift, such as large enterprise applications.
    Apart from the big gains for Swift and Kotlin, the RedMonk rankings changed fairly little this quarter. JavaScript and Java remained the two most popular languages, closely followed by Python, PHP, and C#. As O'Grady notes in the report, it’s becoming harder and harder for new languages to break into the top 20. That makes the rise of Swift and Kotlin all the more impressive.
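    The ranking method the excerpt describes (a Stack Overflow signal crossed with a GitHub signal) can be sketched very roughly in Swift. The counts below are made up, and this average-of-two-ranks scoring is only an approximation of RedMonk's published methodology:

        // Rough sketch of a two-signal language ranking. Hypothetical data;
        // a simplification of RedMonk's actual method.
        struct LanguageSignal {
            let name: String
            let stackOverflowQuestions: Int  // hypothetical count
            let gitHubProjects: Int          // hypothetical count
        }

        let signals = [
            LanguageSignal(name: "JavaScript", stackOverflowQuestions: 1_700_000, gitHubProjects: 2_300_000),
            LanguageSignal(name: "Swift",      stackOverflowQuestions:   250_000, gitHubProjects:   300_000),
            LanguageSignal(name: "Kotlin",     stackOverflowQuestions:    40_000, gitHubProjects:    90_000)
        ]

        // Rank position of each language on one metric (1 = highest count).
        func ranks(by metric: (LanguageSignal) -> Int) -> [String: Int] {
            let sorted = signals.sorted { metric($0) > metric($1) }
            return Dictionary(uniqueKeysWithValues: sorted.enumerated().map { ($1.name, $0 + 1) })
        }

        let soRank = ranks(by: { $0.stackOverflowQuestions })
        let ghRank = ranks(by: { $0.gitHubProjects })

        // Combined score: average of the two ranks; lower is better.
        let combined = signals
            .map { ($0.name, Double(soRank[$0.name]! + ghRank[$0.name]!) / 2) }
            .sorted { $0.1 < $1.1 }

        for (name, score) in combined {
            print("\(name): combined rank score \(score)")
        }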
spiritandfire

Ubisoft's AI in Far Cry 5 and Watch Dogs could change gaming | WIRED UK - 0 views

  • AI has a new task: helping to keep the bugs out of video games.
    At the recent Ubisoft Developer Conference in Montreal, the French gaming company unveiled a new AI assistant for its developers. Dubbed Commit Assistant, the goal of the AI system is to catch bugs before they're ever committed into code, saving developers time and reducing the number of flaws that make it into a game before release. "I think like many good ideas, it's like 'how come we didn't think about that before?'," says Yves Jacquier, who heads up La Forge, Ubisoft's R&D division in Montreal. His department partners with local universities including McGill and Concordia to collaborate on research intended to advance the field of artificial intelligence as a whole, not just within the industry.
    La Forge fed Commit Assistant with roughly ten years' worth of code from across Ubisoft's software library, allowing it to learn where mistakes have historically been made, reference any corrections that were applied, and predict when a coder may be about to write a similar bug. "It's all about comparing the lines of code we've created in the past, the bugs that were created in them, and the bugs that were corrected, and finding a way to make links [between them] to provide us with a super-AI for programmers," explains Jacquier.
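    As a toy illustration of the idea Jacquier describes (comparing new lines against lines that were buggy in the past), here is a minimal Swift heuristic. It is emphatically not Ubisoft's Commit Assistant, which is a machine-learning system trained on ten years of real commits; the corpus, threshold, and token-overlap similarity measure below are all invented:

        // Toy "past-bug similarity" warning, NOT Ubisoft's Commit Assistant.
        import Foundation

        // Lines that were historically fixed in bug-fix commits (hypothetical).
        let knownBuggyLines = [
            "if player.health = 0 { respawn() }",              // assignment instead of ==
            "for i in 0...items.count { process(items[i]) }"   // off-by-one: should be ..<
        ]

        // Split a line into its alphanumeric tokens.
        func tokens(_ line: String) -> Set<String> {
            Set(line.components(separatedBy: CharacterSet.alphanumerics.inverted)
                    .filter { !$0.isEmpty })
        }

        // Jaccard similarity between the token sets of two lines, in [0, 1].
        func similarity(_ a: String, _ b: String) -> Double {
            let (ta, tb) = (tokens(a), tokens(b))
            let union = ta.union(tb).count
            return union == 0 ? 0 : Double(ta.intersection(tb).count) / Double(union)
        }

        // Warn if a line being committed looks like a past bug.
        func review(_ newLine: String, threshold: Double = 0.5) {
            for buggy in knownBuggyLines where similarity(newLine, buggy) >= threshold {
                print("warning: resembles a past bug: \(buggy)")
            }
        }

        review("for j in 0...things.count { process(things[j]) }")  // triggers a warning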
spiritandfire

The tyranny of algorithms is part of our lives: soon they could rate everything we do |... - 0 views

  • For the past couple of years a big story about the future of China has been the focus of both fascination and horror. It is all about what the authorities in Beijing call “social credit”, and the kind of surveillance that is now within governments’ grasp. The official rhetoric is poetic. According to the documents, what is being developed will “allow the trustworthy to roam everywhere under heaven while making it hard for the discredited to take a single step”. As China moves into the newly solidified President Xi Jinping era, the basic plan is intended to be in place by 2020. Some of it will apply to businesses and officials, so as to address corruption and tackle such high-profile issues as poor food hygiene. But other elements will be focused on ordinary individuals, so that transgressions such as dodging transport fares and not caring sufficiently for your parents will mean penalties, while living the life of a good citizen will bring benefits and opportunities.
    Online behaviour will inevitably be a big part of what is monitored, and algorithms will be key to everything, though there remain doubts about whether something so ambitious will ever come to full fruition. One of the scheme’s basic aims is to use a vast amount of data to create individual ratings, which will decide people’s access – or lack of it – to everything from travel to jobs. The Chinese notion of credit – or xinyong – has a cultural meaning that relates to moral ideas of honesty and trust. There are up to 30 local social credit pilots run by local authorities, in huge cities such as Shanghai and Hangzhou and much smaller towns. Meanwhile, eight ostensibly private companies have been trialling a different set of rating systems, which seem to chime with the government’s controlling objectives.