Cohen outlined a vision of 'Net-enabled scholarly publishing that I can only think to call the aggregation model: editorial committees scanning the 'Net to find the most interesting scholarly content in a given field or discipline, and highlighting it through websites and e-mail blasts that hearken back to the early days when weblogs were literally just collections of links with one- or two-sentence summaries attached. (An example, edited by Cohen and some of his associates: Digital Humanities Now.) Some of that work consists of traditional books and articles, but much of it consists of blog posts, online debates, etc.

This model gives us scholarly work from the bottom up, instead of generating it by tossing a piece into the random crapshoot of putatively blind peer review and crossing your fingers to see what happens. It also gives us scholarly work that can be certified as such by the collective deliberation of the community, which "votes" for pieces and ideas by reading them, recirculating them, linking to them, and registering other signs of interest and approval that can be easily tracked with traffic-tracing tools.

And then, on top of that editorial aggregation, sits an open-access journal that curates the best of those linked items into published pieces, perhaps with some revisions and peer review/commentary. (Cohen made a great point that this kind of aggregation shouldn't be fully automated, because automated tools reward "loudmouths" and popular voices that just get retweeted a lot; human editors can do a lot to surface novel insights and new voices.)