
Home/ Memedia/ Group items tagged it


arden dzx

Bill Gates: A New Kind of Capitalism for the 21st Century - 0 views

  • For the system to be sustainable, we have to use profit as an incentive. But when the people a business serves are very poor, profit is unlikely to materialize, and then we need another incentive: recognition. Recognition raises a company's visibility; visibility attracts customers and, more importantly, draws talented people to come on board. That visibility lets the market reward good behavior. Where a business cannot make a profit in the market, recognition can serve as a substitute; where market profit is achievable, recognition is an added incentive. Our challenge is to design a new system in which market incentives such as profit and recognition do their work, so that businesses lean more toward serving the poor. I call this idea creative capitalism. Through this approach, governments, businesses, and non-profit organizations can cooperate to let market forces operate over a wider range, so that more people can earn a profit or gain recognition, and global inequality is ultimately reduced.
  •  
    Only because these views come from Bill Gates's mouth will people even consider listening to them. There is something ironic about a tycoon widely regarded as greedy and monopolistic talking about philanthropy, but extremes tend to swing back: even if nothing ever gets built into actual mechanisms, a current of thought like this can at least gradually become a counterbalancing force.
Jean Chen

IT经理世界: Focus Media's Internet Journey (2)_Internet_Tech Times_Sina.com - 0 views

  • "The cooperation with Dentsu is a choice that lets both sides expand their market share." Jiang Nanchun says the creation of 电众数码 is the result of a media company and an advertising agency boosting each other. In fact, the convergence of strong digital media and strong advertising agencies is adding new variables to the future of online advertising.
Jean Chen

IT经理世界: Focus Media's Internet Journey_Internet_Tech Times_Sina.com - 0 views

  • "What keeps you pushing forward?" "Fear!" That is how Focus Media CEO Jiang Nanchun answered the reporter. A literary young man with a special fondness for Heidegger and Milan Kundera, he explained his relentless drive with the German philosopher Heidegger's famous line, "Why do you live? Because you will eventually die." And the direct result of Jiang Nanchun's nonstop pace has been one sweeping capital-markets move after another.
isaac Mao

CN Reviews Interview with Livid - 0 views

  • Tell us more about why it was blocked? Livid: A journalist from Life Weekly (三联生活周刊, a magazine founded in the 1920s) interviewed me in Dec. 2006. A few days after the story was published in the first issue of 2007, on Jan. 11 2007, the net wires connected to the servers were ordered to be unplugged (Livid’s notes in Chinese here) and all the data became inaccessible.
shi zhao

Rambling: On Why Hainei's Groups Haven't Caught On - 对牛乱弹琴 | Playin' with IT - DonewsBlog - 0 views

  • Douban's groups were able to take off thanks to the site's positioning around cultural products through which people can display their personality, and the diversity of users that positioning brings. The two preconditions for long-tail value are abundance and accessibility.
Jean Chen

Vertical Integration | It Talks - 魏武挥's blog - 0 views

  • If we look at the map of the entire internet, we find that the standalone PC is really a private space, much like the "home" of real life. Microsoft has essentially taken control of that private space. If these machines had never been connected to one another, it would be a world of small, isolated communities whose members "grow old and die without ever visiting each other," and "society" would have no power at all.
  • I have never thought of Google as an internet company; I prefer to see it as an advertising agency that holds media of its own. The Google AdSense and AdWords model is not built on Google.com as a single online media property, but on countless web pages large and small, even including some big portal-style sites. On the advertiser side it runs a buyer's auction, where the highest bidder gets the ad slot; on the media side it runs pay-for-performance, where an ad that draws no clicks runs for free. It is a model of genius, and its genius, as we can see, lies in enlarging the agency's share of the gains and entrenching its dominant position while squeezing the media's share as far as possible. Pay-for-performance essentially reduces the use value of "display" to zero.
isaac Mao

[Focus] 河蟹上岸 Attempts Live Coverage of Kosovo's Independence | 与G共舞·IT - 0 views

  • Starting at 10:30 last night, 河蟹上岸 began live coverage of Kosovo's declaration of independence, its first attempt at covering a major event live. We are trying to get around the constraints of officially controlled media and relay reports from foreign media the moment they appear. From last night until now, several of those reports remain Simplified-Chinese exclusives.
Jean Chen

IT经理世界: Focus Media's Internet Journey (3)_Internet_Tech Times_Sina.com - 0 views

  • "China's internet is a severely undervalued industry. In my view, China's online advertising market would have to reach at least 30 to 40 billion yuan before it could be called reasonable." Faced with the series of survey figures above, Jiang Nanchun made a striking claim: "China has 40,000 advertisers, but only a little over 3,500 of them are internet advertisers; that gives you a sense of the potential." In Jiang's view, the joint venture with Dentsu on the one hand integrates resources and raises competitiveness, and on the other reduces cut-throat competition between companies in the industry. These are also the two main reasons behind nearly every acquisition he makes.
feng37

The DigiActive Guide to Twitter for Activism | DigiActive.org - 0 views

  • We are very excited to announce the release of The DigiActive Guide to Twitter for Activism. Following the recent protests in Moldova, the value of Twitter as a tool for digital activism is more prominent than ever. Yet in addition to bringing greater awareness to that tool, the hype surrounding Moldova revealed misunderstanding of the value of Twitter for activism and, even though the realists responded strongly, there was not a stand-alone resource which clearly defined how Twitter could be used by activists. We hope this guide will fill that void.
feng37

…My heart's in Accra » Studying Twitter and the Moldovan protests - 0 views

  • At some point on Friday, we hit a peak tweet density - 410 of 100,000 tweets included the #pman tag. Had I been scraping results by iterating 100,000 tweets at a time, I would have had four pages of new results - my script is only looking at the first page, so I’d be dropping results. If I ran the script again, I’d try to figure out the maximum tweet density by looking for the moment where the meme was most hyped, try to do a back of the envelope calculation as to an optimum step size and then halve it - that would probably have me using 20,000 steps for this set.
  • Density of tweets charted against blocks of 100,000 tweets
  • http://search.twitter.com/search?max_id=1511783811&page=2&q=%23pman&rpp=100
    Picking apart the URL:
    max_id=1511783811 - Only return results up to tweet #1511783811 in the database
    page=2 - Hand over the second page of results
    q=%23pman - The query is for the string #pman, encoded to escape the hash
    rpp=100 - Give the user 100 results per page
    While you can manipulate these variables to your heart’s content, you can’t get more than 100 results per page. And if you retrieve 100 results per page, your results will stop at around 15 pages - the engine, by default, wants to give you only 1500 results on any search. This makes sense from a user perspective - it’s pretty rare that you actually want to read the last 1500 posts that mention the fail whale - but it’s a pain in the ass for researchers.
  • What you need to do is figure out the approximate tweet ID number that was current when the phenomenon you’re studying was taking place. If you’re a regular twitterer, go to your personal timeline, find a tweet you posted on April 7th, and click on the date to get the ID of the tweet. In the early morning (GMT) of the 7th, the ID for a new tweet was roughly 1468000000 - the URL http://search.twitter.com/search?max_id=1468000000&q=%23pman&rpp=100 retrieves the first four tweets to use the tag #pman, including our Ur-tweet:
    evisoft: neata, propun sa utilizam tag-ul #pman pentru mesajele din piata marii adunari nationale
    My Romanian’s a little rusty, but Vitalie Eşanu appears to be suggesting we use the tag #pman - short for Piata Marii Adunari Nationale, the main square in Chisinau where the protests were slated to begin - in reference to posts about the protests. His post is timestamped 4:40am GMT, suggesting that there were at least some discussions about promoting the protests on Twitter before protesters took to the streets.
  • Now the key is to grab URLs from Twitter, increasing the max_id variable in steps so that we’re getting all results from the start tweet ID to the current tweet ID. My perl script to do this steps by 10,000 results at a time, scraping the results I get from Twitter (using the Atom feed, not the HTML) and dumping novel results into a database. This seems like a pretty fine-toothed comb to use… but if you want to be comprehensive, it’s important to figure out what maximum “tweet density” is before running your code.
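    The harvesting recipe in the annotations above (step max_id through the ID range, pull the Atom feed for each capped search, keep only tweets not seen before) is easy to lose in the prose. Below is a minimal Python sketch of that loop; it is not Zuckerman's original Perl script, and the .atom endpoint path, the start/end IDs, the step size, and the storage handling are assumptions drawn from the post's examples. The search.twitter.com API shown here was retired long ago, so this is illustrative only.

```python
# Minimal sketch (assumed details, not the post's Perl script) of the harvesting
# loop described above: walk max_id through the ID range in fixed steps, fetch the
# Atom feed for each capped search, and keep only tweet IDs not seen before.
import time
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"
START_ID = 1_468_000_000   # roughly current in the early morning (GMT) of April 7
END_ID   = 1_511_783_811   # the later ID used in the post's example URL
STEP     = 20_000          # half the back-of-envelope optimum at peak tweet density
QUERY    = "#pman"

def fetch_page(max_id: int, page: int = 1) -> ET.Element:
    """Fetch one Atom page of search results capped at max_id (assumed .atom path)."""
    params = urllib.parse.urlencode(
        {"max_id": max_id, "page": page, "q": QUERY, "rpp": 100}
    )
    url = f"http://search.twitter.com/search.atom?{params}"
    with urllib.request.urlopen(url) as resp:
        return ET.fromstring(resp.read())

seen = set()
for max_id in range(START_ID, END_ID + STEP, STEP):
    feed = fetch_page(max_id)                 # like the post's script, read page 1 only
    for entry in feed.findall(ATOM + "entry"):
        tweet_id = entry.findtext(ATOM + "id")
        if tweet_id not in seen:
            seen.add(tweet_id)                # a real script would also store author,
                                              # text and timestamp in a database here
    time.sleep(1)                             # stay polite with the old rate limits

print(f"collected {len(seen)} distinct {QUERY} tweets")
```

    At the post's peak density of 410 #pman tweets per 100,000 tweet IDs, a 20,000-ID step should surface at most roughly 82 matches, comfortably inside the single 100-result page the loop reads.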
feng37

Brain Power - Brain Researchers Open Door to Editing Memory - Series - NYTimes.com - 0 views

  • Suppose scientists could erase certain memories by tinkering with a single substance in the brain. Could make you forget a chronic fear, a traumatic loss, even a bad habit.
  • Researchers in Brooklyn have recently accomplished comparable feats, with a single dose of an experimental drug delivered to areas of the brain critical for holding specific types of memory, like emotional associations, spatial knowledge or motor skills.
  • The drug blocks the activity of a substance that the brain apparently needs to retain much of its learned information.
  •  
    One injection, and a person's memories can be thoroughly "harmonized."
isaac Mao

Google Update No Longer Runs in Background - Google update - 0 views

  • Good news for users of Google Chrome, Picasa, and other Google desktop apps on Windows systems: Google Update, previously a background new-version checker that was mighty hard to kill off, now runs as a scheduled task, either when your system is idle or every so many hours. Better still, if you no longer use Google apps at all, it uninstalls itself. [via Google Operating System]
  •  
    This is another side of Google's "don't be evil."
yuancheng

Propaganda | Berkeley Institute of Design - 29 views

  • It acknowledges that design in the era of ubiquitous technologies means not only technical innovation, but deep understanding of behavior
  • This compels a new approach to design that is partly technical, but also deeply social and humanist.
  • Understanding activity means understanding values, needs, lifestyle, mythologies, aesthetics, social and cultural norms, and individual and social psychology
  • Context-aware and ambient systems.
  • Location-based services (LBSes) and LB collaborative systems.
  • The Master’s degree will comprise a core program of six courses and two or more optional courses.
  • Design grapples with the impossible complexity of everyday human action, and shines light on a path that can lead to better quality of life.