Sensorica Knowledge / Group items tagged: decisions


Tiberius Brastaviceanu

Decision making - Wikipedia, the free encyclopedia - 1 views

  • mental processes
  • examine individual decisions in the context of a set of needs, preferences an individual has and values they seek.
  • psychological perspective
  • ...59 more annotations...
  • cognitive perspective
  • continuous process integrated in the interaction with the environment
  • normative perspective
  • logic of decision making
  • and rationality
  • decision making is a reasoning or emotional process which can be rational or irrational, can be based on explicit assumptions or tacit assumptions.
  • Logical decision making
  • making informed decisions
  • recognition primed decision approach
  • without weighing alternatives
  • integrated uncertainty into the decision making process
  • A major part of decision making involves the analysis of a finite set of alternatives described in terms of some evaluative criteria.
  • multi-criteria decision analysis (MCDA), also known as multi-criteria decision making (MCDM); a weighted-scoring sketch follows this item's notes
  • differentiate between problem analysis and decision making
  • Problem analysis must be done first, then the information gathered in that process may be used towards decision making.[4]
  • decision making techniques people use in everyday life
  • Pros and Cons
  • Simple Prioritization:
  • Decision-Making Stages
  • Orientation stage
  • Conflict stage
  • Emergence stage
  • Reinforcement stage
  • Decision-Making Steps
  • Outline your goal and outcome
  • Gather data
  • Brainstorm to develop alternatives
  • List pros and cons of each alternative
  • Make the decision
  • take action
  • Learn from, and reflect on the decision making
  • Cognitive and personal biases
  • Selective search for evidence
  • Premature termination of search for evidence
  • Inertia
  • Selective perception
  • Wishful thinking or optimism bias
  • Choice-supportive bias
  • Recency
  • Repetition bias
  • Anchoring and adjustment
  • Group think – Peer pressure
  • Source credibility bias
  • Incremental decision making and escalating commitment
  • Attribution asymmetry
  • Role fulfillment
  • Underestimating uncertainty and the illusion of control
  • a person's decision making process depends to a significant degree on their cognitive style
  • thinking and feeling; extroversion and introversion; judgment and perception; and sensing and intuition.
  • someone who scored near the thinking, extroversion, sensing, and judgment
  • would tend to have a logical, analytical, objective, critical, and empirical decision making style.
  • national or cross-cultural differences
  • distinctive national style of decision making
  • human decision-making is limited by available information, available time, and the information-processing ability of the mind.
  • two cognitive styles: maximizers
  • satisficers
    • Tiberius Brastaviceanu
       
      I think we are at the CONFLICT stage at this moment
    • Tiberius Brastaviceanu
       
      These are the steps we need to go through to make a decision on the 4 items proposed by Ivan
    • Tiberius Brastaviceanu
       
      This is also interesting, where are you on these 4 dimensions? 
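A minimal sketch of one of the simplest MCDA-style techniques referred to in the notes above: a weighted-sum score over a finite set of alternatives and criteria. The criteria, weights, and scores below are invented for illustration; real MCDA methods go well beyond this.

```python
# Weighted-sum scoring over a finite set of alternatives and criteria.
# All weights and scores below are invented for illustration.

criteria_weights = {"cost": 0.40, "speed": 0.35, "risk": 0.25}  # sums to 1.0

# Each alternative is scored 0-10 against every criterion.
alternatives = {
    "option_a": {"cost": 7, "speed": 5, "risk": 8},
    "option_b": {"cost": 4, "speed": 9, "risk": 6},
    "option_c": {"cost": 8, "speed": 6, "risk": 4},
}

def weighted_score(scores, weights):
    """Sum of weight * score across all criteria for one alternative."""
    return sum(weights[c] * scores[c] for c in weights)

ranked = sorted(alternatives,
                key=lambda a: weighted_score(alternatives[a], criteria_weights),
                reverse=True)
for name in ranked:
    print(name, round(weighted_score(alternatives[name], criteria_weights), 2))
```

The ranking this produces is only as good as the weights, which is where the "Pros and Cons" and "Simple Prioritization" habits listed above feed in.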
Tiberius Brastaviceanu

GitHub Has Big Dreams for Open-Source Software, and More - NYTimes.com - 0 views

  • GitHub has no managers among its 140 employees, for example. “Everyone has management interests,” he said. “People can work on things that are interesting to them. Companies should exist to optimize happiness, not money. Profits follow.” He does, however, retain his own title and decides things like salaries.
  • Another member of GitHub has posted a talk that stresses how companies flourish when people want to work on certain things, not because they are told to.
  • Asana bases work on a series of to-do lists that people assign one another. Inside Asana there are no formal titles, though like GitHub there are bosses at the top who make final decisions.
  • ...8 more annotations...
  • For all the happiness and sharing, real money is involved here. In July GitHub received $100 million from the venture capital firm Andreessen Horowitz. This early in most software companies’ lives, $20 million would be a fortune.
  • GitHub’s popularity has also made it an important way for companies to recruit engineers, because some of the best people in the business are showing their work or dissecting the work of others inside some of the public pull requests.
  • Mr. Preston-Werner thinks the way open source requires a high degree of trust and collaboration among relative equals (plus a few high-level managers who define the scope of a job and make final decisions) can be extended more broadly, even into government.
  • “For now this is about code, but we can make the burden of decision-making into an opportunity,” he said. “It would be useful if you could capture the process of decision-making, and see who suggested the decisions that created a law or a bill.”
  • Can this really be extended across a large, complex organization, however?
  • As complex as an open-source project may be, it is also based on a single, well-defined outcome, and an engineering task that is generally free of concepts like fairness and justice, about which people can debate endlessly.
  • Google once prided itself on few managers and fast action, but has found that getting big can also involve lots more meetings.
  • Still, these fast-rising successes may be on to something more than simply universalizing the means of their own good fortune. An early guru of the Information Age, Peter Drucker, wrote often in the latter part of his career of the need for managers to define tasks, and for workers to seek fulfillment before profits.
Tiberius Brastaviceanu

Access control - Wikipedia, the free encyclopedia - 0 views

  • The act of accessing may mean consuming, entering, or using.
  • Permission to access a resource is called authorization.
  • Locks and login credentials are two analogous mechanisms of access control.
  • ...26 more annotations...
  • Geographical access control may be enforced by personnel (e.g., border guard, bouncer, ticket checker)
  • An alternative of access control in the strict sense (physically controlling access itself) is a system of checking authorized presence, see e.g. Ticket controller (transportation). A variant is exit control, e.g. of a shop (checkout) or a country
  • access control refers to the practice of restricting entrance to a property, a building, or a room to authorized persons
  • can be achieved by a human (a guard, bouncer, or receptionist), through mechanical means such as locks and keys, or through technological means such as access control systems like the mantrap.
  • Physical access control is a matter of who, where, and when
  • Historically, this was partially accomplished through keys and locks. When a door is locked, only someone with a key can enter through the door, depending on how the lock is configured. Mechanical locks and keys do not allow restriction of the key holder to specific times or dates. Mechanical locks and keys do not provide records of the key used on any specific door, and the keys can be easily copied or transferred to an unauthorized person. When a mechanical key is lost or the key holder is no longer authorized to use the protected area, the locks must be re-keyed.[citation needed] Electronic access control uses computers to solve the limitations of mechanical locks and keys. A wide range of credentials can be used to replace mechanical keys. The electronic access control system grants access based on the credential presented. When access is granted, the door is unlocked for a predetermined time and the transaction is recorded. When access is refused, the door remains locked and the attempted access is recorded. The system will also monitor the door and alarm if the door is forced open or held open too long after being unlocked
  • Credential
  • Access control system operation
  • The above description illustrates a single factor transaction. Credentials can be passed around, thus subverting the access control list. For example, Alice has access rights to the server room, but Bob does not. Alice either gives Bob her credential, or Bob takes it; he now has access to the server room. To prevent this, two-factor authentication can be used. In a two factor transaction, the presented credential and a second factor are needed for access to be granted; another factor can be a PIN, a second credential, operator intervention, or a biometric input
  • There are three types (factors) of authenticating information:[2] something the user knows, e.g. a password, pass-phrase or PIN; something the user has, such as a smart card or a key fob; something the user is, such as a fingerprint, verified by biometric measurement
  • Passwords are a common means of verifying a user's identity before access is given to information systems. In addition, a fourth factor of authentication is now recognized: someone you know, whereby another person who knows you can provide a human element of authentication in situations where systems have been set up to allow for such scenarios
  • When a credential is presented to a reader, the reader sends the credential’s information, usually a number, to a control panel, a highly reliable processor. The control panel compares the credential's number to an access control list, grants or denies the presented request, and sends a transaction log to a database. When access is denied based on the access control list, the door remains locked. (This lookup flow is sketched in code after this item's notes.)
  • A credential is a physical/tangible object, a piece of knowledge, or a facet of a person's physical being, that enables an individual access to a given physical facility or computer-based information system. Typically, credentials can be something a person knows (such as a number or PIN), something they have (such as an access badge), something they are (such as a biometric feature) or some combination of these items. This is known as multi-factor authentication. The typical credential is an access card or key-fob, and newer software can also turn users' smartphones into access devices.
  • An access control point, which can be a door, turnstile, parking gate, elevator, or other physical barrier, where granting access can be electronically controlled. Typically, the access point is a door. An electronic access control door can contain several elements. At its most basic, there is a stand-alone electric lock. The lock is unlocked by an operator with a switch. To automate this, operator intervention is replaced by a reader. The reader could be a keypad where a code is entered, it could be a card reader, or it could be a biometric reader. Readers do not usually make an access decision, but send a card number to an access control panel that verifies the number against an access list
  • monitor the door position
  • Generally only entry is controlled, and exit is uncontrolled. In cases where exit is also controlled, a second reader is used on the opposite side of the door. In cases where exit is not controlled, free exit, a device called a request-to-exit (REX) is used. Request-to-exit devices can be a push-button or a motion detector. When the button is pushed, or the motion detector detects motion at the door, the door alarm is temporarily ignored while the door is opened. Exiting a door without having to electrically unlock the door is called mechanical free egress. This is an important safety feature. In cases where the lock must be electrically unlocked on exit, the request-to-exit device also unlocks the door
  • Access control topology
  • Access control decisions are made by comparing the credential to an access control list. This look-up can be done by a host or server, by an access control panel, or by a reader. The development of access control systems has seen a steady push of the look-up out from a central host to the edge of the system, or the reader. The predominant topology circa 2009 is hub and spoke with a control panel as the hub, and the readers as the spokes. The look-up and control functions are performed by the control panel. The spokes communicate through a serial connection; usually RS-485. Some manufacturers are pushing the decision making to the edge by placing a controller at the door. The controllers are IP enabled, and connect to a host and database using standard networks
  • Access control readers may be classified by the functions they are able to perform
  • and forward it to a control panel.
  • Basic (non-intelligent) readers: simply read
  • Semi-intelligent readers: have all inputs and outputs necessary to control door hardware (lock, door contact, exit button), but do not make any access decisions. When a user presents a card or enters a PIN, the reader sends information to the main controller, and waits for its response. If the connection to the main controller is interrupted, such readers stop working, or function in a degraded mode. Usually semi-intelligent readers are connected to a control panel via an RS-485 bus.
  • Intelligent readers: have all inputs and outputs necessary to control door hardware; they also have memory and processing power necessary to make access decisions independently. Like semi-intelligent readers, they are connected to a control panel via an RS-485 bus. The control panel sends configuration updates, and retrieves events from the readers.
  • Systems with IP readers usually do not have traditional control panels, and readers communicate directly to a PC that acts as a host
  • a built in webservice to make it user friendly
  • Some readers may have additional features such as an LCD and function buttons for data collection purposes (i.e. clock-in/clock-out events for attendance reports), camera/speaker/microphone for intercom, and smart card read/write support
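A rough Python sketch of the decision flow the notes above describe: a reader forwards a credential number to a control panel, which checks it against an access control list, optionally demands a second factor (a PIN), and logs every attempt. All credential numbers, PINs, and door names are invented for illustration; real panels add door monitoring, timed unlocks, and alarm handling.

```python
# Sketch of the flow described above: reader -> control panel -> access control
# list lookup -> grant/deny plus transaction log, with an optional second factor.
# Credential numbers, PINs, and door names are invented for illustration.

import hashlib
from datetime import datetime
from typing import Optional

ACCESS_CONTROL_LIST = {
    # credential number -> (doors it may open, SHA-256 of its PIN, if any)
    "1001": ({"lab", "server_room"}, hashlib.sha256(b"4321").hexdigest()),
    "1002": ({"lab"}, None),  # single-factor credential
}
TWO_FACTOR_DOORS = {"server_room"}   # doors that also require a PIN
transaction_log = []                 # every attempt is recorded, granted or not

def request_access(credential: str, door: str, pin: Optional[str] = None) -> bool:
    entry = ACCESS_CONTROL_LIST.get(credential)
    granted = entry is not None and door in entry[0]
    if granted and door in TWO_FACTOR_DOORS:
        # Two-factor transaction: the credential alone is not enough.
        pin_hash = hashlib.sha256(pin.encode()).hexdigest() if pin else None
        granted = pin_hash is not None and pin_hash == entry[1]
    transaction_log.append((datetime.now(), credential, door, granted))
    return granted  # True would unlock the door for a predetermined time

print(request_access("1001", "server_room", pin="4321"))  # True
print(request_access("1002", "server_room"))              # False: not authorized for that door
```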
Tiberius Brastaviceanu

Decision Quality - 0 views

Kurt Laitner

What do we need corporations for and how does Valve's management structure fit into tod... - 0 views

  • Valve’s management model; one in which there are no bosses, no delegation, no commands, no attempt by anyone to tell someone what to do
  • Every social order, including that of ants and bees, must allocate its scarce resources between different productive activities and processes, as well as establish patterns of distribution among individuals and groups of output collectively produced.
  • the allocation of resources, as well as the distribution of the produce, is based on a decentralised mechanism functioning by means of price signals:
  • ...18 more annotations...
  • Interestingly, however, there is one last bastion of economic activity that proved remarkably resistant to the triumph of the market: firms, companies and, later, corporations. Think about it: market-societies, or capitalism, are synonymous with firms, companies, corporations. And yet, quite paradoxically, firms can be thought of as market-free zones. Within their realm, firms (like societies) allocate scarce resources (between different productive activities and processes). Nevertheless they do so by means of some non-price, more often than not hierarchical, mechanism!
  • they are the last remaining vestiges of pre-capitalist organisation within… capitalism
  • The miracle of the market, according to Hayek, was that it managed to signal to each what activity is best for herself and for society as a whole without first aggregating all the disparate and local pieces of knowledge that lived in the minds and subconscious of each consumer, each designer, each producer. How does this signalling happen? Hayek’s answer (borrowed from Smith) was devastatingly simple: through the movement of prices
  • The idea of spontaneous order comes from the Scottish Enlightenment, and in particular David Hume who, famously, argued against Thomas Hobbes’ assumption that, without some Leviathan ruling over us (keeping us “all in awe”), we would end up in a hideous State of Nature in which life would be “nasty, brutish and short”
  • Hume’s counter-argument was that, in the absence of a system of centralised command, conventions emerge that minimise conflict and organise social activities (including production) in a manner that is most conducive to the Good Life
  • Hayek’s argument was predicated upon the premise that knowledge is always ‘local’ and all attempts to aggregate it are bound to fail. The world, in his eyes, is too complex for its essence to be distilled in some central node; e.g. the state.
  • The idea here is that, through this ever-evolving process, people’s capacities, talents and ideas are given the best chance possible to develop and produce synergies that promote the Common Good. It is as if an invisible hand guides Valve’s individual members to decisions that both unleash each person’s potential and serve the company’s collective interest (which does not necessarily coincide with profit maximisation).
  • Valve differs in that it insists that its employees allocate 100% of their time on projects of their choosing
  • In contrast, Smith and Hayek concentrate their analysis on a single passion: the passion for profit-making
  • Hume also believed in a variety of signals, as opposed to Hayek’s exclusive reliance on price signalling
  • One which, instead of price signals, is based on the signals Valve employees emit to one another by selecting how to allocate their labour time, a decision that is bound up with where to wheel their tables to (i.e. whom to work with and on what)
  • He pointed out simply and convincingly that the cost of subcontracting a good or service, through some market, may be much larger than the cost of producing that good or service internally. He attributed this difference to transaction costs and explained that they were due to the costs of bargaining (with contractors), of enforcing incomplete contracts (whose incompleteness is due to the fact that some activities and qualities cannot be fully described in a written contract), of imperfect monitoring and asymmetrically distributed information, of keeping trade secrets… secret, etc. In short, contractual obligations can never be perfectly stipulated or enforced, especially when information is scarce and unequally distributed, and this gives rise to transaction costs which can become debilitating unless joint production takes place within the hierarchically structured firm. Optimal corporation size corresponds, in Coase’s scheme of things, to a ‘point’ where the net marginal cost of contracting out a service or good (including transaction costs) tends to zero (a rough formalisation follows this item's notes)
  • As Coase et al explained in the previous section, the whole point about a corporation is that its internal organisation cannot turn on price signals (for if it could, it would not exist as a corporation but would, instead, contract out all the goods and services internally produced)
  • Each employee chooses (a) her partners (or team with which she wants to work) and (b) how much time she wants to devote to various competing projects. In making this decision, each Valve employee takes into account not only the attractiveness of projects and teams competing for their time but, also, the decisions of others.
  • Hume thought that humans are prone to all sorts of incommensurable passions (e.g. the passion for a video game, the passion for chocolate, the passion for social justice) the pursuit of which leads to many different types of conventions that, eventually, make up our jointly produced spontaneous order
  • Valve is, at least in one way, more radical than a traditional co-operative firm. Co-ops are companies whose ownership is shared equally among its members. Nonetheless, co-ops are usually hierarchical organisations. Democratic perhaps, but hierarchical nonetheless. Managers may be selected through some democratic or consultative process involving members but, once selected, they delegate and command their ‘underlings’ in a manner not at all dissimilar to a standard corporation. At Valve, by contrast, each person manages herself while teams operate on the basis of voluntarism, with collective activities regulated and coordinated spontaneously via the operations of the time allocation-based spontaneous order mechanism described above.
  • In contrast, co-ops and Valve feature peer-based systems for determining the distribution of a firm’s surplus among employees.
  • There is one important aspect of Valve that I did not focus on: the link between its horizontal management structure and its ‘vertical’ ownership structure. Valve is a private company owned mostly by few individuals. In that sense, it is an enlightened oligarchy: an oligarchy in that it is owned by a few and enlightened in that those few are not using their property rights to boss people around. The question arises: what happens to the alternative spontaneous order within Valve if some or all of the owners decide to sell up?
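A rough formalisation of the Coase boundary condition quoted above. The symbols are introduced here purely for illustration; they are not in the article.

```latex
% C_int(q): marginal cost of producing activity q inside the firm
% C_mkt(q): marginal price of contracting it out on the market
% T(q):     marginal transaction cost (bargaining, enforcement, monitoring, ...)
% The firm keeps internalising activities while
%     C_int(q) < C_mkt(q) + T(q),
% and its optimal boundary q* sits where the net marginal cost of contracting
% out (including transaction costs) tends to zero:
\[
  \bigl[\,C_{\mathrm{mkt}}(q^{*}) + T(q^{*})\,\bigr] - C_{\mathrm{int}}(q^{*}) = 0
\]
```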
Tiberius Brastaviceanu

Federated Decision Making v1.0.pdf - Google Docs - 0 views

  •  
    A concept introduced by Roy Zuninga, Tibi's contact
Tiberius Brastaviceanu

Federated Decision Making v1.0.docx - Google Docs - 0 views

  •  
    A concept introduced by Roy Zuninga - Tibi's contact
Tiberius Brastaviceanu

Beyond Blockchain: Simple Scalable Cryptocurrencies - The World of Deep Wealth - Medium - 0 views

  • I clarify the core elements of cryptocurrency and outline a different approach to designing such currencies rooted in biomimicry
  • This post outlines a completely different strategy for implementing cryptocurrencies with completely distributed chains
  • Rather than trying to make one global, anonymous, digital cash
  • ...95 more annotations...
  • we are interested in the resilience that comes from building a rich ecosystem of interoperable currencies
  • What are the core elements of a modern cryptocurrency?
  • Digital
  • Holdings are electronic and only exist and operate by virtue of a community’s agreement about how to interpret digital bits according to rules about operation and accounting of the currency.
  • Trustless
  • don’t have to trust a 3rd party central authority
  • Decentralized
  • Specifically, access, issuance, transaction accounting, rules & policies, should be collectively visible, known, and held.
  • Cryptographic
  • This cryptographic structure is used to enable a variety of people to host the data without being able to alter it.
  • Identity
  • there must be a way to associate these bits with some kind of account, wallet, owner, or agent who can use them
  • Other things that many take for granted in blockchains may not be core but subject to decisions in design and implementation, so they can vary between implementations
  • It does not have to be stored in a synchronized global ledger
  • does not have to be money. It may be a reputation currency, or data used for identity, or naming, etc
  • Its units do not have to be cryptographic tokens or coins
  • It does not have to protect the anonymity of users, although it may
  • if you think currency is only money, and that money must be artificially scarce
  • Then you must tackle the problem of always tracking which coins exist, and which have been spent. That is one approach — the one blockchain takes.
  • You might optimize for anonymity if you think of cryptocurrency as a tool to escape governments, regulations, and taxes.
  • if you want to establish and manage membership in new kinds of commons, then identity and accountability for actions may turn out to be necessary ingredients instead of anonymity.
  • In the case of the MetaCurrency Project, we are trying to support many use cases by building tools to enable a rich ecosystem of communities and current-sees (many are non-monetary) to enhance collective intelligence at all scales.
  • Managing consensus about a shared reality is a central challenge at the heart of all distributed computing solutions.
  • If we want to democratize money by having cryptocurrencies become a significant and viable means of transacting on a daily basis, I believe we need fundamentally more scalable approaches that don’t require expensive, dedicated hardware just to participate.
  • We should not need system wide consensus for two people to do a transaction in a cryptocurrency
  • Blockchain is about managing a consensus about what was “said.” Ceptr is about distributing a consensus about how to “speak.”
  • how nature gets the job done in massively scalable systems which require coordination and consistency
  • Replicate the same processes across all nodes
  • Empower every node with full agency
  • Hold this transformed state locally and reliably
  • Establish protocols for interaction
  • Each speaker of a language carries the processes to understand sentences they hear, and generate sentences they need
  • we certainly don’t carry some kind of global ledger of everything that’s ever been said, or require consensus about what has been said
  • Language IS a communication protocol we learn by emulating the processes of usage.
  • Dictionaries try to catch up when the usage
  • there is certainly no global ledger with consensus about the state of trillions of cells. Yet, from a single zygote’s copy of DNA, our cells coordinate in a highly decentralized manner, on scales of trillions, and without the latency or bottlenecks of central control.
  • Imagine something along the lines of a Java Virtual Machine connected to a distributed version of Github
  • Every time this JVM runs a program it confirms the hash of the code it is about to execute with the hash signed into the code repository by its developers
  • This allows each node that intends to be honest to be sure that they’re running the same processes as everyone else. So when two parties want to do a transaction, each can have confidence in their own code and in the results it produces
  • Then you treat it as authoritative and commit it to your local cryptographically self-validating data store
  • Allowing each node to treat itself as a full authority to process transactions (or interactions via shared protocols) is exactly how you empower each node with full agency. Each node runs its copy of the signed program/processes on its own virtual machine, taking the transaction request combined with the transaction chains of the parties to the transaction. Each node can confirm their counterparty’s integrity by replaying their transactions to produce their current state, while confirming signatures and integrity of the chain
  • If both nodes are in an appropriate state which allows the current transaction, then they countersign the transaction and append to their respective chains. When you encounter a corrupted or dishonest node (as evidenced by a breach of integrity of their chain — passing through an invalid state, broken signatures, or broken links), your node can reject the transaction you were starting to process. Countersigning allows consensus at the appropriate scale of the decision (two people transacting in this case) to lock data into a tamper-proof state so it can be stored in as many parallel chains as you need.
  • When your node appends a mutually validated and signed transaction to its chain, it has updated its local state and is able to represent the integrity of its data locally. As long as each transaction (link in the chain) has valid linkages and countersignatures, we can know that it hasn’t been tampered with.
  • If you can reliably embody the state of the node in the node itself using Intrinsic Data Integrity, then all nodes can interact in parallel, independent of other interactions to maximize scalability and simultaneous processing. Either the node has the credits or it doesn’t. I don’t have to refer to a global ledger to find out, the state of the node is in the countersigned, tamper-proof chain.
  • Just like any meaningful communication, a protocol needs to be established to make sure that a transaction carries all the information needed for each node to run the processes and produce a new signed and chained state. This could be debits or credits to an account which modify the balance, or recoding courses and grades to a transcript which modify a Grade Point Average, or ratings and feedback contributing to a reputation score, and so on.
  • By distributing process at the foundation, and leveraging Intrinsic Data Integrity, our approach results in massive improvements in throughput (from parallel simultaneous independent processing), speed, latency, efficiency, and cost of hardware.
  • You also don’t need to incent people to hold their own record — they already want it.
  • Another noteworthy observation about humans, cells, and atoms, is that each has a general “container” that gets configured to a specific use.
  • Likewise, the Receptors we’ve built are a general purpose framework which can load code for different distributed applications. These Receptors are a lightweight processing container for the Ceptr Virtual Machine Host
  • Ceptr enables a developer to focus on the rules and transactions for their use case instead of building a whole framework for distributed applications.
  • how units in a currency are issued
  • Most people think that money is just money, but there are literally hundreds of decisions you can make in designing a currency to target particular needs, niches, communities or patterns of flow.
  • Blockchain cryptocurrencies are fiat currencies. They create tokens or coins from nothing
  • These coins are just “spoken into being”
  • the challenging task of tracking all the coins that exist to ensure there is no counterfeiting or double-spending
  • You wouldn’t need to manage consensus about whether a cryptocoin is spent, if your system created accounts which have normal balances based on summing their transactions.
  • In a mutual credit system, units of currency are issued when a participant extends credit to another user in a standard spending transaction
  • Alice pays Bob 20 credits for a haircut. Alice’s account now has -20, and Bob’s has +20. (This mutual-credit pattern is sketched in code after this item's notes.)
  • Alice spent credits she didn’t have! True
  • Managing the currency supply in a mutual credit system is about managing credit limits — how far people can spend into a negative balance
  • Notice the net number of units in the system remains zero
  • One elegant approach to managing mutual credit limits is to set them based on actual demand.
  • concerns about manufacturing fake accounts to game credit limits (Sybil Attacks)
  • keep in mind there can be different classes of accounts. Easy to create, anonymous accounts may get NO credit limit
  • What if I alter my code to give myself an unlimited credit limit, then spend as much as I want? As soon as you pass the credit limit encoded in the shared agreements, the next person you transact with will discover you’re in an invalid state and refuse the transaction.
  • If two people collude to commit an illegal transaction by both hacking their code to allow a normally invalid state, the same pattern still holds. The next person they try to transact with using untampered code will detect the problem and decline to transact.
  • Most modern community currency systems have been implemented as mutual credit,
  • Hawala is a network of merchants and businessmen, which has been operating since the middle ages, performing money transfers on an honor system and typically settling balances through merchandise instead of transferring money
  • Let’s look at building a minimum viable cryptocurrency with the hawala network as our use case
  • To minimize key management infrastructure, each hawaladar’s public key is their address or identity on the network. To join the network you get a copy of the software from another hawaladar, generate your public and private keys, and complete your personal profile (name, location, contact info, etc.). You call, fax, or email at least 10 hawaladars who know you, and give them your IP address and ask them to vouch for you.
  • Once 10 other hawaladars have vouched for you, you can start doing other transactions because the protocol encoded in every node will reject a transaction chain that doesn’t start with at least 10 vouches
  • seeding your information with those other peers so you can be found by the rest of the network.
  • As described in the Mutual Credit section, at the time of transaction each party audits the counterparty’s transaction chain.
  • Our hawala crypto-clearinghouse protocol has two categories of transactions: some used for accounting and others for routing. Accounting transactions change balances. Routing transactions maintain network integrity by recording information about hawaladars
  • Accounting Transactions create signed data that changes account balances and contains these fields:
  • The final hash of all of the above fields is used as a unique transaction ID and is what each of party signs with their private keys. Signing indicates a party has agreed to the terms of the transaction. Only transactions signed by both parties are considered valid. Nodes can verify signatures by confirming that decryption of the signature using the public key yields a result which matches the transaction ID.
  • Routing Transactions sign data that changes the peers list and contain these fields:
  • As with accounting transactions, the hash of the above fields is used as the transaction’s unique key and the basis for the cryptographic signature of both counterparties.
  • Remember, instead of making changes to account balances, routing transactions change a node’s local list of peers for finding each other and processing.
  • a distributed network of mutual trust
  • operates across national boundaries
  • everyone already keeps and trusts their own separate records
  • Hawaladars are not anonymous
  • “double-spending”
  • It would be possible for someone to hack the code on their node to “forget” their most recent transaction (drop the head of their chain), and go back to their previous version of the chain before that transaction. Then they could append a new transaction, drop it, and append again.
  • After both parties have signed the agreed upon transaction, each party submits the transaction to separate notaries. Notaries are a special class of participant who validate transactions (auditing each chain, ensuring nobody passes through an invalid state), and then they sign an outer envelope which includes the signatures of the two parties. Notaries agree to run high-availability servers which collectively manage a Distributed Hash Table (DHT) servicing requests for transaction information. As their incentive for providing this infrastructure, notaries get a small transaction fee.
  • This approach introduces a few more steps and delays to the transaction process, but because it operates on independent parallel chains, it is still orders of magnitude more efficient and decentralized than reaching consensus on entries in a global ledger
  • millions of simultaneous transactions could be getting processed by other parties and notaries with no bottlenecks.
  • There are other solutions to prevent nodes from dropping the head of their transaction chain, but the approach of having notaries serve out a DHT solves a number of common objections to completely distributed accounting. Having access to reliable lookups in a DHT provides a similar big picture view that you get from a global ledger. For example, you may want a way to look up transactions even when the parties to that transaction are offline, or to be able to see the net system balance at a particular moment in time, or identify patterns of activity in the larger system without having to collect data from everyone individually.
  • By leveraging Intrinsic Data Integrity to run numerous parallel tamper-proof chains you can enable nodes to do various P2P transactions which don’t actually require group consensus. Mutual credit is a great way to implement cryptocurrencies to run in this peered manner. Basic PKI with a DHT is enough additional infrastructure to address main vulnerabilities. You can optimize your solution architecture by reserving consensus work for tasks which need to guarantee uniqueness or actually involve large scale agreement by humans or automated contracts.
  • It is not only possible, but far more scalable to build cryptocurrencies without a global ledger consensus approach or cryptographic tokens.
  •  
    Article written by Arthur Brock, founder of the MetaCurrency Project and of Ceptr.
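A minimal Python sketch of the mutual-credit pattern described in the notes above: the net supply stays at zero, spending is bounded by a credit limit, and each party keeps its own hash-linked chain of countersigned transactions and audits the counterparty by replaying that chain. This is an illustration of the idea, not the Ceptr or MetaCurrency implementation; the transaction ID is a hash of the fields (as in the hawala example), and the "signatures" are just party names rather than real public-key cryptography.

```python
# Mutual-credit sketch: balances sum to zero, spending is capped by a credit
# limit, and each agent holds its own hash-linked chain of countersigned
# transactions, auditing the counterparty by replaying that chain.
# Illustration only: transaction IDs are hashes of the fields and the
# "signatures" are stand-ins, not real cryptographic keys.

import hashlib
import json

CREDIT_LIMIT = -100   # how far an account may spend into a negative balance

def tx_hash(fields: dict) -> str:
    """Hash of the transaction fields, used as its unique ID."""
    return hashlib.sha256(json.dumps(fields, sort_keys=True).encode()).hexdigest()

class Agent:
    def __init__(self, name: str):
        self.name = name
        self.chain = []   # this agent's local, append-only transaction chain

    def balance(self) -> int:
        """Derive the current balance by replaying the local chain."""
        return sum(tx["amount"] if tx["to"] == self.name else -tx["amount"]
                   for tx in self.chain)

    def chain_is_valid(self) -> bool:
        """Audit: links, countersignatures, and no state past the credit limit."""
        prev, bal = None, 0
        for tx in self.chain:
            if tx["prev"][self.name] != prev or len(tx["signed_by"]) != 2:
                return False
            bal += tx["amount"] if tx["to"] == self.name else -tx["amount"]
            if bal < CREDIT_LIMIT:
                return False            # passed through an invalid state
            prev = tx["id"]
        return True

def transact(spender: Agent, receiver: Agent, amount: int) -> bool:
    """Countersigned transaction appended to both parties' chains."""
    # Each party first audits the counterparty's chain by replaying it.
    if not (spender.chain_is_valid() and receiver.chain_is_valid()):
        return False
    if spender.balance() - amount < CREDIT_LIMIT:
        return False                    # would breach the agreed credit limit
    fields = {
        "from": spender.name, "to": receiver.name, "amount": amount,
        # each party links the transaction to the head of its own chain
        "prev": {spender.name: spender.chain[-1]["id"] if spender.chain else None,
                 receiver.name: receiver.chain[-1]["id"] if receiver.chain else None},
        "signed_by": [spender.name, receiver.name],   # stand-in for real signatures
    }
    tx = dict(fields, id=tx_hash(fields))
    spender.chain.append(tx)
    receiver.chain.append(dict(tx))
    return True

alice, bob = Agent("Alice"), Agent("Bob")
print(transact(alice, bob, 20))          # True: Alice pays Bob 20 credits
print(alice.balance(), bob.balance())    # -20 20, so the net supply stays zero
```

In a real deployment the countersignatures would be private-key signatures over the transaction hash, and, as the notary/DHT passage above suggests, extra infrastructure would guard against a party quietly dropping the head of its chain.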
Tiberius Brastaviceanu

Free-Form Authority Models - P2P Foundation - 0 views

  • ‘authority models’ in peer production; contrasts owner-centric authority models with free-form models
  • define the authority models at work in such projects. The models define access and the workflow, and whether there is any quality control.
  • In the owner-centric model, entries can only be modified with the permission of a specific ‘owner’ who has to defend the integrity of his module.
  • ...11 more annotations...
  • The free-form model connotes more of a sense that all users are on the “same level," and that expertise will be universally recognized and deferred to.
  • the owner-centric authority model assumes the owner is the de facto expert in the topic at hand
  • In the case of the Wikipedia, the adherents of the owner-centric model, active in the pre-Wikipedia "Nupedia" model, lost out, and, presumably, the success of Wikipedia has proven them wrong
  • dominance of difficult people, trolls, and their enablers
  • Far too much credence and respect accorded to people who in other Internet contexts would be labelled "trolls."
  • Wikipedia has, to its credit, done something about the most serious trolling and other kinds of abuse: there is an Arbitration Committee that provides a process whereby the most disruptive users of Wikipedia can be ejected from the project. But there are myriad abuses and problems that never make it to mediation, let alone arbitration.
  • most people working on Wikipedia--the constant fighting can be so off-putting as to drive them away
  • any person who can and wants to work politely with well-meaning
  • root problem: anti-elitism, or lack of respect for expertise.
  • Wikipedia lacks the habit or tradition of respect for expertise
  • nearly everyone with much expertise but little patience will avoid editing Wikipedia
  •  
    From the P2P Foundation