
ODI Data Infrastructure, Institutions & Data Access Initiatives – Group items tagged UK


jeni10

UK Research and Development Roadmap - GOV.UK - 0 views

  • Another underutilised lever is procurement, in which government and public service providers can act as early adopters and first customers for new technologies and ways of working.
  • Build on our innovation-enabling infrastructure: Innovation happens throughout the UK, but access to the right support and facilities is not consistently available. Wherever we have high-quality R&D infrastructure we need to take full advantage of it, by encouraging the creation of new innovation zones and clusters of innovative firms around existing infrastructure across the UK. We will consider the full range of levers in doing so, from direct funding, to business support, to government’s ability to convene different actors and promote new opportunities. We want to build on our Catapult Network’s existing performance, boosting the benefits the network brings to local economies and addressing major challenges such as net zero carbon emissions. We will review whether they should all continue in their current form, exploring the potential to seize new opportunities. We will consider the opportunities provided by PSREs and other publicly funded research institutes, including establishing how government can best drive innovation through these organisations, for example through proof of concept for innovations and better sharing of ideas. To support publicly funded research infrastructures to make the most of their innovations, we will establish a fund to invest in innovative public sector ideas and a new unit to scout for and develop these opportunities.
  • Taking forward our National Data Strategy, we will improve access to trusted data resources at local and regional levels, improving the availability of evidence at those levels and giving local leaders what they need to build robust R&D ecosystems.
  • ...4 more annotations...
  • Data that could drive new discoveries or innovation is not always as available as it could be.
  • make the most of PSREs, which have the potential to deliver broad public policy objectives and help innovation translation; enable work across institutions to solve the grand challenges of our time; make the most of our institutions, to use research to improve both UK and devolved policy outcomes and to measure and refine programme performance
  • Crucially, we must embrace the potential of open research practices. First, we will require that research outputs funded by the UK government are freely available to the taxpayer who funds research. Such open publication will also ensure that UK research is cited and built on all over the world. We will mandate open publication and strongly incentivise open data sharing where appropriate, so that reproducibility is enabled, and knowledge is shared and spread collaboratively. Second, we will ensure that more modern research outputs are recognised and rewarded. For example, we will ensure that digital software and datasets are properly recognised as research outputs, so that we can minimise efforts spent translating digital outputs into more traditional formats. Third, we will consider the case for new infrastructure to enable more effective sharing of knowledge between researchers and with industry to accelerate open innovation where possible.
  • PSREs and other publicly funded research institutes: Public sector research establishments (PSREs) and other publicly funded institutes – including UKRI-funded institutes and institutes funded by the devolved administrations – are a diverse collection of bodies carrying out research. This research supports government objectives, including informing policy-making, statutory and regulatory functions and providing a national strategic resource in key areas of research. They can also provide emergency response services. They interact with businesses around a wide array of innovation-related functions. We want to get the most value out of the whole range of PSREs and publicly funded institutes, laboratories and campuses. The current PSRE and institute landscape is complex. There is an opportunity to raise awareness and support development of strategic national laboratory capability, develop closer relationships across the network of PSREs and institutes to address cross-cutting priorities, and develop more consistent, co-ordinated and accessible funding for PSREs. Most programmes do not include funding for the full costs of overheads – this sometimes prevents our national laboratories from participating in UK government priority programmes without making a loss. A more flexible approach, funding a higher proportion of the economic costs, would increase spending efficiency, encourage more effective investments and maximise their benefits.
Building on the 2019 Science Capability Review, we will: champion the development of a truly strategic, national laboratory capability and identify opportunities to strengthen their capabilities and ability to collaborate, especially with the private sector, devolved administrations and local civic authorities; work to understand current capacity and capability, including spare capability, and ensure that national labs, PSREs and other publicly funded institutes are working together as part of business as usual rather than only in times of crisis; and explore the potential for all PSREs to have access to more funding opportunities from UKRI, so that PSREs are viewed as national assets rather than the property of UK government departments.
Sonia Duarte

Clinical Service Accreditation - HQIP - 0 views

  • CSA provides this quality improvement framework in three main ways: by supporting clinical services to develop new accreditation schemes; by supporting existing clinical services accreditation schemes to attain professionally-led standards; and by demonstrating the impact of accreditation in achieving high-quality clinical services
  • HQIP (on behalf of the Clinical Service Accreditation Sponsor Group), BSI and the United Kingdom Accreditation Service (UKAS) share a common purpose in improving the quality of healthcare services through standards and accreditation.
  • These resources provide professional bodies, CEOs, managers and clinical leaders with a framework for setting up and managing clinical accreditation schemes.
  • ...1 more annotation...
  • Development of standards for clinical service accreditation schemes; sharing and improving accreditation methodologies; how to map clinical services into groupings for the development of accreditation schemes; PAS 1616 – a generic framework of standards for accrediting clinical services from the British Standards Institute; information management, data, and systems; support for development of accreditation schemes
Ben Snaith

Patterns of data institution that support people to steward data themselves, or become ... - 0 views

  • it enables people to contribute data about them to it and, on a case-by-case basis, people can choose to permit third parties to access that data. This is the pattern that many personal data stores and personal data management systems adopt in holding data and enabling users to unlock new apps and services that can plug into it. Health Bank enables people to upload their medical records and other information like wearable readings and scans to share with doctors or ‘loved ones’ to help manage their care; Japan’s accredited information banks might undertake a similar role. Other examples — such as Savvy and Datacoup — seem to be focused on sharing data with market research companies willing to offer a form of payment. Some digital identity services may also conform to this pattern.
  • it enables people to contribute data about them to it and, on a case-by-case basis, people can choose whether that data is shared with third parties as part of aggregate datasets. OpenHumans is an example that enables communities of people to share data for group studies and other activities. Owners of a MIDATA account can “actively contribute to medical research and clinical studies by granting selective access to their personal data”. The approach put forward by the European DECODE project would seem to support this type of individual buy-in to collective data sharing, in that case with a civic purpose. The concept of data unions advocated by Streamr seeks to create financial value for individuals by creating aggregate collections of data in this way. Although Salus Coop asks its users to “share and govern [their] data together.. to put it at the service of collective return”, it looks as though individuals can choose which uses to put it to.
  • it enables people to contribute data about them to it and decisions about what third parties can access aggregate datasets are taken collectively. As an example, The Good Data seeks to sell browsing data generated by its users “entirely on their members’ terms… [where] any member can participate in deciding these rules”. The members of the Holland Health Data Cooperative would similarly appear to “determine what happens to their data” collectively, as would drivers and other workers who contribute data about them to Workers Info Exchange.
  • ...6 more annotations...
  • it enables people to contribute data about them and defer authority to it to decide who can access the data. A high-profile proposal of this pattern comes in the form of ‘bottom-up data trusts’ — Mozilla Fellow Anouk Ruhaak has described scenarios where multiple people “hand over their data assets or data rights to a trustee”. Some personal data stores and personal information management systems will also operate under this kind of delegated authority within particular parameters or settings.
  • people entrust it to mediate their relationships with services that collect data about them. This is more related to decisions about data collection rather than decisions about access to existing data, but involves the stewardship of data nonetheless. For example, Tom Steinberg has described a scenario whereby “you would nominate a Personal Data Representative to make choices for you about which apps can do what with your data.. [it] could be a big internet company, it could be a church, it could be a trade union, or it could be a dedicated rights group like the Electronic Frontier Foundation”. Companies like Disconnect.Me and Jumbo are newer examples of this type of approach in practice.
  • it enables people to collect or create new data. Again, this pattern describes the collection rather than the re-use of existing data. For example, OpenBenches enables volunteers to contribute information about memorial benches, and OpenStreetMap does similar at much larger scale to collaboratively create and maintain a free map of the world. The ODI has published research into well-known collaboratively maintained datasets, including Wikidata, Wikipedia and MusicBrainz, and a library of related design patterns. I’ve included this pattern here as to me it represents a way for people to be directly involved in the stewardship of data, personal or not.
  • it collects data in providing a service to users and, on a case-by-case basis, users can share that data directly with third parties. This pattern enables users to unlock new services by sharing data about them (such as via Open Banking and other initiatives labelled as ‘data portability’), or to donate data for broader notions of good (such as Strava’s settings that enable its users to contribute data about them to aggregate datasets shared with cities for planning). I like IF’s catalogue of approaches for enabling people to permit access to data in this way, and its work to show how services can design for the fact that data is often about multiple people.
  • it collects data by providing a service to users and shares that data directly with third parties as provisioned for in its Terms and Conditions. This typically happens when we agree to Ts&Cs that allow data about us to be shared with third parties of an organisation’s choice, such as for advertising, and so might be considered a ‘dark’ pattern. However, some data collectors are beginning to do this for more public, educational or charitable purposes — such as Uber’s sharing of aggregations of data with cities via the SharedStreets initiative. Although the only real involvement we have here in stewarding data is in choosing to use the service, might we not begin to choose between services, in part, based on how well they act as data institutions?
  • I echo the point that Nesta recently made in their paper on ‘citizen-led data governance’, that “while it can be useful to assign labels to different approaches, in reality no clear-cut boundary exists between each of the models, and many of the models may overlap”
Ben Snaith

The state of weather data infrastructure (white paper) - 1 views

  • From its early beginnings over 150 years ago, weather forecasting at the Met Office has been driven by data. Simple observations recorded and used to hand-plot synoptic charts have been exchanged for the billions of observations received and handled every day, mainly from satellites but also from weather stations, radar, ocean buoys, planes, shipping and the public.
  • The key stages of the weather data value chain are as follows: monitoring and observation of the weather and environment, e.g. by NMSs; numerical weather prediction (NWP) and climate modelling carried out by NMSs to create global, regional and limited-area weather forecasts (private companies are growing their presence in the market and challenging the traditional role of NMSs to provide forecasts to the public, by statistically blending data from NMS models to create their own forecast models, for example; other companies providing data via online channels and/or apps include The Weather Company, AccuWeather or the Climate Corporation); communication and dissemination of forecasts by news, NMS and media organisations like the BBC, Yahoo and Google, or within consumer-targeted mobile and web applications; and decision-making by individuals and businesses across a variety of sectors, which draws on weather data and reporting.
  • The core data asset of our global weather data infrastructure is observation data that captures a continuous record of weather and climate data around the world. This includes temperature, rainfall, wind speed and details of a host of other atmospheric, surface and marine conditions.
  • ...5 more annotations...
  • The collection of observation data is a global effort. The Global Observing System consists of around 11,000 ground-based monitoring stations supplemented with thousands of sensors installed on weather balloons, aircraft and ships. Observations are also collected from a network of radar installations and satellite-based sensors. As we see later, the ‘official’ observation system is increasingly being supplemented with new sources of observation data from the Internet of Things (IoT).
  • Ensemble model forecasts aim to give an indication of the range of possible future states of the atmosphere and oceans (which are a significant driver of the weather in the atmosphere). This overcomes errors introduced by imperfect measurement of initial starting conditions that are then amplified by the chaotic nature of the atmosphere. Increasing the number of forecast members over a global scale and at higher resolutions results in data volumes increasing exponentially.
  • Created in 1950, the World Meteorological Organisation (WMO) is made up of 191 member states and territories around the world. The WMO was founded on the principle that global coordination was necessary to reap the benefits of weather and climate data. This includes a commitment to weather data and products being freely exchanged around the world (Obasi, 1995).
  • While the WMO has a global outlook, its work is supplemented by regional meteorological organisations like the European Centre for Medium Range Weather Forecasts (ECMWF) and NMSs, such as the Met Office in the UK.
  • New sources of weather observation data are steadily emerging. In recent years, services like Weather Underground and the Met Office’s Weather Observation Website have demonstrated the potential for people around the world to contribute weather observations about their local areas – using low-cost home weather stations and sensors, for example. But there is now potential for sensors in cities, homes, cars, cell towers and even mobile phones to contribute observational data that could also be fed into forecast models.
fionntan

UKAS : The Benefits - 1 views

shared by fionntan on 15 Jun 20
  • It provides the following benefits. Competitive advantage: accreditation provides independent assurance that your staff is competent; it can set you apart from the competition and enable you to compete with larger organisations. Market access: accreditation is specified by an increasing number of public and private sector organisations, and UKAS accreditation is also recognised and accepted globally, therefore opening up opportunities overseas. Improved efficiency: accreditation can highlight gaps in capability, thereby providing the opportunity for improved organisational efficiency and outputs. Lower premiums: a number of insurance brokers and underwriters recognise accreditation as an important factor in assessing risk, and can therefore offer lower premiums.
  • Organisations can save time and money by selecting an accredited and therefore competent supplier.
  • Provide an alternative to Regulation whilst ensuring the reliability of activities that have the potential to impact on public confidence, health and safety or the environment.
  • ...1 more annotation...
  • Accreditation is a means of assessing, in the public interest, the technical competence and integrity of organisations offering evaluation services. Accreditation, with its many potential benefits for the quality of goods and the provision of services throughout the supply chain, underpins practical applications of an increasingly wide range of activities across all sectors of the economy, from fishing to forestry, construction to communications. Independent research has confirmed that accreditation contributes a positive economic value of nearly £1bn to the UK economy each year.
fionntan

Declaration of Conformity - Work equipment and machinery - 0 views

  • The precise requirements are specified in each relevant directive, but essentially Declarations of Conformity should include the following: the business name and full address of the manufacturer and, where appropriate, his authorised representative; for machinery, the name and address of the person authorised to compile the technical file, who must be established in the Community; a description and identification of the product, which may include information on model, type, and serial number; a declaration that the product fulfils all relevant provisions of the applicable Directive(s); where appropriate, a reference to the harmonised standards used, and to which conformity is declared; where appropriate, the name, address and identification number of the notified body which carried out conformity assessment; and the identity and signature of the person empowered to draw up the declaration on behalf of the manufacturer or his authorised representative
  • When must a Declaration of Conformity be provided? For most new products the Declaration of Conformity must accompany the product through the supply chain to the end user.
  • This document should declare key information, including: the name and address of the organisation taking responsibility for the product; a description of the product; a list of the product safety Directives it complies with (it may also include details of relevant standards used); and it must be dated and signed by a representative of the organisation placing it on the EU/EEA market.
Sonia Duarte

How do different communities create unique identifiers? - Lost Boy - 0 views

  • They play an important role, helping to publish, structure and link together data.
  • The simplest way to generate identifiers is by a serial number.
  • the Ordnance Survey TOID identifier is a serial number that looks like this: osgb1000006032892. UPRNs are similar.
  • ...15 more annotations...
  • Some serial numbering systems include built in error-checking to deal with copying errors, using a check digit.
  • The second way of providing unique identifiers is using a name or code.
  • These are typically still assigned by a central authority, sometimes known as a registration agency, but they are constructed in different ways.
  • Encoding information about geography and hierarchy within codes can be useful. It can make them easier to validate.
  • It also means you can manipulate them.
  • But encoding lots of information in identifiers also has its downsides. The main one being dealing with changes to administrative areas that mean the hierarchy has changed. Do you reassign all the identifiers?
  • some identifier systems look at reducing the burden on that central authority.
  • federated assignment. This is where the registration agency shares the work of assigning identifiers with other organisations.
  • Another approach to reducing dependence on, and coordination with, a single registration agency is to use what I’ll call “local assignment”.
  • A simplistic approach to local assignment is “block allocation”: handing out blocks of pregenerated identifiers to organisations which can locally assign them.
  • Here the registration agency still generates the identifiers, but the assignment of identifier to “thing” is done locally.
  • A more common approach is to use “prefix allocation”. In this approach the registration agency assigns individual organisations a prefix within the identifier system.
  • One challenge with prefix allocation is ensuring that the rules for locally assigned suffixes work in every context where the identifier needs to appear.
  • In distributed assignment of identifiers, anyone can create an identifier. Rather than requesting an identifier, or a prefix from a registration agency, these systems operate by agreeing rules for how unique identifiers can be constructed.
  • A hash-based identifier takes some properties of the thing we want to identify and then uses those to construct an identifier.
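The check-digit idea described above can be sketched with the Luhn algorithm, the mod-10 scheme used for payment card numbers (an illustrative choice; the article does not name a specific scheme):

```python
def luhn_check_digit(payload: str) -> str:
    """Compute the Luhn check digit for a string of decimal digits."""
    total = 0
    # Walk the payload right to left, doubling every second digit
    # (starting with the rightmost, which sits next to the check digit).
    for i, ch in enumerate(reversed(payload)):
        d = int(ch)
        if i % 2 == 0:
            d *= 2
            if d > 9:
                d -= 9  # same as summing the two digits of the product
        total += d
    return str((10 - total % 10) % 10)


def is_valid(identifier: str) -> bool:
    """True if the final digit is the correct check digit for the rest."""
    return luhn_check_digit(identifier[:-1]) == identifier[-1]


print(luhn_check_digit("7992739871"))  # 3
print(is_valid("79927398713"))         # True
```

A single mistyped digit, and most adjacent transpositions, change the total mod 10, so the copying error is caught before the identifier is ever looked up.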
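A minimal sketch of the prefix-allocation pattern, assuming a hypothetical DOI-style prefix ("10.5555" is the reserved example prefix) and a simple local counter for suffixes:

```python
import itertools


class PrefixAllocator:
    """Sketch of 'prefix allocation': a registration agency hands each
    organisation a unique prefix; the organisation mints suffixes locally,
    with no further coordination with the agency."""

    def __init__(self, prefix: str):
        self.prefix = prefix
        self._counter = itertools.count(1)  # local, sequential suffixes

    def mint(self) -> str:
        return f"{self.prefix}/{next(self._counter)}"


acme = PrefixAllocator("10.5555")  # prefix assigned once by the agency
print(acme.mint())  # 10.5555/1
print(acme.mint())  # 10.5555/2
```

Global uniqueness follows from the prefixes being unique: two organisations can mint the same suffix without ever colliding.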
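The hash-based pattern can be illustrated by deriving an identifier from the thing's own properties, here using SHA-256 over canonicalised key/value pairs (the field names and truncation length are illustrative assumptions, not from the article):

```python
import hashlib


def hash_identifier(properties: dict) -> str:
    """Derive a stable identifier from the properties of the thing itself.

    The key/value pairs are canonicalised (sorted keys, unambiguous
    separators) so the same properties always yield the same identifier,
    regardless of the order in which they were supplied.
    """
    canonical = "\x1f".join(
        f"{key}\x1e{properties[key]}" for key in sorted(properties)
    )
    # Truncated digest, purely for readability in this sketch.
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:16]


bench_id = hash_identifier(
    {"name": "Waterloo Bridge", "lat": "51.5081", "lon": "-0.1166"}
)
```

Because anyone applying the same rules to the same properties derives the same identifier, no registration agency is needed; the trade-off is that changing any property yields a different identifier.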
Ben Snaith

Diamond: The First Cut (PDF) - 0 views

  • Currently, we are also unable to ascertain the extent to which our data sample is representative of the workforce it is trying to capture. Although we are reporting on 80,804 contributions from 5,904 contributors, the response rate is relatively low (24.3% of those invited to submit data). The low response rate and self-selecting nature of Diamond means there is the possibility of bias in the data we present here. We are taking this into account and will consider it as we undertake an equality analysis of the system one year on.
  • Diamond collects: • Actual diversity data (across six protected characteristic groups) from individuals (contributors) who have a role in making television, whether on- or off-screen; and • Perceived diversity data (across the six protected characteristics) of the on-screen contributors (i.e. diversity characteristics as viewers might perceive them).
  • Diamond is collecting: • Actual diversity data (from those making and appearing on television, including freelancers) and Perceived diversity data (how the viewer might perceive those they see on television) • Data across six protected characteristic groups: gender, gender identity, age, ethnicity, sexual orientation, and disability • Data from those making a significant contribution to a programme • Data from original programmes only, commissioned by the current five Diamond broadcasters for UK transmission • Data from programmes across all genres (although we do not currently report on news and sport) broadcast on a total of 30 channels across the five Diamond broadcasters.
  • ...2 more annotations...
  • Diamond diversity data is collected via the online platform Silvermouse, which is already used by many broadcasters and production companies to collect and manage other kinds of information about the programmes they make.
  • Diamond does not collect: • Data from programmes which have not been commissioned by the five Diamond broadcasters • Data on people working across broadcasting more generally, outside of production (in other words, our data are not overall workforce statistics ) • Data where it is impractical to do so and where relevant privacy notices cannot be given. (Diamond does not collect data from every person appearing on television as part of a crowd scene, for example.)
  • This report provides an initial view of the data that has been collected and made available to CDN since Diamond went live on 15 August 2016.
Ben Snaith

Why we're calling for a data collective - The Catalyst - 0 views

  • We propose forming a data collective: a conscious, coordinated effort by a group of organisations with expertise in gathering and using data in the charity sector. We want to make sure that people in charities, on the front line and in leadership positions have access to the information they need, in a timely fashion, in the easiest possible format to understand, with the clearest possible analysis of what it means for them.
  • "Social Economy Data Lab"