
Process Control Block - 0 views
-
computersciencej on 03 Mar 18: What is a Process Control Block? Today in this Computer Science Study Material for GATE we discuss the process control block and the various fields that hold information about a process. A Process Control Block (PCB) is a data structure maintained by the operating system for every process; each process has its own. When a process is created, it is assigned a unique process id, and the operating system uses this id to identify the process among all others. The PCB keeps all the information needed to keep track of a process: typically the process id, process state, program counter, CPU registers, and scheduling, memory-management, and I/O status information. To read the full tutorial, follow this link: http://www.computersciencejunction.in/2018/02/introduction-to-process-control-block-in-operating-system.html
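As a rough illustration, a PCB can be sketched as a record type. The field names below are textbook conventions, not any real kernel's layout (Linux's equivalent, task_struct, holds far more):

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ProcessControlBlock:
    """Toy PCB: one instance per process, identified by its unique pid."""
    pid: int                                                  # unique id assigned at creation
    state: str = "new"                                        # new, ready, running, waiting, terminated
    program_counter: int = 0                                  # address of the next instruction
    registers: Dict[str, int] = field(default_factory=dict)   # saved CPU context
    priority: int = 0                                         # scheduling information
    open_files: List[int] = field(default_factory=list)       # I/O status information
    base_register: int = 0                                    # memory-management information
    limit_register: int = 0

pcb = ProcessControlBlock(pid=42, state="ready")
print(pcb.pid, pcb.state)
```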

TCP/IP model questions based Study Material for gate Computer Science - 0 views
-
TCP/IP protocol questions for the GATE computer science exam. Q1. What is the difference between the transport and session layers of the OSI model?

Answer:

OSI Model Transport Layer: The transport layer uses the services provided by the network layer, such as best-path selection and logical addressing, to provide end-to-end communication between source and destination. The transport-layer data stream is a logical connection between the endpoints of a network. End-to-end control is provided by sliding windows, and reliability by sequence numbers and acknowledgments. The transport layer regulates information flow to ensure end-to-end connectivity between host applications reliably and accurately. The TCP/IP suite has two protocols at Layer 4 (the transport layer): TCP and UDP. The transport layer accepts data from the session layer and segments it for transport across the network. Generally, the transport layer is responsible for making sure that data is delivered error-free and in the proper sequence. Flow control generally occurs at the transport layer.

OSI Model Session Layer: The session layer establishes, manages, and terminates communication sessions between applications located on different network devices. These sessions consist of service requests and service responses, coordinated by protocols implemented at the session layer. Communication sessions can be thought of as mini-conversations between the applications. The session layer also decides whether to use two-way simultaneous communication or two-way alternate communication.
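As a toy illustration of the sequence-number-and-acknowledgment idea mentioned above, here is a stop-and-wait sketch in Python; the loss model and function name are invented for this example, not a real protocol implementation:

```python
import random

def stop_and_wait(messages, loss_rate=0.3):
    """Number each segment and resend it until the matching ack arrives."""
    delivered = []
    for seq, payload in enumerate(messages):
        while True:
            arrived = random.random() > loss_rate   # the channel may drop it
            if arrived:
                delivered.append((seq, payload))    # receiver acks this seq
                break                               # ack received: next segment
            # no ack before the timeout: retransmit the same sequence number
    return delivered

print(stop_and_wait(["a", "b", "c"]))  # always [(0, 'a'), (1, 'b'), (2, 'c')]
```

Real transport protocols keep several segments in flight at once (a sliding window) rather than one at a time, but the sequencing and retransmission logic is the same idea.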
Simulated annealing - Wikipedia, the free encyclopedia - 1 views
-
Simulated annealing (SA) is a generic probabilistic metaheuristic for the global optimization problem of applied mathematics, namely locating a good approximation to the global minimum of a given function in a large search space. It is often used when the search space is discrete (e.g., all tours that visit a given set of cities). For certain problems, simulated annealing may be more effective than exhaustive enumeration — provided that the goal is merely to find an acceptably good solution in a fixed amount of time, rather than the best possible solution. The name and inspiration come from annealing in metallurgy, a technique involving heating and controlled cooling of a material to increase the size of its crystals and reduce their defects. The heat causes the atoms to become unstuck from their initial positions (a local minimum of the internal energy) and wander randomly through states of higher energy; the slow cooling gives them more chances of finding configurations with lower internal energy than the initial one. By analogy with this physical process, each step of the SA algorithm replaces the current solution by a random "nearby" solution, chosen with a probability that depends on the difference between the corresponding function values and on a global parameter T (called the temperature), that is gradually decreased during the process. The dependency is such that the current solution changes almost randomly when T is large, but increasingly "downhill" as T goes to zero. The allowance for "uphill" moves saves the method from becoming stuck at local minima—which are the bane of greedier methods. The method was independently described by S. Kirkpatrick, C. D. Gelatt and M. P. Vecchi in 1983 [1], and by V. Černý in 1985 [2]. The method is an adaptation of the Metropolis-Hastings algorithm, a Monte Carlo method to generate sample states of a thermodynamic system, invented by N. Metropolis et al. in 1953 [3].
-
A natural AI approach
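To make the loop concrete, here is a minimal Python sketch of the procedure described above. The geometric cooling schedule, neighbor function, and toy objective are illustrative choices, not part of any canonical implementation:

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, cooling=0.995, steps=10000):
    """Minimize `cost`, starting from x0, with geometric cooling."""
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        y = neighbor(x)                     # pick a random "nearby" solution
        fy = cost(y)
        # Always accept downhill moves; accept uphill moves with
        # probability exp(-(fy - fx) / t) -- the Metropolis criterion.
        if fy < fx or random.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling                        # gradually lower the temperature
    return best, fbest

# Toy usage: a bumpy 1-D function with many local minima.
f = lambda x: x * x + 10 * math.sin(x)
step = lambda x: x + random.uniform(-1.0, 1.0)
print(simulated_annealing(f, step, x0=5.0))
```

When t is large the acceptance probability is close to 1 and the walk is almost random; as t approaches zero, uphill moves are almost never accepted and the search becomes greedy, exactly as described above.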
Model-view-controller - Wikipedia, the free encyclopedia - 0 views
-
Model–view–controller (MVC) is an architectural pattern used in software engineering. Successful use of the pattern isolates business logic from user interface considerations, resulting in an application where it is easier to modify either the visual appearance of the application or the underlying business rules without affecting the other. In MVC, the model represents the information (the data) of the application; the view corresponds to elements of the user interface such as text, checkbox items, and so forth; and the controller manages the communication of data and the business rules used to manipulate the data to and from the model.
-
MVC is often seen in web applications, where the view is the actual HTML or XHTML page, and the controller is the code that gathers dynamic data and generates the content within the HTML or XHTML. Finally, the model is represented by the actual content, which is often stored in a database or in XML nodes, and the business rules that transform that content based on user actions.
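A minimal, self-contained Python sketch of the three roles; the class and method names are invented for illustration:

```python
class Model:
    """Holds the application's data and business rules."""
    def __init__(self):
        self._items = []

    def add_item(self, name):
        if not name:                        # a business rule lives in the model
            raise ValueError("name required")
        self._items.append(name)

    def items(self):
        return list(self._items)

class View:
    """Renders data; knows nothing about business rules."""
    def render(self, items):
        print("Items:", ", ".join(items) or "(none)")

class Controller:
    """Translates user actions into model updates and view refreshes."""
    def __init__(self, model, view):
        self.model, self.view = model, view

    def handle_add(self, name):             # e.g. a form submission
        self.model.add_item(name)
        self.view.render(self.model.items())

ctrl = Controller(Model(), View())
ctrl.handle_add("first entry")              # prints: Items: first entry
```

Because the view never touches the model's rules and the model never renders anything, either side can be changed without affecting the other, which is exactly the isolation the pattern is after.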
Home · Solid - 0 views
-
"All of your data, under your control Solid lets people store their data securely in decentralized data stores called Pods. Pods are like secure personal web servers for data. All data in a pod is accessible via the Solid Protocol. When data is stored in someone's pod, they control who and what can access it. Solid is led by the inventor of the Web, Sir Tim Berners-Lee, to help realise his vision for its future."
Colocation Hosting Services - 0 views
-
Colocation allows you to place your server machine in someone else's rack and share their bandwidth as your own. Choosing colocation hosting services from Rackbank Datacenter is a cost-effective option for businesses that need a stable, high-performance network while maintaining total control over hardware and server administration.
List of network protocols - Wikipedia, the free encyclopedia - 0 views
-
IPv4 Internet Protocol version 4
-
IPv6 Internet Protocol version 6
-
[Translated from Arabic:] This is supposed to be the new protocol, which will hopefully be rolled out later on.
-
Currently there are two types of Internet Protocol (IP) addresses in active use: IP version 4 (IPv4) and IP version 6 (IPv6). IPv4 was initially deployed on 1 January 1983 and is still the most commonly used version. IPv4 addresses are 32-bit numbers often expressed as 4 octets in "dotted decimal" notation (for example, 192.0.2.53). Deployment of the IPv6 protocol began in 1999. IPv6 addresses are 128-bit numbers and are conventionally expressed using hexadecimal strings (for example, 2001:0db8:582:ae33::29). Source: http://www.iana.org/numbers/ Thanks, tecno, for your useful comments :) keep it up (Y)
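Python's standard ipaddress module parses both notations, which makes the 32-bit vs 128-bit distinction easy to check directly:

```python
import ipaddress

v4 = ipaddress.ip_address("192.0.2.53")             # dotted-decimal IPv4
v6 = ipaddress.ip_address("2001:db8:582:ae33::29")  # hexadecimal IPv6

print(v4.version, v4.max_prefixlen)  # 4 32  -> a 32-bit address
print(v6.version, v6.max_prefixlen)  # 6 128 -> a 128-bit address
print(v6.exploded)                   # full, uncompressed hexadecimal form
```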
-
TCP Transmission Control Protocol
-
UDP User Datagram Protocol
-
The difference between TCP and UDP is that TCP checks that packets are delivered correctly, while UDP just sends and doesn't check. So UDP is faster than TCP, but TCP guarantees that data arrives correctly. That is why UDP is used in applications like video streaming (YouTube) and voice streaming.
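A tiny Python sketch of UDP's fire-and-forget behavior; the loopback address and buffer size are arbitrary choices for the example:

```python
import socket

# UDP is connectionless: no handshake, no acknowledgments, no retransmission.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))       # let the OS pick a free port
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello", addr)         # fire and forget: no delivery guarantee

data, peer = receiver.recvfrom(1024)  # arrives here only because it's loopback
print(data)                           # b'hello'
```

A TCP version of the same exchange would first need connect() and accept() to establish the connection, and the stack would acknowledge and, if necessary, retransmit every segment, which is exactly the overhead that makes UDP preferable for streaming.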
-
check this http://www.skullbox.net/tcpudp.php
PureOS - 0 views
Join the Battle for Net Neutrality - 0 views
-
"This is a battle for the future of the internet Comcast & Verizon want to end net neutrality so they can control what we see & do online. In 66 days, the FCC will let them, unless we stop it. This is a battle for the Internet's future. Before you do anything else, send a letter to the FCC & Congress now!"
A Million Dollar Request for Social Networks | Blockstack - 0 views
-
"It's time for a new breed of social networks - where power is taken back from a single authority and control is returned to you, to me, to all of us. It's time to decentralize social networks. This is a movement toward greater personal freedom, but it takes an empowered community to build this future."
Venho.ai - 0 views
Deep Blue (chess computer) - Wikipedia, the free encyclopedia - 0 views
-
Deep Blue was a chess-playing computer developed by IBM. On May 11, 1997, the machine won a six-game match by two wins to one, with three draws, against world champion Garry Kasparov.[1] Kasparov accused IBM of cheating and demanded a rematch, but IBM declined and dismantled Deep Blue.[2] Kasparov had beaten a previous version of Deep Blue in 1996.
-
Deep Blue was then heavily upgraded (unofficially nicknamed "Deeper Blue")[11] and played Kasparov again in May 1997, winning the six-game rematch 3½–2½ with a win in the final game on May 11, and becoming the first computer system to defeat a reigning world champion in a match under standard chess tournament time controls.
-
The system derived its playing strength mainly from brute-force computing power.
What's in an HTTP request? - 0 views
-
These headers tell us which web server you were trying to contact. If that seems odd, bear in mind that many web sites can be "hosted" on a single server, so when the request is received, the server needs to know which web site you were attempting to access.
-
The request method is usually either "GET" or "POST". Basically if you fill in and submit a form on a web page it might generate a POST request (or it might be "GET"), whereas if you just click on a link, or activate one of your browser's "bookmarks" or "favourites", then the request method will always be "GET". Therefore, if it's "POST", we can tell that a form was definitely submitted. The contents of the form would appear here, and there would also be some "Content-" headers describing the data. Web browsers generate two kinds of "POST" data: either "multipart/form-data", which is used when uploading files to a web server, or the more common "application/x-www-form-urlencoded".
-
The "referer" header tells us which document referred you to us - in essence, if you followed a link to get to this page, it is the URL of the page you came from to get here.