
Contents contributed and discussions participated by Abdelrahman Ogail

Abdelrahman Ogail

Defensive programming - Wikipedia, the free encyclopedia - 3 views

  • Defensive programming is a form of defensive design intended to ensure the continuing function of a piece of software in spite of unforeseeable usage of said software. The idea can be viewed as reducing or eliminating the prospect of Murphy's Law having effect. Defensive programming techniques are used especially when a piece of software could be misused mischievously or inadvertently to catastrophic effect.
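
To make the idea concrete, here is a minimal C++ sketch (not from the article): a function that validates its input instead of trusting the caller, so misuse fails loudly and predictably. The name safeDivide is invented for this illustration.

    #include <iostream>
    #include <stdexcept>

    // Defensive version of division: reject bad input up front instead of
    // letting the error propagate unpredictably through the program.
    double safeDivide(double numerator, double denominator) {
        if (denominator == 0.0) {   // guard clause against foreseeable misuse
            throw std::invalid_argument("safeDivide: denominator is zero");
        }
        return numerator / denominator;
    }

    int main() {
        try {
            std::cout << safeDivide(10.0, 0.0) << "\n";
        } catch (const std::invalid_argument& e) {
            std::cout << "rejected: " << e.what() << "\n";  // caught early
        }
    }
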
Abdelrahman Ogail

Simulated annealing - Wikipedia, the free encyclopedia - 1 views

  • Simulated annealing (SA) is a generic probabilistic metaheuristic for the global optimization problem of applied mathematics, namely locating a good approximation to the global minimum of a given function in a large search space. It is often used when the search space is discrete (e.g., all tours that visit a given set of cities). For certain problems, simulated annealing may be more effective than exhaustive enumeration — provided that the goal is merely to find an acceptably good solution in a fixed amount of time, rather than the best possible solution. The name and inspiration come from annealing in metallurgy, a technique involving heating and controlled cooling of a material to increase the size of its crystals and reduce their defects. The heat causes the atoms to become unstuck from their initial positions (a local minimum of the internal energy) and wander randomly through states of higher energy; the slow cooling gives them more chances of finding configurations with lower internal energy than the initial one. By analogy with this physical process, each step of the SA algorithm replaces the current solution by a random "nearby" solution, chosen with a probability that depends on the difference between the corresponding function values and on a global parameter T (called the temperature), that is gradually decreased during the process. The dependency is such that the current solution changes almost randomly when T is large, but increasingly "downhill" as T goes to zero. The allowance for "uphill" moves saves the method from becoming stuck at local minima—which are the bane of greedier methods. The method was independently described by S. Kirkpatrick, C. D. Gelatt and M. P. Vecchi in 1983 [1], and by V. Černý in 1985 [2]. The method is an adaptation of the Metropolis-Hastings algorithm, a Monte Carlo method to generate sample states of a thermodynamic system, invented by N. Metropolis et al. in 1953 [3].
  •  
    A natural AI approach
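
As a concrete companion to the annotation, here is a minimal C++ sketch of the loop it describes: propose a random nearby solution, always accept downhill moves, accept uphill moves with probability exp(-(f(y) - f(x)) / T), and lower T gradually. The toy objective, step size, and cooling schedule are invented for illustration, not taken from the cited papers.

    #include <cmath>
    #include <iostream>
    #include <random>

    int main() {
        std::mt19937 rng(42);
        std::uniform_real_distribution<double> step(-1.0, 1.0);
        std::uniform_real_distribution<double> unit(0.0, 1.0);

        // Toy objective with several local minima.
        auto f = [](double x) { return x * x + 10.0 * std::sin(x); };

        double x = 10.0;            // current solution
        double fx = f(x);
        for (double T = 10.0; T > 1e-3; T *= 0.999) {   // slow cooling
            double y = x + step(rng);                   // random "nearby" solution
            double fy = f(y);
            // Downhill moves are always kept; uphill moves are kept with a
            // probability that shrinks as the temperature T decreases.
            if (fy < fx || unit(rng) < std::exp(-(fy - fx) / T)) {
                x = y;
                fx = fy;
            }
        }
        std::cout << "x ~= " << x << ", f(x) ~= " << fx << "\n";
    }

The occasional accepted uphill move is exactly what lets the search escape the local minima that trap plain greedy descent.
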
Abdelrahman Ogail

Production system - Wikipedia, the free encyclopedia - 0 views

  • A production system (or production rule system) is a computer program typically used to provide some form of artificial intelligence, which consists primarily of a set of rules about behavior. These rules, termed productions, are a basic representation found useful in AI planning, expert systems and action selection. A production system provides the mechanism necessary to execute productions in order to achieve some goal for the system. Productions consist of two parts: a sensory precondition (or "IF" statement) and an action (or "THEN"). If a production's precondition matches the current state of the world, then the production is said to be triggered. If a production's action is executed, it is said to have fired. A production system also contains a database, sometimes called working memory, which maintains data about current state or knowledge, and a rule interpreter. The rule interpreter must provide a mechanism for prioritizing productions when more than one is triggered.
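
A minimal C++ sketch of the trigger/fire cycle described above, with an invented fact base and a single made-up rule; a real interpreter would also need a conflict-resolution strategy for prioritizing productions.

    #include <functional>
    #include <iostream>
    #include <set>
    #include <string>
    #include <vector>

    // A production pairs a sensory precondition ("IF") with an action ("THEN").
    struct Production {
        std::function<bool(const std::set<std::string>&)> condition;
        std::function<void(std::set<std::string>&)> action;
    };

    int main() {
        std::set<std::string> workingMemory = {"hungry"};   // current state
        std::vector<Production> rules = {
            {[](const std::set<std::string>& m) { return m.count("hungry") > 0; },
             [](std::set<std::string>& m) { m.erase("hungry"); m.insert("eating"); }},
        };

        // Naive rule interpreter: fire the first triggered production.
        for (const auto& rule : rules) {
            if (rule.condition(workingMemory)) {   // triggered
                rule.action(workingMemory);        // fired
                break;
            }
        }
        for (const auto& fact : workingMemory) std::cout << fact << "\n";
    }
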
Abdelrahman Ogail

Flocking (behavior) - Wikipedia, the free encyclopedia - 0 views

  • Flocking behavior is the behavior exhibited when a group of birds, called a flock, are foraging or in flight. There are parallels with the shoaling behavior of fish, or the swarming behavior of insects. Computer simulations and mathematical models which have been developed to emulate the flocking behaviors of birds can generally be applied also to the "flocking" behavior of other species. As a result, the term "flocking" is sometimes applied, in computer science, to species other than birds. This article is about the modelling of flocking behavior. From the perspective of the mathematical modeller, "flocking" is the collective motion of a large number of self-propelled entities and is a collective animal behavior exhibited by many living beings such as birds, fish, bacteria, and insects.[1] It is considered an emergent behaviour arising from simple rules that are followed by individuals and does not involve any central coordination. Flocking behavior was first simulated on a computer in 1986 by Craig Reynolds with his simulation program, Boids. This program simulates simple agents (boids) that are allowed to move according to a set of basic rules. The result is akin to a flock of birds, a school of fish, or a swarm of insects.
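
Since the simulations mentioned above reduce to a few local rules, here is a hedged C++ sketch of how one boid's steering might be computed from its neighbors using Reynolds' three classic rules (separation, alignment, cohesion). The rule weights and data layout are illustrative assumptions, not Reynolds' original code.

    #include <iostream>
    #include <vector>

    struct Vec2 { double x, y; };
    struct Boid { Vec2 pos, vel; };

    Vec2 steer(const Boid& self, const std::vector<Boid>& neighbors) {
        if (neighbors.empty()) return Vec2{0, 0};
        Vec2 sep{0, 0}, avgVel{0, 0}, center{0, 0};
        for (const Boid& b : neighbors) {
            sep.x += self.pos.x - b.pos.x;   // separation: move away from others
            sep.y += self.pos.y - b.pos.y;
            avgVel.x += b.vel.x;             // alignment: match neighbors' velocity
            avgVel.y += b.vel.y;
            center.x += b.pos.x;             // cohesion: move toward the center
            center.y += b.pos.y;
        }
        double n = static_cast<double>(neighbors.size());
        // Blend the three rules with arbitrary illustrative weights.
        return Vec2{0.5 * sep.x / n + 0.3 * (avgVel.x / n - self.vel.x) + 0.2 * (center.x / n - self.pos.x),
                    0.5 * sep.y / n + 0.3 * (avgVel.y / n - self.vel.y) + 0.2 * (center.y / n - self.pos.y)};
    }

    int main() {
        std::vector<Boid> flock = {{{1, 1}, {0.1, 0.0}}, {{2, 0}, {0.0, 0.1}}};
        Boid me{{0, 0}, {0.1, 0.1}};
        Vec2 s = steer(me, flock);
        std::cout << "steering: (" << s.x << ", " << s.y << ")\n";
    }

No boid follows a leader; the flock-level motion emerges from every agent applying these local rules at once.
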
Abdelrahman Ogail

Clockwork universe theory - Wikipedia, the free encyclopedia - 1 views

  • The Clockwork Universe Theory is a theory, established by Isaac Newton, as to the origins of the universe. A "clockwork universe" can be thought of as being a clock wound up by God and ticking along, as a perfect machine, with its gears governed by the laws of physics. What sets this theory apart from others is the idea that God's only contribution to the universe was to set everything in motion, and from there the laws of science took hold and have governed every sequence of events since that time. This idea was very popular during the Enlightenment, when scientists realized that Newton's laws of motion, including the law of universal gravitation, could explain the behavior of the solar system. A notable exclusion from this theory though is free will, since all things have already been set in motion and are just parts of a predictable machine. Newton feared that this notion of "everything is predetermined" would lead to atheism. This theory was undermined by the second law of thermodynamics ( the total entropy of any isolated thermodynamic system tends to increase over time, approaching a maximum value) and quantum physics with its unpredictable random behavior.
    • Abdelrahman Ogail
       
      "God's only contribution to the universe was to set everything in motion, and from there the laws of science took hold and have governed every sequence of events since that time" <-- ???
Abdelrahman Ogail

Artificial life - Wikipedia, the free encyclopedia - 2 views

  • Artificial life (commonly Alife or alife) is a field of study and an associated art form which examine systems related to life, its processes, and its evolution through simulations using computer models, robotics, and biochemistry.[1] There are three main kinds of alife[2], named for their approaches: soft[3], from software; hard[4], from hardware; and wet, from biochemistry. Artificial life imitates traditional biology by trying to recreate biological phenomena.[5] The term "artificial life" is often used to specifically refer to soft alife.
  • The modeling philosophy of alife strongly differs from traditional modeling, by studying not only “life-as-we-know-it”, but also “life-as-it-might-be” [7].
Abdelrahman Ogail

Steady state - Wikipedia, the free encyclopedia - 0 views

  • A system in a steady state has numerous properties that are unchanging in time. The concept of steady state has relevance in many fields, in particular thermodynamics. Steady state is a more general situation than dynamic equilibrium. If a system is in steady state, then the recently observed behavior of the system will continue into the future. In stochastic systems, the probabilities that various different states will be repeated will remain constant.
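
As a worked example of the last sentence (state probabilities becoming constant in a stochastic system), here is a small C++ sketch with an invented two-state Markov chain: iterating the transition matrix drives the distribution to a steady state of roughly (5/6, 1/6), after which further steps change nothing.

    #include <cstdio>

    int main() {
        // P[i][j] = probability of moving from state i to state j.
        double P[2][2] = {{0.9, 0.1}, {0.5, 0.5}};
        double p[2] = {1.0, 0.0};   // start entirely in state 0

        for (int step = 0; step < 1000; ++step) {
            double next[2] = {p[0] * P[0][0] + p[1] * P[1][0],
                              p[0] * P[0][1] + p[1] * P[1][1]};
            p[0] = next[0];
            p[1] = next[1];
        }
        // At steady state the distribution no longer changes over time.
        std::printf("steady state: (%.4f, %.4f)\n", p[0], p[1]);
    }
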
Abdelrahman Ogail

Belief-Desire-Intention model - Wikipedia, the free encyclopedia - 1 views

  • The Belief-Desire-Intention (BDI) model of human practical reasoning was developed by Michael Bratman as a way of explaining future-directed intention. BDI is fundamentally reliant on folk psychology (the 'theory theory'), which is the notion that our mental models of the world are theories.
Abdelrahman Ogail

Belief-Desire-Intention software model - Wikipedia, the free encyclopedia - 0 views

  • The Belief-Desire-Intention (BDI) software model (usually referred to simply, but ambiguously, as BDI) is a software model developed for programming intelligent agents. Superficially characterized by the implementation of an agent's beliefs, desires and intentions, it actually uses these concepts to solve a particular problem in agent programming. In essence, it provides a mechanism for separating the activity of selecting a plan (from a plan library) from the execution of currently active plans. Consequently, BDI agents are able to balance the time spent on deliberating about plans (choosing what to do) and executing those plans (doing it). A third activity, creating the plans in the first place (planning), is not within the scope of the model, and is left to the system designer and programmer.
    • Abdelrahman Ogail
       
      Stress on the point that BDI agents spend more time on choosing what to do than on how to execute it.
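
A minimal C++ sketch of the separation stressed above: selecting a plan from a pre-written plan library (deliberation) is one step, executing the chosen plan is another, and authoring the plans is left to the programmer. The Plan structure and goal names are invented for illustration, not a real BDI engine.

    #include <functional>
    #include <iostream>
    #include <string>
    #include <vector>

    struct Plan {
        std::string achieves;        // the intention this plan can satisfy
        std::function<void()> body;  // execution, kept separate from selection
    };

    int main() {
        // Plan library: written ahead of time, not planned at runtime.
        std::vector<Plan> planLibrary = {
            {"be_fed",  [] { std::cout << "executing: cook and eat\n"; }},
            {"be_rich", [] { std::cout << "executing: go to work\n"; }},
        };

        std::string intention = "be_fed";   // a desire the agent committed to

        // Deliberation: choose what to do by picking a matching plan...
        for (const Plan& plan : planLibrary) {
            if (plan.achieves == intention) {
                plan.body();                // ...then, separately, do it.
                break;
            }
        }
    }
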
Abdelrahman Ogail

Genetic programming - Wikipedia, the free encyclopedia - 0 views

  • In artificial intelligence, genetic programming (GP) is an evolutionary algorithm-based methodology inspired by biological evolution to find computer programs that perform a user-defined task. It is a specialization of genetic algorithms (GA) where each individual is a computer program. Therefore it is a machine learning technique used to optimize a population of computer programs according to a fitness landscape determined by a program's ability to perform a given computational task.
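
To ground the phrase "each individual is a computer program", here is a toy C++ sketch of a linear-GP variant: individuals are short straight-line op sequences (rather than the expression trees of classical GP), evolved toward the target function x*x + x. The op set, fitness samples, and operator choices are simplifying assumptions made to keep the sketch small.

    #include <algorithm>
    #include <cmath>
    #include <iostream>
    #include <random>
    #include <vector>

    enum Op { ADD_X, MUL_X, ADD_1, SUB_1 };
    using Program = std::vector<Op>;   // an individual is a program

    std::mt19937 rng(1);

    double run(const Program& p, double x) {
        double v = x;                  // programs transform the input value
        for (Op op : p) {
            switch (op) {
                case ADD_X: v += x; break;
                case MUL_X: v *= x; break;
                case ADD_1: v += 1; break;
                case SUB_1: v -= 1; break;
            }
        }
        return v;
    }

    // Fitness: total error against the target on sample points (lower is better).
    double error(const Program& p) {
        double e = 0;
        for (double x = -2; x <= 2; x += 0.5) e += std::fabs(run(p, x) - (x * x + x));
        return e;
    }

    int main() {
        std::uniform_int_distribution<int> anyOp(0, 3), anyPos(0, 3);
        std::vector<Program> pop(50, Program(4));
        for (auto& p : pop) for (Op& op : p) op = static_cast<Op>(anyOp(rng));

        std::uniform_int_distribution<int> pick(0, 49);
        for (int gen = 0; gen < 200; ++gen) {
            std::vector<Program> next;
            // Tournament selection: the better of two random individuals.
            auto parent = [&]() -> Program {
                const Program& a = pop[pick(rng)];
                const Program& b = pop[pick(rng)];
                return error(a) < error(b) ? a : b;
            };
            while (next.size() < pop.size()) {
                Program child = parent(), mate = parent();
                int cut = anyPos(rng);                             // one-point crossover
                std::copy(mate.begin() + cut, mate.end(), child.begin() + cut);
                child[anyPos(rng)] = static_cast<Op>(anyOp(rng));  // point mutation
                next.push_back(child);
            }
            pop = next;
        }
        auto best = *std::min_element(pop.begin(), pop.end(),
            [](const Program& a, const Program& b) { return error(a) < error(b); });
        std::cout << "best error: " << error(best) << "\n";
    }

One perfect solution here is the sequence MUL_X, ADD_X, ADD_1, SUB_1, which computes (x * x) + x + 1 - 1.
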
Abdelrahman Ogail

Quis custodiet ipsos custodes? - Wikipedia, the free encyclopedia - 0 views

  • The question is put to Socrates, "Who will guard the guardians?" or, "Who will protect us against the protectors?" Plato's answer to this is that they will guard themselves against themselves. We must tell the guardians a "noble lie."[1] The noble lie will inform them that they are better than those they serve and it is therefore their responsibility to guard and protect those lesser than themselves. We will instill in them a distaste for power or privilege; they will rule because they believe it right, not because they desire it.
Abdelrahman Ogail

Mutation testing - Wikipedia, the free encyclopedia - 1 views

  • Mutation testing (or Mutation analysis) is a method of software testing, which involves modifying program's source code in small ways.[1] These, so-called mutations, are based on well-defined mutation operators that either mimic typical programming errors (such as using the wrong operator or variable name) or force the creation of valuable tests (such as driving each expression to zero). The purpose is to help the tester develop effective tests or locate weaknesses in the test data used for the program or in sections of the code that are seldom or never accessed during execution.
  • For example, consider the following C++ code fragment: if (a && b) c = 1; else c = 0; The condition mutation operator would replace '&&' with '||' and produce the following mutant: if (a || b) c = 1; else c = 0;
  • Many mutation operators can produce equivalent mutants. For example, consider the following code fragment: int index=0; while (...) { . . .; index++; if (index==10) break; } Boolean relation mutation operator will replace "==" with ">=" and produce the following mutant: int index=0; while (...) { . . .; index++; if (index>=10) break; }
Abdelrahman Ogail

Hill climbing - Wikipedia, the free encyclopedia - 0 views

  • In computer science, hill climbing is a mathematical optimization technique which belongs to the family of local search. It is relatively simple to implement, making it a popular first choice. Although more advanced algorithms may give better results, in some situations hill climbing works just as well. Hill climbing can be used to solve problems that have many solutions, some of which are better than others. It starts with a random (potentially poor) solution, and iteratively makes small changes to the solution, each time improving it a little. When the algorithm cannot see any improvement anymore, it terminates. Ideally, at that point the current solution is close to optimal, but it is not guaranteed that hill climbing will ever come close to the optimal solution. For example, hill climbing can be applied to the traveling salesman problem. It is easy to find a solution that visits all the cities but will be very poor compared to the optimal solution. The algorithm starts with such a solution and makes small improvements to it, such as switching the order in which two cities are visited. Eventually, a much better route is obtained. Hill climbing is used widely in artificial intelligence, for reaching a goal state from a starting node. Choice of next node and starting node can be varied to give a list of related algorithms.
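
A minimal C++ sketch of the loop described above: start from some solution, try a small random change, keep it only when it improves the objective, and stop once improvements dry up. The toy objective and the stopping rule are invented for illustration; note that the same code can stall at a local optimum, which is exactly the weakness the annotation mentions.

    #include <iostream>
    #include <random>

    int main() {
        std::mt19937 rng(7);
        std::uniform_real_distribution<double> step(-0.1, 0.1);

        // Toy objective to maximize; its single peak is at x = 3.
        auto f = [](double x) { return -(x - 3) * (x - 3); };

        double x = 0.0;             // (potentially poor) starting solution
        int stale = 0;
        while (stale < 1000) {      // stop after many failed improvements
            double y = x + step(rng);
            if (f(y) > f(x)) {      // keep only uphill moves
                x = y;
                stale = 0;
            } else {
                ++stale;
            }
        }
        std::cout << "x ~= " << x << " (true optimum: 3)\n";
    }
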
Abdelrahman Ogail

Stochastic optimization - Wikipedia, the free encyclopedia - 0 views

  • Stochastic optimization (SO) methods are optimization algorithms which incorporate probabilistic (random) elements, either in the problem data (the objective function, the constraints, etc.), or in the algorithm itself (through random parameter values, random choices, etc.), or in both [1]. The concept contrasts with the deterministic optimization methods, where the values of the objective function are assumed to be exact, and the computation is completely determined by the values sampled so far.
  •  
    In artificial intelligence, genetic algorithms belong to the class of stochastic search methods explained below.
Abdelrahman Ogail

Genetic algorithm - Wikipedia, the free encyclopedia - 0 views

  • A genetic algorithm (GA) is a search technique used in computing to find exact or approximate solutions to optimization and search problems. Genetic algorithms are categorized as global search heuristics. Genetic algorithms are a particular class of evolutionary algorithms (EA) that use techniques inspired by evolutionary biology such as inheritance, mutation, selection, and crossover (also called recombination).
  • A typical genetic algorithm requires: a genetic representation of the solution domain, a fitness function to evaluate the solution domain.
  •  
    GAs are primarily used for learning in AI.
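
A minimal C++ sketch of the two requirements listed above, on the classic OneMax toy problem: the genetic representation is a bit string, the fitness function counts ones, and inheritance, selection, crossover, and mutation drive the population toward the all-ones string. Population size, tournament selection, and the single-bit mutation are illustrative choices.

    #include <algorithm>
    #include <iostream>
    #include <numeric>
    #include <random>
    #include <vector>

    using Genome = std::vector<int>;   // genetic representation: a bit string

    int fitness(const Genome& g) {     // fitness: the number of 1 bits
        return std::accumulate(g.begin(), g.end(), 0);
    }

    int main() {
        const int len = 20, popSize = 30;
        std::mt19937 rng(3);
        std::uniform_int_distribution<int> bit(0, 1), pos(0, len - 1), idx(0, popSize - 1);

        std::vector<Genome> pop(popSize, Genome(len));
        for (auto& g : pop) for (int& b : g) b = bit(rng);

        for (int gen = 0; gen < 100; ++gen) {
            std::vector<Genome> next;
            // Tournament selection: the fitter of two random individuals.
            auto select = [&]() -> const Genome& {
                const Genome& a = pop[idx(rng)];
                const Genome& b = pop[idx(rng)];
                return fitness(a) > fitness(b) ? a : b;
            };
            while ((int)next.size() < popSize) {
                Genome child = select();                 // inheritance
                const Genome& mate = select();
                int cut = pos(rng);                      // one-point crossover
                std::copy(mate.begin() + cut, mate.end(), child.begin() + cut);
                child[pos(rng)] ^= 1;                    // mutation: flip one bit
                next.push_back(child);
            }
            pop = next;
        }
        auto best = *std::max_element(pop.begin(), pop.end(),
            [](const Genome& a, const Genome& b) { return fitness(a) < fitness(b); });
        std::cout << "best fitness: " << fitness(best) << " / " << len << "\n";
    }
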
Abdelrahman Ogail

SOAP - Wikipedia, the free encyclopedia - 0 views

  • SOAP, originally defined as Simple Object Access Protocol, is a protocol specification for exchanging structured information in the implementation of Web Services in computer networks. It relies on Extensible Markup Language (XML) as its message format, and usually relies on other Application Layer protocols (most notably Remote Procedure Call (RPC) and HTTP) for message negotiation and transmission. SOAP can form the foundation layer of a web services protocol stack, providing a basic messaging framework upon which web services can be built.
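
For illustration, here is a minimal SOAP 1.1 request envelope showing the XML message format the annotation describes; the GetPrice operation and the example.com namespace are invented for this sketch, and a real envelope would typically travel inside an HTTP POST.

    <?xml version="1.0"?>
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Header/>
      <soap:Body>
        <m:GetPrice xmlns:m="http://www.example.com/prices">
          <m:Item>apples</m:Item>
        </m:GetPrice>
      </soap:Body>
    </soap:Envelope>
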
Abdelrahman Ogail

Common Gateway Interface - Wikipedia, the free encyclopedia - 0 views

  • The Common Gateway Interface (CGI) is a standard protocol for interfacing external application software with an information server, commonly a web server. The task of such an information server is to respond to requests (in the case of web servers, requests from client web browsers) by returning output. Each time a request is received, the server analyzes what the request asks for, and returns the appropriate output. The two basic methods for the server to do this are the following: If the request identifies a file stored on disk, then return the contents of that file. If the request identifies an executable command and possibly arguments, then run the command and return its output. CGI defines a standard way of doing the second. It defines how information about the server and the request is passed to the command in the form of arguments and environment variables, and how the command can pass back extra information about the output (such as the type) in the form of headers.
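
A minimal C++ CGI program sketching the second method: the server sets environment variables such as QUERY_STRING, runs the executable, and relays whatever the program writes to standard output, which must begin with headers followed by a blank line.

    #include <cstdlib>
    #include <iostream>

    int main() {
        // Request data arrives via environment variables set by the server.
        const char* query = std::getenv("QUERY_STRING");

        // Headers first, then a blank line, then the response body.
        std::cout << "Content-Type: text/plain\r\n\r\n";
        std::cout << "Hello from CGI\n";
        std::cout << "Query string: " << (query ? query : "(none)") << "\n";
    }
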
Abdelrahman Ogail

Voice over Internet Protocol - Wikipedia, the free encyclopedia - 0 views

  • Voice over Internet Protocol (VoIP) is a general term for a family of transmission technologies for delivery of voice communications over IP networks such as the Internet or other packet-switched network
  • Internet telephony refers to communications services—voice, facsimile, and/or voice-messaging applications—that are transported via the Internet, rather than the public switched telephone network (PSTN). The basic steps involved in originating an Internet telephone call are conversion of the analog voice signal to digital format and compression/translation of the signal into Internet protocol (IP) packets for transmission over the Internet; the process is reversed at the receiving end.[1]
Abdelrahman Ogail

Theory of mind - Wikipedia, the free encyclopedia - 0 views

  • Theory of mind is the ability to attribute mental states—beliefs, intents, desires, pretending, knowledge, etc.—to oneself and others and to understand that others have beliefs, desires and intentions that are different from one's own.[1]
  • One of the most important milestones in theory of mind development is gaining the ability to attribute false belief: that is, to recognize that others can have beliefs about the world that are wrong. To do this, it is suggested, one must understand how knowledge is formed, that people’s beliefs are based on their knowledge, that mental states can differ from reality, and that people’s behavior can be predicted by their mental states. Numerous versions of the false-belief task have been developed, based on the initial task done by Wimmer and Perner (1983).
  • In the most common version of the false-belief task (often called the ‘Sally-Anne’ task), children are told or shown a story involving two characters. For example, in one version, the child is shown two dolls, Sally and Anne, playing with a marble. The dolls put away the marble in a box, and then Sally leaves. Anne takes the marble out and plays with it again, and after she is done, puts it away in a different box. Sally returns and the child is then asked where Sally will look for the marble. The child passes the task if she answers that Sally will look in the first box where she put the marble; the child fails the task if she answers that Sally will look in the second box, where the child knows the marble is hidden, even though Sally cannot know, since she did not see it hidden there. In order to pass the task, the child must be able to understand that another’s mental representation of the situation is different from their own, and the child must be able to predict behavior based on that understanding. The results of research using false-belief tasks have been fairly consistent: most normally-developing children are unable to pass the tasks until around the age of three or four.
    • Abdelrahman Ogail
       
      Try this test on your little brother or sister if he or she is under three years old!
Abdelrahman Ogail

ELIZA effect - Wikipedia, the free encyclopedia - 0 views

  • The effect is named for the 1966 chatterbot ELIZA, developed by MIT computer scientist Joseph Weizenbaum
    • Abdelrahman Ogail
       
      ELIZA was developed by a computer scientist from MIT!