
Computer Science Knowledge Sharing: Group items tagged Probability


Abdelrahman Ogail

Simulated annealing - Wikipedia, the free encyclopedia - 1 view

  • Simulated annealing (SA) is a generic probabilistic metaheuristic for the global optimization problem of applied mathematics, namely locating a good approximation to the global minimum of a given function in a large search space. It is often used when the search space is discrete (e.g., all tours that visit a given set of cities). For certain problems, simulated annealing may be more effective than exhaustive enumeration — provided that the goal is merely to find an acceptably good solution in a fixed amount of time, rather than the best possible solution. The name and inspiration come from annealing in metallurgy, a technique involving heating and controlled cooling of a material to increase the size of its crystals and reduce their defects. The heat causes the atoms to become unstuck from their initial positions (a local minimum of the internal energy) and wander randomly through states of higher energy; the slow cooling gives them more chances of finding configurations with lower internal energy than the initial one. By analogy with this physical process, each step of the SA algorithm replaces the current solution by a random "nearby" solution, chosen with a probability that depends on the difference between the corresponding function values and on a global parameter T (called the temperature), that is gradually decreased during the process. The dependency is such that the current solution changes almost randomly when T is large, but increasingly "downhill" as T goes to zero. The allowance for "uphill" moves saves the method from becoming stuck at local minima—which are the bane of greedier methods. The method was independently described by S. Kirkpatrick, C. D. Gelatt and M. P. Vecchi in 1983 [1], and by V. Černý in 1985 [2]. The method is an adaptation of the Metropolis-Hastings algorithm, a Monte Carlo method to generate sample states of a thermodynamic system, invented by N. Metropolis et al. in 1953 [3].
  •  
    A natural AI approach
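
A bare-bones sketch of the loop described above, minimizing a toy one-variable function; the objective f(x) = x*x, the neighbor move, the cooling rate, and the iteration count are all assumptions made for illustration:

    #include <cmath>
    #include <cstdio>
    #include <cstdlib>
    #include <ctime>

    // Toy objective to minimize; the global minimum is at x = 0.
    double f(double x) { return x * x; }

    int main() {
        std::srand((unsigned)std::time(nullptr));
        // Uniform random number in [0, 1).
        auto rnd = [] { return std::rand() / (RAND_MAX + 1.0); };

        double x = 10.0;   // initial solution
        double T = 1.0;    // initial temperature
        for (int step = 0; step < 10000; ++step) {
            double candidate = x + (rnd() - 0.5);  // random "nearby" solution
            double delta = f(candidate) - f(x);
            // Metropolis rule: always accept downhill moves; accept uphill
            // moves with probability exp(-delta / T).
            if (delta < 0 || rnd() < std::exp(-delta / T))
                x = candidate;
            T *= 0.999;  // gradually decrease the temperature
        }
        std::printf("x ~ %f\n", x);  // ends near the global minimum at 0
    }

The acceptance test is the heart of the method: while T is large the search wanders almost freely, and as T approaches zero it becomes increasingly greedy, which matches the behavior the article describes.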
Abdelrahman Ogail

Steady state - Wikipedia, the free encyclopedia - 0 views

  • A system in a steady state has numerous properties that are unchanging in time. The concept of steady state has relevance in many fields, in particular thermodynamics. Steady state is a more general situation than dynamic equilibrium. If a system is in steady state, then the recently observed behavior of the system will continue into the future. In stochastic systems, the probabilities that various different states will be repeated will remain constant.
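
The last sentence describes a stationary distribution. A tiny sketch (the two-state Markov chain and its transition probabilities are assumptions chosen for illustration) shows the state probabilities settling to values that no longer change:

    #include <cstdio>

    // Iterate a fixed two-state Markov chain until the state probabilities
    // reach their steady state (illustrative, not from the article).
    int main() {
        // Assumed transitions: P(stay in A)=0.9, P(A->B)=0.1,
        //                      P(B->A)=0.5, P(stay in B)=0.5
        double pA = 1.0, pB = 0.0;  // start with certainty in state A
        for (int t = 0; t < 100; ++t) {
            double nA = 0.9 * pA + 0.5 * pB;
            double nB = 0.1 * pA + 0.5 * pB;
            pA = nA; pB = nB;
        }
        std::printf("%f %f\n", pA, pB);  // ~0.833333 0.166667: the steady state
    }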
Abdelrahman Ogail

Stochastic optimization - Wikipedia, the free encyclopedia - 0 views

  • Stochastic optimization (SO) methods are optimization algorithms which incorporate probabilistic (random) elements, either in the problem data (the objective function, the constraints, etc.), or in the algorithm itself (through random parameter values, random choices, etc.), or in both [1]. The concept contrasts with the deterministic optimization methods, where the values of the objective function are assumed to be exact, and the computation is completely determined by the values sampled so far.
  •  
    In Artificial Intelligence, Genetic Algorithms belong to the class of stochastic search methods described above.
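
A minimal sketch of the simplest stochastic method, pure random search; the objective and sampling range are assumptions for illustration. Here the randomness lives in the algorithm rather than in the problem data:

    #include <cmath>
    #include <cstdio>
    #include <cstdlib>
    #include <ctime>

    // Pure random search: sample the objective at random points and keep
    // the best, so two runs may return slightly different answers.
    double objective(double x) { return std::pow(x - 2.0, 2); }  // minimum at x = 2

    int main() {
        std::srand((unsigned)std::time(nullptr));
        double bestX = 0.0, bestF = 1e18;
        for (int i = 0; i < 100000; ++i) {
            double x = -10.0 + 20.0 * std::rand() / RAND_MAX;  // sample in [-10, 10]
            if (objective(x) < bestF) { bestF = objective(x); bestX = x; }
        }
        std::printf("best x ~ %f\n", bestX);  // close to 2
    }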
Abdelrahman Ogail

Common Mistakes in Online and Real-time Contests - 0 views

  • Dynamic programming problems are to be solved with tabular methods
    • Ahmed Mansour
       
      Dynamic programming, like the divide-and-conquer method, solves problems by combining the solutions to subproblems. ("Programming" in this context refers to a tabular method, not to writing computer code.) In other words: we split the big problem into several smaller problems, solve those smaller problems, and then combine their solutions to obtain a solution to the big problem. See the Introduction to Algorithms book, chapter 15.
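
A minimal bottom-up (tabular) sketch of that idea, using the rod-cutting problem that opens chapter 15 of Introduction to Algorithms; the price table is the book's sample data:

    #include <algorithm>
    #include <iostream>
    #include <vector>

    // Bottom-up rod cutting: best[j] holds the maximum revenue obtainable
    // from a rod of length j, filled in from the smallest subproblem up.
    int cutRod(const std::vector<int>& price, int n) {
        std::vector<int> best(n + 1, 0);
        for (int j = 1; j <= n; ++j)       // solve subproblems smallest-first
            for (int i = 1; i <= j; ++i)   // try every length for the first piece
                best[j] = std::max(best[j], price[i] + best[j - i]);
        return best[n];
    }

    int main() {
        // price[i] = price of a piece of length i (price[0] unused)
        std::vector<int> price = {0, 1, 5, 8, 9, 10, 17, 17, 20};
        std::cout << cutRod(price, 8) << "\n";  // prints 22 (cut into 2 + 6)
    }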
  • breadth-first search
    • Ahmed Mansour
       
      In graph theory, breadth-first search (BFS) is a graph search algorithm that begins at the root node and explores all the neighboring nodes. Then for each of those nearest nodes, it explores their unexplored neighbor nodes, and so on, until it finds the goal. In plain terms: if I have a tree made up of several levels, then to search for a particular node I take the tree from its root (level 0) and walk it level by level, going down to level 1 and so on, until I find the node I'm looking for. See this tutorial, which shows how the BFS algorithm works: http://www.personal.kent.edu/~rmuhamma/Algorithms/MyAlgorithms/GraphAlgor/breadthSearch.htm
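
A minimal BFS sketch matching the description above; the adjacency-list representation and the distance-array bookkeeping are implementation choices, not from the article:

    #include <iostream>
    #include <queue>
    #include <vector>

    // BFS from `source`: explores the graph level by level and returns the
    // number of edges to `goal`, or -1 if the goal is unreachable.
    int bfs(const std::vector<std::vector<int>>& adj, int source, int goal) {
        std::vector<int> dist(adj.size(), -1);  // -1 marks "not visited"
        std::queue<int> q;
        dist[source] = 0;
        q.push(source);
        while (!q.empty()) {
            int u = q.front(); q.pop();
            if (u == goal) return dist[u];
            for (int v : adj[u])
                if (dist[v] == -1) {          // first time we reach v
                    dist[v] = dist[u] + 1;    // one level deeper than u
                    q.push(v);
                }
        }
        return -1;
    }

    int main() {
        // Edges 0-1, 0-2, 1-3: node 3 is two levels below the root 0
        std::vector<std::vector<int>> adj = {{1, 2}, {0, 3}, {0}, {1}};
        std::cout << bfs(adj, 0, 3) << "\n";  // prints 2
    }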
  • Memorize the value of pi You should always try to remember the value of pi as far as possible, 3.1415926535897932384626433832795, certainly the part in italics. The judges may not give the value in the question, and if you use values like 22/7 or 3.1416 or 3.142857, then it is very likely that some of the critical judge inputs will cause you to get the wrong answer. You can also get the value of pi as a compiler-defined constant or from the following code: Pi=2*acos(0)
    • Islam TeCNo
       
      Haha... first time I've come across this topic, and first time I learn that Pi = 2*acos(0).
    • Abdelrahman Ogail
       
      Thanks Islam for the info, really useful
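
A quick standalone check of the acos trick quoted above: acos(0) is pi/2, so doubling it recovers pi at full double precision, matching the digits the article asks you to memorize:

    #include <cmath>
    #include <cstdio>

    int main() {
        const double PI = 2.0 * std::acos(0.0);  // acos(0) = pi/2
        std::printf("%.17g\n", PI);              // 3.1415926535897931
    }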
  • You cannot always check the equality of floating point numbers with the == operator in C/C++. Logically their values may be the same, but due to precision limits and rounding errors they may differ by some small amount and may be incorrectly deemed unequal by your program
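
The usual contest workaround, sketched below, is to compare against a small tolerance instead of using ==; the epsilon value 1e-9 is a conventional choice, not from the article:

    #include <cmath>
    #include <cstdio>

    const double EPS = 1e-9;

    // Treat two doubles as equal when they differ by less than EPS.
    bool almostEqual(double a, double b) {
        return std::fabs(a - b) < EPS;
    }

    int main() {
        double x = 0.1 + 0.2;                      // actually 0.30000000000000004
        std::printf("%d\n", x == 0.3);             // 0: exact comparison fails
        std::printf("%d\n", almostEqual(x, 0.3));  // 1: tolerance comparison works
    }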
  • #define swap(xxx, yyy) (xxx) ^= (yyy) ^= (xxx) ^= (yyy)
    • Islam TeCNo
       
      I remember someone told me that it's impossible to do swapping using macros :D ... but I think it's possible (see the note below).
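
A note of caution on the macro quoted above: chaining ^= like that relies on unsequenced side effects (undefined behavior in C and in C++ before C++17), and it zeroes the value whenever both arguments name the same object. A safer macro sketch (restricting it to int is an assumption for the example):

    #include <cstdio>
    #include <utility>

    // Swap via a temporary, wrapped in do/while(0) so the macro behaves
    // like a single statement after an if/else.
    #define SWAP(a, b) do { int t = (a); (a) = (b); (b) = t; } while (0)

    int main() {
        int x = 3, y = 7;
        SWAP(x, y);
        std::printf("%d %d\n", x, y);  // 7 3
        std::swap(x, y);               // the idiomatic C++ alternative
        std::printf("%d %d\n", x, y);  // 3 7
    }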
  • But recursion should not be discounted completely, as some problems are very easy to solve recursively (DFS, backtracking)
    • Islam TeCNo
       
      Some problems are much easier when using recursion
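
A minimal recursive DFS sketch in that spirit (illustrative, not from the article); the call stack tracks the search path implicitly, which is what makes the recursive version so short:

    #include <cstdio>
    #include <vector>

    // Recursive DFS: marks every node reachable from u in an
    // adjacency-list graph.
    void dfs(const std::vector<std::vector<int>>& adj,
             std::vector<bool>& visited, int u) {
        visited[u] = true;
        for (int v : adj[u])
            if (!visited[v])
                dfs(adj, visited, v);  // recursion replaces an explicit stack
    }

    int main() {
        // Edges 0-1, 1-2; node 3 is isolated
        std::vector<std::vector<int>> adj = {{1}, {0, 2}, {1}, {}};
        std::vector<bool> visited(adj.size(), false);
        dfs(adj, visited, 0);
        std::printf("%d\n", (int)visited[3]);  // 0: node 3 is unreachable from 0
    }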
  • Having a good understanding of probability is vital to being a good programmer
  •  
    For beginner ACMers; hoping it's useful!
  •  
    In this article the author discusses the common problems that face teams in ACM contests, and how to avoid them.