# Using computers to simulate possible experiment targets is still primitive

[[Physical experiments are expensive]]. If you could simulate something perfectly, you would get two benefits: [[It is cheaper to do experiments on a simulated system than a physical system]] and [[You can explore a simulated system with algorithms]]. However, it is hard to simulate systems perfectly. [[A perfect simulation of a system would let you interrogate the system in any way you could interrogate a real system and get the same data]]

You can still simulate many systems in a useful way even if the simulation is not perfect. Word problems from math class are the most basic example of a useful but imperfect simulation: “I have five chickens, and give you two chickens. How many chickens do I have?” People have been doing this sort of simulation for a long time.

Computers can perform simulations that we cannot do on paper or in our heads. Video games simulate how the real world looks. Finite element models simulate how systems governed by partial differential equations respond to inputs. People have used computer simulations for discovery since computers were invented. [^1]

What has changed now:

1. Computers have become faster, so you can simulate more things in a reasonable amount of time.
2. We have better simulations. The biggest example is how good video games and animated movies have become at simulating what the real world looks like and, to some extent, how its physics behaves. Entire academic fields are devoted to better visual simulation techniques, which are often physics-based. Many people have worked on creating better finite element modeling techniques and other physics simulations.
3. Machine learning enables you to simulate the inputs and outputs of a system without simulating what is going on in the guts of the system.
4. Better GUIs, learnings from video games, and technology like AR and VR have made it easier to interrogate simulations.
5.
APIs and the internet have made it easier to interrogate simulations algorithmically and to interface simulations with each other and with the physical world.

### Case Studies

* Using simulations to enable robots to do reinforcement learning without doing it in the real world ([[OpenAI]])
* Simulating molecular physics to verify that a protein has been folded correctly. Note that [[It is computationally expensive to simulate protein folding directly]].
* Creating deep learning models that predict the properties of a molecule or material based on its structure.
* [[AlphaFold]]
* [[Idea2Data - Toward a New Paradigm for Drug Discovery]]
* Simulating the effects of a genetic change on a microorganism’s metabolism. ([[Zymergen]])

### Challenges

* Many things still take a very long time to simulate, even with supercomputers.
  * [[It is computationally expensive to simulate protein folding directly]]
  * Fluid dynamics
  * Ray-traced rendering
* [[Simulations and models often fail to capture important aspects of a real system]]
* Many times, building the simulation in the first place is just as hard as doing the experiment.
* Making a simulation “AI friendly” is hard - iterative methods are really good at finding and exploiting loopholes in physics engines. If they can clip through a wall or anything, they will do it.
* [[It is hard to interface different simulations]]

### Related

* [[Simulations]]

<!-- #stub -->

[Web URL for this note](http://notes.benjaminreinhardt.com/Using+computers+to+simulate+possible+experiment+targets+is+still+primitive)

[Comment on this note](http://via.hypothes.is/http://notes.benjaminreinhardt.com/Using+computers+to+simulate+possible+experiment+targets+is+still+primitive)
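The claim that finite element models simulate PDE-governed systems can be made concrete with a much simpler cousin: explicit finite differences on the 1-D heat equation. This is a minimal sketch, not a finite element solver; the diffusivity, grid size, and step count are arbitrary illustration choices.

```python
import numpy as np

# 1-D heat equation u_t = alpha * u_xx on a rod of length L,
# solved with explicit finite differences on a uniform grid.
alpha, L, nx = 0.01, 1.0, 51
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha        # respect the explicit stability limit r <= 0.5
u = np.zeros(nx)
u[nx // 2] = 1.0                # a spike of heat in the middle of the rod

for _ in range(500):
    # second difference approximates u_xx; endpoints are handled below
    u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    u[0] = u[-1] = 0.0          # ends held at temperature zero

print(f"peak temperature after diffusion: {u.max():.4f}")
```

The spike spreads out and leaks through the cold ends, so the peak temperature drops below its initial value of 1 — the kind of "response to inputs" a real finite element code computes at far larger scale.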
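Point 3 above — simulating a system's inputs and outputs without simulating its guts — is the idea behind surrogate models. A minimal sketch, where the hypothetical `expensive_simulation` stands in for a slow physics run or lab measurement and a plain polynomial fit plays the role of the learned surrogate:

```python
import numpy as np

# Hypothetical stand-in for an expensive simulation or experiment.
# In practice this would be a slow finite element run, not a one-liner.
def expensive_simulation(x):
    return np.sin(3 * x) + 0.5 * x**2

# Sample the expensive system at a handful of input points...
train_x = np.linspace(-1.0, 1.0, 25)
train_y = expensive_simulation(train_x)

# ...and fit a cheap surrogate that maps inputs to outputs directly,
# with no model of what happens inside the system.
surrogate = np.poly1d(np.polyfit(train_x, train_y, deg=9))

# The surrogate can now be queried (or optimized over) at negligible cost.
query = 0.6
print(f"surrogate: {surrogate(query):.3f}, exact: {expensive_simulation(query):.3f}")
```

The same pattern, with a neural network in place of the polynomial and molecular structures in place of `x`, is what the molecule/material property-prediction case study below describes.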
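The “AI friendly” challenge can be reproduced in a few lines. The sketch below is entirely invented for illustration: a toy engine that only collision-checks the *endpoint* of a step lets a fast enough object tunnel through a wall, and even naive random search finds and exploits the bug.

```python
import random

WALL = 5.0     # a wall at x = 5 is supposed to block the corridor
GOAL = 10.0    # reward is highest at the goal, behind the wall

def step(x, v):
    """Buggy engine: only the endpoint is collision-checked, not the path."""
    x_new = x + v
    if abs(x_new - WALL) < 0.5:   # endpoint landed in the wall -> rejected
        return x
    return x_new                  # a big enough step tunnels straight through

def reward(v):
    return -abs(step(0.0, v) - GOAL)  # closer to the goal is better

# Naive random search over velocities -- even this finds the tunneling bug.
random.seed(0)
best_v = max((random.uniform(0.0, 12.0) for _ in range(1000)), key=reward)
print(f"best velocity {best_v:.2f} ends at x = {step(0.0, best_v):.2f}")
# The search settles near v = 10, clipping through the wall to the goal.
```

Any slow, "honest" approach gets stopped at the wall; the optimizer never chooses it because the loophole scores better, which is exactly the clip-through-a-wall behavior the note describes.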