A field that aims to understand the principles and mechanisms underlying the function of the nervous system through the use of computational models and techniques. It involves the integration of neuroscience, computer science, math, and physics to develop models that simulate and explain interactions within neural circuits and systems.
In short: developing models to interpret data.
DM A Unicorn (about) DA ICE ATM
Develop Model/ Modeling Neural Systems: Developing computational models that simulate the behavior of neural circuits and systems.
Analysis of Neural Data: Analyzing data obtained with experimental neuroscience techniques by applying mathematical and computational tools
Understanding Information Processing: Investigate how neural circuits process and represent information
Develop Algorithms: Creating algorithms inspired by neural processes to solve complex computational problems (e.g., artificial neural networks and machine learning techniques drew inspiration from biological neural networks)
Integration of Experimental and Computational Approaches: Bridging the gap between experimental neuroscience and computational modeling by validating and refining models based on experimental observations. Conversely, models can guide experimental design and help generate hypotheses for further testing. Having a model allows you to extrapolate beyond the data you have.
Application in Technology and Medicine: Applying insights from computational systems neuroscience to develop technologies and treatments (e.g., the design of brain-machine interfaces)
1. Dynamical Systems
2. Network Theory
3. Information Theory
4. Statistical Modeling
How things evolve in time (e.g., how does a neuron evolve with time? Its membrane potential goes up and down depending on the stimulus it receives)
Neural Representation and Encoding:
-Neural Coding: Investigates how information is represented by patterns of neural activity. Theoretical frameworks include rate coding, temporal coding, and population coding, used to explore how neurons encode and transmit information
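The rate-coding idea can be sketched by generating a Poisson-like spike train at a fixed rate and recovering that rate from the spike count (a minimal numpy sketch; the 20 Hz rate, bin width, and duration are illustrative choices, not values from the notes):

```python
import numpy as np

# Rate-coding sketch: a neuron firing at ~20 Hz, approximated as a
# Bernoulli process in 1 ms bins (a standard discrete-time Poisson approximation).
rng = np.random.default_rng(1)
rate_hz, dt, t_max = 20.0, 0.001, 10.0        # target rate (Hz), bin width (s), duration (s)
n_bins = int(t_max / dt)
spikes = rng.random(n_bins) < rate_hz * dt    # True in bins where a spike occurred

# Decoding under a rate code: the spike count divided by the
# observation window estimates the underlying firing rate.
estimated_rate = spikes.sum() / t_max
```

The longer the observation window, the tighter the estimate — one reason temporal codes are attractive when decisions must be made quickly.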
Synaptic Plasticity and Learning:
-Hebbian Plasticity: Synaptic connections are strengthened when the pre-synaptic neuron is repeatedly active at the same time as the post-synaptic neuron
-Spike-timing Dependent Plasticity: Focuses on the precise timing of spikes in pre- and post-synaptic neurons as a key factor in modifying synaptic strength
Neural Network Dynamics:
-Dynamical systems theory: Treats neural systems as dynamic entities, exploring how patterns of neural activity change over time. This is the main mathematical framework we use to understand such systems
-Hodgkin-Huxley Model: Describes the biophysical properties of excitable membranes, providing a detailed understanding of action potentials
-Integrate-and-fire models: Simplified models that capture essential features of neuron behaviour
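An integrate-and-fire neuron of the kind mentioned above can be simulated in a few lines (a minimal sketch; the time constant, threshold, and input current are arbitrary illustrative values, not values from the notes):

```python
import numpy as np

def simulate_lif(i_ext=1.5, dt=0.1, t_max=100.0,
                 tau=10.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire: tau * dV/dt = -(V - v_rest) + i_ext.
    Forward-Euler integration; when V crosses threshold, a spike is
    recorded and V is reset."""
    v = v_rest
    spike_times = []
    for step in range(int(t_max / dt)):
        v += (dt / tau) * (-(v - v_rest) + i_ext)
        if v >= v_thresh:
            spike_times.append(step * dt)
            v = v_reset
    return spike_times

# Constant supra-threshold input produces regular spiking.
spike_times = simulate_lif()
```

The whole biophysics of the Hodgkin-Huxley membrane is collapsed into one leaky variable plus a reset rule — the "essential features" trade-off the notes describe.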
How things are connected
Connectivity and Network Architecture:
-Small-world Networks and Scale-Free Networks: Describe the connectivity patterns in neural networks. These concepts help understand how local and global connectivity contribute to efficient information processing
-Hub Nodes: Focuses on the importance of specific highly connected nodes (hubs) in neural networks and their role in information integration
-Structural Connectivity: Focuses on studying physical or structural connections between neurons or brain areas
-Effective Connectivity: Studies communication pathways in the brain (information flow)
-Functional Connectivity: Studies correlations in brain activity or coactivations
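Functional connectivity, the last of these, is essentially a correlation matrix over recorded activity (a sketch with simulated data; the four "regions" and the shared-drive construction are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 1000

# Simulated activity for four "brain regions": regions 0 and 1 receive a
# common drive (so they co-activate); regions 2 and 3 are independent noise.
drive = rng.normal(size=n_samples)
activity = np.column_stack([
    drive + 0.3 * rng.normal(size=n_samples),   # region 0
    drive + 0.3 * rng.normal(size=n_samples),   # region 1
    rng.normal(size=n_samples),                 # region 2
    rng.normal(size=n_samples),                 # region 3
])

# Functional connectivity: pairwise correlations of the activity time courses.
fc = np.corrcoef(activity.T)
```

Here `fc[0, 1]` is close to 1 while `fc[2, 3]` is near 0 — correlation reveals the co-activation, but says nothing about the anatomical wiring (structural connectivity) or the direction of influence (effective connectivity).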
How things talk with each other.
Information Processing:
-Information Theory: Provides tools for quantifying and understanding the flow of information within neural systems. How you study information processing in neural networks
-Network inference: Uses information-theoretic measures to estimate causal relations or information transfer in neural networks
-Efficient coding: Information Theory provides bounds and constraints on how messages can be transmitted through a system/network, especially in noisy conditions.
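A basic quantity behind all three points above is the entropy of a response distribution (a minimal sketch; the example distributions are illustrative, not from the notes):

```python
import numpy as np

def entropy_bits(p):
    """Shannon entropy H(p) = -sum_i p_i * log2(p_i), in bits.
    Zero-probability outcomes contribute nothing and are dropped."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

# A neuron whose 8 response levels are equally likely carries 3 bits per
# response; a strongly biased response distribution carries much less.
h_uniform = entropy_bits([1 / 8] * 8)
h_biased = entropy_bits([0.9, 0.1])
```

Entropy upper-bounds how much a response could tell you about a stimulus; noise then reduces the usable part (the mutual information), which is where efficient-coding bounds come from.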
Understanding Data
Computational Models and Algorithms
-Neural Networks: Includes various types of artificial neural networks which are inspired by the organization and function of biological neural networks
-Machine learning: Utilizes algorithms and techniques from machine learning for modeling and understanding neural processes
Dimensionality Reduction
-Principal Component Analysis: Transforms the data into a new coordinate system where the first few principal components capture the maximum variance in the data
-t-Distributed Stochastic Neighbor Embedding: A non-linear dimensionality reduction technique that is effective for visualizing high-dimensional data in lower-dimensional spaces
-Uniform Manifold Approximation and Projection: Non-linear dimensionality reduction that preserves both local and global structure in the data.
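PCA, the first of these techniques, reduces to an SVD of mean-centred data (a sketch; the synthetic low-dimensional data set is invented for illustration):

```python
import numpy as np

def pca(X, n_components=2):
    """PCA via SVD of the mean-centred data matrix.
    Returns the projected data (scores) and the fraction of total
    variance explained by each retained component."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    var_ratio = S ** 2 / np.sum(S ** 2)         # singular values come sorted
    return Xc @ Vt[:n_components].T, var_ratio[:n_components]

# Synthetic "neural" data: 500 samples in 3D that mostly vary along one
# latent direction, plus a little isotropic noise.
rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 1))
X = latent @ np.array([[2.0, 1.0, 0.0]]) + 0.1 * rng.normal(size=(500, 3))

scores, var_ratio = pca(X, n_components=2)
```

Because the variance ratios are sorted, the first component captures almost all the structure here — the sense in which PCA gives a compact linear summary, which t-SNE and UMAP extend non-linearly.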
In 1952 Hodgkin and Huxley quantified and measured an action potential experimentally, then built a mathematical model to understand the different variables they were measuring and how the action potential was behaving. They proposed the model, then fit experimental data to it. They were able to reproduce experimental results, extrapolate, and make predictions using this equation.
Izhikevich took the HH model and simplified it. The simplified model is still enough to describe what the neuron is doing and to reproduce its behaviour; you can use it to very closely reproduce the firing pattern of cortical neurons.
They did this by using patch clamp to simultaneously measure AMPA and GABA currents in real time, precisely recording the inputs of cells while measuring the output of the membrane potential. These recorded inputs were then fed into the model.
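The simplified model is just two coupled equations, which is why it is so cheap to simulate and modify (a sketch using the published regular-spiking parameter set; the input current, time step, and duration are illustrative choices):

```python
import numpy as np

def izhikevich(a=0.02, b=0.2, c=-65.0, d=8.0, i_ext=10.0,
               dt=0.25, t_max=500.0):
    """Izhikevich model: dv/dt = 0.04 v^2 + 5 v + 140 - u + I,
    du/dt = a (b v - u); when v reaches 30 mV, v <- c and u <- u + d.
    The default (a, b, c, d) is the regular-spiking cortical parameter set."""
    v, u = c, b * c
    spike_times = []
    for step in range(int(t_max / dt)):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + i_ext)
        u += dt * a * (b * v - u)
        if v >= 30.0:                  # spike cutoff reached
            spike_times.append(step * dt)
            v, u = c, u + d            # reset membrane, bump recovery variable
    return spike_times

spike_times = izhikevich()
```

Changing one of the four parameters switches the firing pattern (bursting, fast spiking, etc.) — far easier than re-tuning the many conductances of the full HH equations.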
When modifying equations, it is easier to work with the simpler equation: there are fewer things to change, so it takes less time to change them one by one. You can model systems at different levels of detail; the level you choose depends on the question you want to answer.
First level of understanding (base of triangle)
Descriptive "what"
-compact summary of data
-what are we observing/measuring?
-describing what we get in the data
-how do we summarize the data in our project?
Second level of understanding
Mechanistic "how"
-show how a circuit performs a complex function
-how is the model or system behaving?
-how is the system performing a complex function?
-need to extrapolate beyond what the data show: fit the data to some model so you can interpolate what you observed and extrapolate further
Final level of understanding (top of triangle)
Interpretative "why"
Explain why the brain does something (optimality, energy efficiency, etc)
-why is this part of the brain/neuron etc, doing something?
-what is the reason?
This is hard to answer, and cannot always be answered
Knowledge synthesis
Identify hidden assumptions, hypotheses, unknowns
Mechanistic insights of how things happen
Retrieve latent information--find/see things you couldn't see yourself
Testbench for medical interventions--explore beyond what you can do in the animal model
Guidance in designing useful experiments - quantitative predictions
Inspire new technologies/applications
Usually determined by the question, hypothesis, and/or model goals. It's best to keep it simple, but ensure it gives you the information needed.
More abstract = more generalizable to other things
More realistic = more specific
Models allow for understanding and control
U: Insights not directly accessible by experiments or data (extrapolation, inference)
C: Interventions (causal manipulation), experimental, clinical, etc.