Optimization#

class whobpyt.optimization.CostsTS(simKey=None, model=None)[source]#
__init__(simKey=None, model=None)[source]#
main_loss(simData: dict, empData: Tensor)[source]#

Calculates the Pearson correlation between simFC and empFC, then converts it into a probability and returns the negative log-likelihood.

Parameters:
simData: dict of tensor with node_size X datapoint

simulated EEG

empData: tensor with node_size X datapoint

empirical EEG
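
For orientation, the computation described above can be sketched in a few lines of PyTorch. This is an illustrative sketch only, not the library implementation; the tensor shapes, the epsilon, and the way the correlation is rescaled into a probability are assumptions:

    import torch

    def pearson_nll_sketch(sim_ts: torch.Tensor, emp_ts: torch.Tensor) -> torch.Tensor:
        """Illustrative only: correlate the flattened simulated and empirical data,
        map the correlation into (0, 1), and return its negative log."""
        stacked = torch.stack([sim_ts.flatten(), emp_ts.flatten()])
        corr = torch.corrcoef(stacked)[0, 1]   # Pearson correlation
        prob = 0.5 * (corr + 1.0)              # rescale from [-1, 1] to [0, 1]
        return -torch.log(prob + 1e-8)         # negative log-likelihood style loss

    # Hypothetical node_size X datapoint EEG arrays, as in the docstring above.
    sim_eeg = torch.randn(64, 1000)
    emp_eeg = torch.randn(64, 1000)
    loss = pearson_nll_sketch(sim_eeg, emp_eeg)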

class whobpyt.optimization.CostsFC(simKey=None, model=None)[source]#

Cost function for fitting the Functional Connectivity (FC) matrix. The cost function is the negative log-likelihood of the Pearson correlation between the simulated FC and empirical FC.

Attributes:
simKey: str

String key used to reference this cost function, i.e., “CostsFC”.

Methods

loss: function

Calculates the functional connectivity and uses it to compute the loss.

__init__(simKey=None, model=None)[source]#
Parameters:
simKey: str

type of cost function to be used

main_loss(simData: dict, empData: Tensor)[source]#

Function to calculate the cost for Functional Connectivity (FC) fitting. It first calculates the FC matrix from the BOLD time series data, makes it mean-zero, and then calculates the Pearson correlation between the simulated and empirical FC. The resulting values are rescaled to the 0-1 range, treated as a probability matrix, and used to obtain a cross-entropy-like loss via the negative log-likelihood.

Parameters:
simData: dict of torch.Tensor with node_size X datapoint

simulated BOLD

empData: torch.Tensor with node_size X datapoint

empirical BOLD

Returns:
losses_corr: torch.tensor

cost function value
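
The steps described in main_loss can be summarised with a short, illustrative sketch. It is not the library code; the helper names, the shapes, and the exact rescaling are assumptions:

    import torch

    def fc_nll_sketch(sim_bold: torch.Tensor, emp_bold: torch.Tensor) -> torch.Tensor:
        """Illustrative FC-based loss: build FC matrices, correlate them,
        and turn the correlation into a negative-log-likelihood-style cost."""
        def fc(ts):                                     # ts: node_size x datapoint
            ts = ts - ts.mean(dim=1, keepdim=True)      # make each node mean-zero
            return torch.corrcoef(ts)                   # node_size x node_size FC

        sim_fc, emp_fc = fc(sim_bold), fc(emp_bold)
        # Pearson correlation between the off-diagonal entries of the two FCs
        mask = ~torch.eye(sim_fc.shape[0], dtype=torch.bool)
        corr = torch.corrcoef(torch.stack([sim_fc[mask], emp_fc[mask]]))[0, 1]
        prob = 0.5 * (corr + 1.0)                       # rescale to the 0-1 range
        return -torch.log(prob + 1e-8)                  # cross-entropy-like loss

    sim = torch.randn(68, 400)    # hypothetical node_size x datapoint BOLD
    emp = torch.randn(68, 400)
    loss = fc_nll_sketch(sim, emp)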

class whobpyt.optimization.CostsFixedFC(simKey, device=device(type='cpu'))[source]#

Cost function for fitting the Functional Connectivity (FC) matrix. In this version, the empirical FC is given directly, instead of being given an empirical time series. The cost function is the negative log-likelihood of the Pearson correlation between the simulated FC and empirical FC.

Has GPU support.

Attributes:
simKey: str

String key used to reference this cost function, e.g., “CostsFC”.

device: torch.device

Whether to run on GPU or CPU

Methods:
loss: function

Calculates the functional connectivity and uses it to compute the loss.

__init__(simKey, device=device(type='cpu'))[source]#
Parameters:
simKey: str

The state variable or output variable from the model used for the simulated FC

device: torch.device

Whether to run on GPU or CPU

loss(simData, empData)[source]#

Function to calculate the cost for Functional Connectivity (FC) fitting. It first calculates the FC matrix from the time series data, makes it mean-zero, and then calculates the Pearson correlation between the simulated and empirical FC. The resulting values are rescaled to the 0-1 range, treated as a probability matrix, and used to obtain a cross-entropy-like loss via the negative log-likelihood.

Parameters:
simData: dict of torch.tensor with node_size X time_point

Simulated Time Series

empData: torch.tensor with node_size X node_size

Empirical Functional Connectivity

Returns:
losses_corr: torch.tensor

cost function value
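
A minimal usage sketch based on the documented signatures above; the key name "bold", the number of nodes, and the random data are assumptions:

    import torch
    from whobpyt.optimization import CostsFixedFC

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    # Empirical FC supplied directly as a node_size x node_size matrix.
    emp_fc = torch.rand(68, 68, device=device)
    emp_fc = 0.5 * (emp_fc + emp_fc.T)       # symmetrise, as an FC matrix would be

    cost = CostsFixedFC(simKey="bold", device=device)

    # Simulated time series: node_size x time_point, stored under simKey.
    sim_ts = {"bold": torch.randn(68, 400, device=device)}
    loss = cost.loss(sim_ts, emp_fc)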

class whobpyt.optimization.CostsPSD(num_regions, simKey, sampleFreqHz, targetValue=None, empiricalData=None)[source]#

WARNING: This function is no longer supported. TODO: Needs to be updated.

__init__(num_regions, simKey, sampleFreqHz, targetValue=None, empiricalData=None)[source]#
calcPSD(sampleFreqHz, minFreq=None, maxFreq=None, axMethod=2)[source]#
downSmoothPSD(sdValues, numPoints=512)[source]#
loss(simData, empData=None)[source]#
scalePSD(sdValues_dS)[source]#
class whobpyt.optimization.CostsFixedPSD(num_regions, simKey, sampleFreqHz, minFreq, maxFreq, targetValue=None, empiricalData=None, batch_size=1, rmTransient=0, device=device(type='cpu'))[source]#

Updated cost function that fits to a fixed Power Spectral Density (PSD).

The simulated PSD is generated as the square of the Fast Fourier Transform (FFT). A particular frequency range to fit on is selected. The mean is not removed, so it is recommended to choose the range so as to exclude the first point of the PSD. Removing an initial transient period before calculating the PSD is also recommended.

Designed for Fitting_Batch, where the model output has an extra dimension for batch. TODO: Generalize further to work in case without this dimension as well.

NOTE: If using batching, the batches will be averaged before calculating the error (as opposed to having an error for each time series in the batch).

Has GPU support.

Attributes:
simKey: String

Name of the state variable or modality to be used as input to the cost function.

num_regions: Int

The number of nodes in the model.

batch_size: Int

The number of elements in the batch.

rmTransient: Int

The number of initial time steps of simulation to remove as the transient. Default: 0

device: torch.device

Whether to run the objective function on CPU or GPU.

sampleFreqHz: Int

The sampling frequency of the data.

targetValue: torch.tensor

The target PSD. This is assumed to be created with the same frequency range and density as that of the PSD generated from the simulated data.

empiricalData: torch.tensor

NOT IMPLEMENTED: This is a placeholder for the case of getting an empirical timeseries as input, which would be applicable if doing a windowed fitting paradigm

__init__(num_regions, simKey, sampleFreqHz, minFreq, maxFreq, targetValue=None, empiricalData=None, batch_size=1, rmTransient=0, device=device(type='cpu'))[source]#
Parameters:
num_regions: Int

The number of nodes in the model.

simKey: String

Name of the state variable or modality to be used as input to the cost function.

sampleFreqHz: Int

The sampling frequency of the data.

minFreq: Int

The minimum frequency of the PSD to return.

maxFreq: Int

The maximum frequency of the PSD to return.

targetValue: torch.tensor

The target PSD, assumed to have the same frequency range and density as the PSD generated from the simulated data.

empiricalData: torch.tensor

NOT IMPLEMENTED: This is a placeholder for the case of getting an empirical timeseries as input, which would be applicable if doing a windowed fitting paradigm

batch_size: Int

The number of elements in the batch.

rmTransient: Int

The number of initial time steps of simulation to remove as the transient. Default: 0

device: torch.device

Whether to run the objective function on CPU or GPU.

calcPSD(signal, sampleFreqHz, minFreq=None, maxFreq=None, axMethod=2)[source]#

This method calculates the Power Spectral Density (PSD) as the square of the Fast Fourier Transform (FFT).

Tested when working with the default simulation frequency of 10,000Hz.

Parameters:
signal: dict of torch.tensor

The timeseries outputted by a model. Dimensions: [nodes, time, batch]

sampleFreqHz: Int

The sampling frequency of the data.

minFreq: Int

The minimum frequency of the PSD to return.

maxFreq: Int

The maximum frequency of the PSD to return.

axMethod: Int

Either 1 or 2 depending on the approach to calculate the PSD axis.

Returns:
sdAxis: torch.tensor

The axis values of the PSD

sdValues: torch.tensor

The PSD values [____, ____]
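
As an illustration of the approach described above (PSD as the square of the FFT, restricted to a frequency band), a standalone sketch follows. It is not the library's calcPSD; the shapes and the axis construction are assumptions:

    import torch

    def psd_sketch(signal: torch.Tensor, sample_freq_hz: float,
                   min_freq: float, max_freq: float):
        """Illustrative PSD: squared magnitude of the FFT along the time axis,
        restricted to [min_freq, max_freq]. signal: [nodes, time, batch]."""
        n_time = signal.shape[1]
        fft = torch.fft.rfft(signal, dim=1)                  # nodes x freqs x batch
        psd = fft.abs() ** 2                                 # square of the FFT
        freqs = torch.fft.rfftfreq(n_time, d=1.0 / sample_freq_hz)
        band = (freqs >= min_freq) & (freqs <= max_freq)
        return freqs[band], psd[:, band, :]

    # Hypothetical example at the default 10,000 Hz simulation frequency.
    sig = torch.randn(68, 10000, 1)                          # nodes x time x batch
    sdAxis, sdValues = psd_sketch(sig, sample_freq_hz=10000, min_freq=2, max_freq=40)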

loss(simData, empData=None)[source]#

NOTE: If using batching, the batches will be averaged before calculating the error (as opposed to having an error for each simulated time series in the batch).

Parameters:
simData: torch.tensor

Simulated Data in the form [regions, time_steps, block/batch]

empData: torch.tensor

NOT IMPLEMENTED: This is a placeholder for the case of getting an empirical timeseries as input, which would be applicable if doing a windowed fitting paradigm

Returns:
psdMSE:

The MSE of the difference between the simulated and target power spectrum within the specified range
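
A hedged usage sketch based on the documented constructor and loss signatures; the simKey value "E", the placeholder target PSD, and all shapes are assumptions:

    import torch
    from whobpyt.optimization import CostsFixedPSD

    device = torch.device("cpu")
    num_regions, batch_size = 68, 4

    # Placeholder target PSD: it must match the frequency bins of the simulated
    # PSD within [minFreq, maxFreq]; the bin count here is an assumption.
    n_freq_bins = 39
    target_psd = torch.rand(num_regions, n_freq_bins, device=device)

    cost = CostsFixedPSD(num_regions=num_regions, simKey="E", sampleFreqHz=10000,
                         minFreq=2, maxFreq=40, targetValue=target_psd,
                         batch_size=batch_size, rmTransient=1000, device=device)

    # Simulated data in the documented [regions, time_steps, block/batch] layout.
    sim = torch.randn(num_regions, 10000, batch_size, device=device)
    psd_mse = cost.loss(sim)   # batches are averaged before the MSE is computed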

class whobpyt.optimization.CostsMean(num_regions, simKey, targetValue=None, empiricalData=None, batch_size=1, device=device(type='cpu'))[source]#

Target Mean Value of a Variable

This loss function calculates the mean value of a particular variable for every node across time, and then takes the Mean Squared Error of those means with the target value.

Attributes:
num_regions: Int

The number of regions in the model being fit

simKey: String

The name of the variable for which the mean is calculated

targetValue: Tensor

The target value, either a single number or a vector

device: torch.device

Whether to run the objective function on GPU or CPU

__init__(num_regions, simKey, targetValue=None, empiricalData=None, batch_size=1, device=device(type='cpu'))[source]#
Parameters:
num_regions: Int

The number of regions in the model being fit

simKey: String

The name of the variable for which the mean is calculated

targetValue: Tensor

The target value either as single number or vector

loss(simData, empData=None)[source]#

Method to calculate the loss

Parameters:
simData: dict of Tensor[ Nodes x Time ] or [ Nodes x Time x Blocks(Batch) ]

The time series used by the loss function

Returns:
Tensor

The loss value
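
The mean-to-target computation described above can be illustrated with a short standalone sketch (not the library implementation; the shapes and the scalar target are assumptions):

    import torch

    def mean_target_mse_sketch(sim_ts: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        """Illustrative: average each node's time series over time, then take
        the MSE of those node means against the target. sim_ts: [nodes, time]."""
        node_means = sim_ts.mean(dim=1)                 # one mean value per node
        return torch.mean((node_means - target) ** 2)   # MSE against the target

    sim = torch.randn(68, 2000)          # hypothetical nodes x time series
    target_value = torch.tensor(0.15)    # hypothetical scalar target
    loss = mean_target_mse_sketch(sim, target_value)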