Case Study 3 — Resource-explicit Monod growth and death
# Our user-defined plotting functions live in the utils folder; the include below pulls them in.
include(joinpath(@__DIR__, "..", "..", "..", "utils", "plot_utils.jl"))
plot_posterior_states_stacked (generic function with 1 method)
Welcome to the "how to" of Markov chain Monte Carlo using Turing, a Julia package that lets you fit complex models with ease.
Case Study 3: Monod Growth Model with Explicit Resources
We are going to establish a Monod growth model in this case study; the ODE is defined below. The Monod growth model results in a stiff ODE system, so we have to employ a couple of tricks to get this model to work efficiently. The workflow breaks into 5 general parts.
1: Spawn processes
2: Establish ODE system
3: Create Model
4: Load processes' dependencies
5: Run
First is the introduction of distributed processing, which allows chains to run in parallel. This portion is simple: just establish some processes as follows. The number of processes you spawn needs to match the number of chains you plan to run. Your main process is process 1; when you spawn 4 more you get additional processes 2, 3, 4, and 5. These sit idle and wait for the sample() call in step 5 to compute one chain per process in parallel. Note that functions and libraries need the @everywhere tag so they are defined on all the spawned processes, not just process 1 (main).
using Distributed
addprocs(4) # 4 processes 4 chains
4-element Vector{Int64}:
2
3
4
5
Second, we will set up the ODE system that will adapt to the data. To do this, you write the standard in-place signature (du, u, p, t) that DifferentialEquations.jl expects.
mu_max: Maximum Specific Growth Rate
Ks: Half-Saturation Constant
Qn: Nutrient Consumption per Cell
delta: Death Rate
P_m3: Unit Conversion
$$ \begin{aligned} \mu(N) &= \mu_{\text{max}} \frac{N}{K_s + N} \\ \frac{dN}{dt} &= - Q_n \, \mu(N) \, P_{\text{m}^3} \\ \frac{dP}{dt} &= \mu(N) \, P - \delta \, P \\ \frac{dD}{dt} &= \delta \, P \\ \end{aligned} $$
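As a quick sanity check on the Monod term before wiring it into the ODE (a throwaway Base-Julia sketch with illustrative values, not fitted ones): at N = Ks the growth rate is exactly half of mu_max, and it saturates toward mu_max when nutrients are plentiful.

```julia
# Standalone check of the Monod growth term (illustrative values only).
mu(N, mum, Ks) = mum * N / (Ks + N)

mum, Ks = 0.5, 0.1
@assert mu(Ks, mum, Ks) ≈ mum / 2          # half-saturation: mu(Ks) = mu_max / 2
@assert mu(100 * Ks, mum, Ks) > 0.99 * mum # nutrient-rich limit approaches mu_max
```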
@everywhere function ode(du, u, p, t)
N, P, D = u
mum, Ks, Qn, delta = p
P_m3 = P * 1e6 # convert from cells/mL to cells/m^3
mu = mum * N / (Ks + N)
du[1] = -Qn * mu * P_m3
du[2] = mu * P - delta * P
du[3] = delta * P
return nothing
end
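Before handing the right-hand side to Turing, it can be worth a one-off solve to confirm it integrates. This is a hypothetical smoke test, not part of the case study: the initial values and parameters below are made-up magnitudes, and it requires DifferentialEquations (loaded in step four).

```julia
using DifferentialEquations

u0_test = [1000.0, 2.0e5, 3.0e4]      # illustrative N, P, D starting values
p_test  = [0.5, 0.1, 3.0e-10, 0.05]   # mum, Ks, Qn, delta (made-up magnitudes)
prob_test = ODEProblem(ode, u0_test, (0.0, 10.0), p_test)
sol_test  = solve(prob_test, Rodas5())
sol_test(5.0)                          # interpolated state at t = 5
```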
Third, the model. We declare the function to be a probabilistic model named fit_ode using the @model tag. The inputs are as follows.
logP_obs: Total cell counts
logD_obs: Dead cell counts
times: The times at which the ODE system is evaluated
prob: The prebuilt ODEProblem
Next, priors. You have all kinds of options: uniform, normal, lognormal, half-normal. It all depends on what you need. For our parameters we choose uniform priors. These uniform priors say, "we believe the proper value for the model is equally likely within this range and WILL NOT fall outside of it." For our initial conditions we choose a lognormal distribution. This says, "we believe our value is around some x, given a normal-like, strictly positive distribution." For our sigma terms (error), we choose a half-normal distribution. This is a common choice for an error or standard deviation, since neither can be negative and you can tune how quickly it tails off. Below are the priors visualized.

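If you want to probe these same priors numerically rather than visually, a throwaway sketch (requires the Distributions package) is to draw a few samples from each:

```julia
using Distributions

rand(Uniform(0.4, 0.7), 3)               # mu_max candidates
rand(LogNormal(12.2175, 0.1), 3)         # P0 candidates (parameters are on the log scale)
rand(truncated(Normal(0, 1), 0, Inf), 3) # half-normal sigma candidates
```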
Next is solving the ODE. This is a two-step process.
1: remake your problem with newly sampled elements.
The inputs are as follows:
prob: Premade ODE system.
u0: State priors.
p: Parameter priors.
tspan: first and last of your observed times.
2: solve the system at specific time points.
The inputs are as follows:
pr: Remade system.
Rodas5(): An ODE solver designed for stiff systems.
saveat: Observed times.
abstol / reltol: Absolute tolerance / relative tolerance. Default values chosen.
sensealg: This is for cases where you want to use a specific algorithm to compute gradients. This is a standard choice from SciMLSensitivity. There are many at your disposal depending on your criteria.
note: See how, when you pass values to this Julia function, there are both , and ;? In Julia this marks the difference between positional arguments (before the ;) and keyword arguments (after it).
log.(Array(sol)[x, :] .+ 1e-9): sol is the solver output. We force it into an array and grab a row using the [x, :] indexing. Remember to be intentional with your indexing. Finally, we log scale the prediction.
log_obs ~ arraydist(Normal.(log_pred, sigma)): This is the actual likelihood calculation. Ensure the variable preceding the ~ here is the same as the one passed into fit_ode.
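The , / ; distinction noted above can be illustrated with a toy function (not from this case study):

```julia
# `a` and `b` are positional; `scale` is a keyword argument after the `;`.
scaled_sum(a, b; scale = 1) = scale * (a + b)

scaled_sum(2, 3)             # -> 5
scaled_sum(2, 3; scale = 10) # -> 50
```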
@everywhere using Turing, Distributions, SciMLSensitivity
@everywhere @model function fit_ode(logP_obs, logD_obs, times, prob)
mum ~ Uniform(0.4, 0.7)
Ks ~ Uniform(0.05, 0.2)
Qn ~ Uniform(1e-10, 7e-10)
delta ~ Uniform(0.01, 0.09)
# N0 isn't a 'prior' distribution. It is deterministic. Meaning, N0 will be a
# variable tracked by the model, but at each iteration it exists as a function
# of Qn.
N0 := 1000 + ((500 / 1.8e-10) * (Qn - 3.2e-10))
P0 ~ LogNormal(12.2175, 0.1)
D0 ~ LogNormal(10.2804, 0.1)
sigma_live ~ truncated(Normal(0, 1), 0, Inf)
sigma_dead ~ truncated(Normal(0, 1), 0, Inf)
pr = remake(prob,
u0 = [N0, P0, D0],
p = [mum, Ks, Qn, delta],
tspan = (times[1], times[end]))
sol = solve(pr, Rodas5();
saveat = times,
abstol = 1e-6, reltol = 1e-6,
sensealg = InterpolatingAdjoint(autojacvec = ZygoteVJP()))
S = Array(sol)
logP_pred = log.((S[2, :] + S[3, :]) .+ 1e-9) # total. 2 is live, 3 is dead!
logD_pred = log.(S[3, :] .+ 1e-9)
logP_obs ~ arraydist(Normal.(logP_pred, sigma_live))
logD_obs ~ arraydist(Normal.(logD_pred, sigma_dead))
end
Fourth, dependencies. You want to establish all your dependencies at once within an @everywhere begin / end block. If you don't do this, you will have to type @everywhere in front of every single variable. You need to call this only once; multiple calls in this fashion (at least while working in Jupyter notebooks) during personal testing resulted in egregious slowdowns during the distributed computation phase.
We'll walk through how the priming variables were chosen.
Qn_low: The bottom end of the uniform prior established for the variable.
N0_init: The variable in the model is a deterministic value based on Qn. The same math is applied to give the priming value.
The rest of u0: First values of the series of observed data.
p: Bottom ends of their respective priors.
tspan: First and last observed times.
prob: Primed ODE system.
The final step is to log-transform the data and pass it to the model.
@everywhere using CSV, DataFrames, DifferentialEquations
@everywhere begin
cells = CSV.read("../../../case_study_2/python/data/total_cells.csv", DataFrame)
death = CSV.read("../../../case_study_2/python/data/death_percentage.csv", DataFrame)
cells_times = cells[end-14:end, :1]
cells_obs = cells[end-14:end, :2] * 1e6
death_times = death[end-14:end, :1]
death_obs = death[end-14:end, :2] .* (cells_obs ./ 100)
Qn_low = 1e-10
N0_init = 1000 + ((500 / 1.8e-10) * (Qn_low - 3.2e-10))
u0 = [N0_init, cells_obs[1], death_obs[1]]
p = [0.4, 0.05, 1e-10, 0.01]
tspan = (cells_times[1], cells_times[end])
prob = ODEProblem(ode, u0, tspan, p)
log_cells_obs = log.(cells_obs .+ 1e-9)
log_death_obs = log.(death_obs .+ 1e-9)
model = fit_ode(log_cells_obs, log_death_obs, cells_times, prob)
end
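With model defined, one optional sanity check before the expensive NUTS run is a quick prior sample. This is a sketch, not part of the notebook's workflow: Prior() is Turing's built-in prior sampler, and drawing from it still executes the model body, so solver failures inside the prior ranges surface early.

```julia
# Draw 100 parameter sets from the priors alone (no likelihood conditioning).
prior_chain = sample(model, Prior(), 100)
describe(prior_chain)
```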
Fifth and finally, this is the command that runs the sampling.
model: The model created above.
NUTS(1000, .95): Use the NUTS sampler. 1000 is the number of warm-up (adaptation) iterations run under the hood; .95 is the target acceptance rate the step-size adaptation aims for.
MCMCDistributed(): Tells the solver to run the chains in parallel on the 4 spawned processes.
1000: Number of posterior samples to generate per chain.
4: Number of chains. Needs to match the process count.
progress: Either hides or shows the progress bar. Up to you.
chain = sample(model, NUTS(1000, .95), MCMCDistributed(), 1000, 4; progress=true)
Sampling (4 processes) 0%| | ETA: N/A
From worker 2: Info: Found initial step size ϵ = 0.0359619140625
From worker 3: Info: Found initial step size ϵ = 0.2
From worker 5: Info: Found initial step size ϵ = 0.05
From worker 4: Info: Found initial step size ϵ = 0.00625
Sampling (4 processes) 100%|████████████████████████████| Time: 0:03:15
Chains MCMC chain (1000×23×4 Array{Float64, 3}):
Iterations = 1001:1:2000
Number of chains = 4
Samples per chain = 1000
Wall duration = 185.17 seconds
Compute duration = 727.25 seconds
parameters = mum, Ks, Qn, delta, N0, P0, D0, sigma_live, sigma_dead
internals = n_steps, is_accept, acceptance_rate, log_density, hamiltonian_energy, hamiltonian_energy_error, max_hamiltonian_energy_error, tree_depth, numerical_error, step_size, nom_step_size, lp, logprior, loglikelihood
Use `describe(chains)` for summary statistics and quantiles.
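Beyond describe(chains), individual parameters can be pulled out directly with the standard MCMCChains accessors (Statistics is in the standard library):

```julia
using Statistics

mean(chain[:mum])  # posterior mean of the maximum specific growth rate
std(chain[:Qn])    # posterior spread of nutrient consumption per cell
quantile(chain)    # quantile table for every parameter
```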
To interpret the output we have some user-defined postprocessing and plotting functions. They are in our GitHub repo.
Interested readers should compare results with PyMC.
priors = Dict{Symbol,Distribution}(
:mum => Uniform(0.4, 0.7),
:Ks => Uniform(.05, 0.2),
:Qn => Uniform(1e-10, 7e-10),
:delta => Uniform(0.01, 0.09),
:P0 => LogNormal(12.2175, 0.1),
:D0 => LogNormal(10.2804, 0.1),
:sigma_live => truncated(Normal(0, 1), 0, Inf),
:sigma_dead => truncated(Normal(0, 1), 0, Inf)
)
order = [:mum, :Ks, :Qn, :delta, :P0, :D0, :sigma_live, :sigma_dead]
plot_trace_with_priors(chain; priors=priors, var_order=order, per_chain_density=true) # also per-chain densities
fig = plot_posterior_states_stacked(
chain, prob, cells_times;
n_draws=150, ribbon_q=(0.05,0.95),
obs_total=cells_obs, obs_dead=death_obs
)
display(fig)
Now we interpret the results. Remember the ODE system from before.
$$ \begin{aligned} \mu(N) &= \mu_{\text{max}} \frac{N}{K_s + N} \\ \frac{dN}{dt} &= - Q_n \, \mu(N) \, P_{\text{m}^3} \\ \frac{dP}{dt} &= \mu(N) \, P - \delta \, P \\ \frac{dD}{dt} &= \delta \, P \\ \end{aligned} $$
Here we will look at the above plots in tandem. Take note of the x-axis values at the peaks of the PDFs and the variables they represent: those are the optimal values for their respective variables in this system. The left-hand column is a frequency plot, which is just to make sure the model is exploring the parameter space. Now, notice that not all the variables are 'fit'. Ks doesn't have a peak; it's flat. If you saw this alone it might seem problematic. This is why looking at posterior draws over the data is important. We can see that our model fits the data very well, so the flat Ks estimate means the model is not dependent on that value. Which is fine! This is the importance of clear post-processing.
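The flat Ks posterior has a simple mechanistic reading: whenever N is far above Ks over most of the trajectory, μ(N) ≈ μ_max and changing Ks barely moves the solution. A Base-Julia sketch with illustrative numbers (not fitted values) makes the point:

```julia
mu(N, mum, Ks) = mum * N / (Ks + N)

mum = 0.5
N = 50.0  # nutrient level well above either Ks candidate below
low, high = mu(N, mum, 0.05), mu(N, mum, 0.2)
# With N >> Ks, a fourfold change in Ks shifts mu by well under 1%,
# so the likelihood carries almost no information about Ks.
@assert abs(low - high) / low < 0.01
```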