Simcem is a computational thermodynamics package and database with aspirations of process simulation. It currently allows the calculation of thermodynamic properties from thermodynamic models (i.e., equations of state), and can calculate the equilibrium state of model thermodynamic systems. It mainly focuses on combustion and cement chemistry, but it is planned to evolve into a general chemical engineering toolkit.

Everything is available via the main menu (click the icon in the top left to open it). The console at the top provides information on the progress of calculations (and any errors).

The combustion calculator works and most of our cement database has been made available. A cement kiln and tubular furnace simulator should be available soon.

If you are a collaborator, you can access our full data set by logging in.

This page enables filtering/searching of molecules by their elemental composition. You can also click on an element for more information on its isotopes and other data.

The icon in the top right of each element allows you to filter molecules displayed in the molecules page.

- : Don't filter by this element.
- : Only show molecules which include this element.
- : Only show molecules which exclude this element.

These are notes on the thermodynamic implementation written while (re)developing my own understanding of thermodynamics. They are not complete and are too brief to be precise, but they are provided to help others understand SimCem and implement their own Gibbs free energy minimisation, as I struggled to find all aspects of the technique explained in one convenient place.

First, the concept of a thermodynamic system is introduced. Then, the fundamental differential equation of thermodynamics is derived from the conservation of energy. Then, the second law is used to justify the definition of several "thermodynamic potentials" which are minimised at equilibrium. State variables are then discussed to justify the "best" basis for thermodynamic calculations, and to demonstrate that additional constraints are required during minimisation. The general approach to constrained minimisation using Lagrangian multipliers is then outlined to understand what thermodynamic derivatives are required for the minimisation algorithm. As there are a large number of derivatives required, the next section discusses many simplifying relationships which can collapse the required derivatives to a minimal set. Finally, the implementation of a number of thermodynamic models is outlined, first for volume-explicit models, then for pressure-explicit models.

This work would not have been possible without some of the excellent work already out in the literature. In particular, nothing was more useful to me than the excellent NASA CEA program, its database, and its highly educational report.

A thermodynamic system is the partitioning of some quantity of mass
and energy (but most importantly energy) from
its *surroundings* through an
enclosing *boundary*. The surroundings can also be treated as
an (inside-out) thermodynamic system, but the key idea is the
division of what we are interested in (the system) from the
uninteresting (the surroundings).

The boundary of the system may be physical (e.g., the walls of a vessel such as a balloon) or may be defined by some arbitrary division of space (e.g., a finite volume in a CFD simulation). If the boundary is physical, then it may or may not be included as part of the system. For example, water droplets in air have a surface tension which acts like the skin of a balloon and pulls the drop into a spherical shape. This surface has an associated energy and it is at our discretion whether to include the energy as part of the system or as part of the surroundings (or neglect it entirely as an approximation).

Both physical and unphysical boundaries may be fixed or change shape
over time. In addition, if mass can pass through the boundaries then
the system is deemed *open* (and *closed* if it
cannot).

The energy contained inside a thermodynamic system may take many
forms, but only the *observable* properties at the boundary
of the system, such as volume, mass, surface area, and pressure are
visible to us (any other effect the mass/energy has is irrelevant as
it cannot be seen). Observable properties are generally only
visible to us at the boundary of the system (e.g., a thermometer
measures its surface temperature). If the system is small enough,
then the observables are approximately constant over the system's
boundary. Larger systems can be split into smaller and
smaller sub-systems until the observables are approximately
constant; thus, here only single values of the observables are
considered.

The sum total of the energy inside a system (the *internal
energy*) is not directly observable (as most of it is inside,
away from the boundary) but the conservation of energy tells us it
must exist. It should come as no surprise then that there are
other *internal* properties, such as the entropy, which can
only be discerned indirectly (more later).

Now that the initial terminology has been outlined, a governing equation for the changes in the energy of a thermodynamic system is derived.

The power of thermodynamics arises from its ability to find simple universal relationships between observable state variables. These relationships are a direct consequence of the laws of thermodynamics.

The *first law of thermodynamics* is an observation that
energy is neither created nor destroyed but only transformed between
different forms. Every thermodynamic system may contain internally
some energy, $U_{sys.}$. The first law can then be stated as a
conservation of this *internal* energy between a system and
its surroundings:
$${\rm d}U_{sys.} = -{\rm d}U_{surr.}$$
where the ${\rm d}X$ indicates an infinitesimal change in $X$ and
that this is an *exact* differential ($U_{sys.}$ cannot
change without $U_{surr.}$ changing).

From further observation of real systems, two types of energy
transfer are identified: *heat transfer* and *work*,
\begin{align}{\rm d}U_{sys.} = \partial Q_{surr.\to sys.} - \partial W_{sys.\to surr.}\label{eq:initialebalance}\end{align}
where $\partial Q_{surr.\to {sys.}}$ is heat transferred to the
system due to temperature differences and $\partial W_{sys.\to
surr.}$ represents all forms of work carried out by the system (the
negative sign on the work term is a conventional choice). Given that
the work term represents many forms of work, why is heat transfer
singled out as a separate term? Nature appears to maximise heat
transfer over work whenever possible, and this is discussed later
when reversibility is introduced.

You should note that a $\partial$ symbol is used for the
work/heat-transfer terms to indicate *inexact differential*
relationships. A thermodynamic system may transfer arbitrarily large
amounts of heat, and perform arbitrarily large amounts of work, but
only the remainder $(\partial Q_{\to {sys.}}-\partial W_{sys.\to surr.})$
will actually cause a change in the energy $U_{sys.}$.
The internal energy is a *state* variable as it describes the
state of the system; however, work and heat transfer are not.

Physical examples of this include engines, which are thermodynamic
systems that can perform arbitrary amounts of work provided
sufficient heat/energy is supplied but return to their initial state
every cycle. An inexact differential implies there is no unique
relationship between the variables (we cannot integrate this
equation). Interestingly, inexact differentials can often be
transformed into simpler *exact* differentials through the
use of constraints. For example, if the engine is seized and no work
can be carried out ($\partial W_{sys.\to surr.}=0$), then only heat
transfer can change the energy of the system and we now have an
exact differential relationship, ${\rm d}U_{sys.}={\rm d}Q_{\to
{sys.}}$. This constraint is far too restrictive in general and
another constraint, known as reversibility, must be invoked to
generate exact differential equations we can integrate.

In the next two subsections, the concept of reversibility is introduced through consideration of cycles and is used to find exact differential descriptions of work and heat.

A thermodynamic *cycle* is a *process* applied to a
thermodynamic system which causes its state (and its state
variables) to change but eventually return to its initial state (and
values of the state variables). For example, the combustion chamber
inside an engine will compress and expand during its operation but
it returns to its starting volume after each cycle. This leads to
the following identity, where the sum/integral of the changes over a
cycle is zero, i.e., $\oint_{\rm cycle} {\rm d} V=0$, and similar
identities must also apply for every state variable.

In 1855, Clausius observed that the integral of the heat transfer
divided by the temperature is always negative when measured over a
cycle,
$$\oint_{\rm cycle} \frac{\partial Q_{\to sys.}}{T_{sys.}} \le 0.$$
This is known as the Clausius inequality. It was found that this
inequality approaches zero in the limit that the cycle is performed
slowly. This limiting result indicates that the kernel of the
integral actually contains a state variable, i.e.,
\begin{align}{\rm d}S_{sys.} = \frac{\partial Q_{\to sys.}}{T_{sys.}}\label{eq:entropydefinition}\end{align}
where $S_{sys.}$ is the state variable known as the *entropy* of the
system. Interestingly, the entropy (like the internal energy) is also not
directly observable and its existence is only revealed by this
inequality.

As the inequality is generally negative over a cycle, it indicates
that entropy always increases and must be removed from a system to
allow it to return to its initial state (except in the limit of slow
changes). This has led to the terminology of
the *irreversible* cycle, $\oint_{\rm cycle}{\rm d}S>0$, and
the idealised *reversible* cycle, $\oint_{\rm cycle}{\rm
d}S=0$, which can be returned to its starting state without removing
entropy.

Further careful reasoning which is omitted here results in the
statement of the second law of thermodynamics: *the total entropy
of an isolated system can only increase over time*. Assuming the
universe is an isolated system, our thermodynamic system and its
surroundings must always together have a positive (or zero) entropy
change.

One last note, thermodynamic processes which are not cycles may also be reversible or irreversible. For a general process to be reversible the total entropy of the system and its surroundings together must remain zero. This allows the entropy to increase or decrease in the system, but only if the surroundings have a compensating opposite change. Irreversibility is further explored later but for now our understanding is sufficient to introduce the various forms of work.

The work term $\partial W_{sys.\to surr.}$ represents all methods of transferring energy other than as heat. Reversible paths reduce the total entropy change to zero, which minimises the heat transferred and actually maximises the amount of work performed by the system for a given process. This is what distinguishes heat transfer from mechanical work.

As an illustrative example, consider the emptying of a balloon via popping it versus untying the neck and letting it go. In the first case, no work is done as the air is immediately released into the surroundings: this is the quickest path of deflating the balloon and thus it maximises entropy. If instead the neck is untied, the air jet leaving the balloon performs work and propels the balloon around the room. In this case, the air is released more slowly and more work is extracted.

All work can be expressed as a generalised driving *force*,
$\vec{F}_{sys.}$, which is *displaced* by a change in the
corresponding generalised distance, $\vec{L}_{sys.}$. The reversible
limit corresponds to infinitesimally slow/small changes of the distance
(i.e., ${\rm d}\vec{L}$), resulting in the following general expression for
the work,
$$\partial W_{sys.\to surr.} = \vec{F}_{sys.}\cdot{\rm d}\vec{L}_{sys.}.$$

For the balloon, there are three forms of work taking place. First,
as the volume of the balloon is decreased, work must be performed to
compress the volume against the pressure of the air within. The
reversible *pressure-volume* work is then as follows,
\begin{align}\partial W_{pV} = p_{sys.}\,{\rm d}V_{sys.}\label{eq:pressurevolumework}\end{align}
where the pressure $p$ is the generalised force and the volume $V$
is the generalised displacement. In addition, the balloon itself is
shrinking, releasing the tension within its elastic surface. This is
known as *surface* work,
$$\partial W_{\Sigma} = \gamma_{sys.}\,{\rm d}\Sigma_{sys.}$$
where $\gamma_{sys.}$ is the surface tension and $\Sigma_{sys.}$ is the surface
area.

As air leaves the balloon through the neck it will carry away energy
with it. This is known as *chemical* work,
\begin{align}\partial W_{chem.} = -\sum_i^{N_C}\mu_{i,sys.}\,{\rm d}N_{i,sys.}\label{eq:materialwork}\end{align}
where the chemical potential, $\mu_{i,{sys.}}$, is the energy added to
the system if one mole of the component $i$ (from one of the $N_C$
components of the system) is added to or removed from the system by
any process (e.g., flow through the boundaries or internal
reactions).

The definition of a component, $i$, in a thermodynamic system is
flexible and may be used to represent a single type of atom,
molecule, or elementary particle (i.e., electrons), or even a
mixture of molecules (such as "air"). Although ${\rm d} N_{i,{sys.}}$
represents changes in the amounts of a species $i$, for
a *closed* system (a system which cannot exchange mass with
any other system), chemical work is impossible and thus the
conservation of energy requires that the following holds true (even
if ${\rm d} N_{i,{sys.}}\neq 0$ due to internal processes such as
reactions),
$$\sum_i^{N_C}\mu_{i,sys.}\,{\rm d}N_{i,sys.} = 0.$$
Closed systems are typical during process/unit-operation
calculations; however, as these closed systems are often composed of
multiple open sub-systems (i.e., multiple interacting phases within
a closed vessel), the chemical work term is always required.

In summary, under the constraint of a reversible system, the expression for entropy (Eq. \eqref{eq:entropydefinition}) and any relevant work terms (Eqs. \eqref{eq:pressurevolumework}-\eqref{eq:materialwork}) can be substituted into the energy balance of Eq. \eqref{eq:initialebalance} to yield the fundamental thermodynamic equation,
\begin{align}{\rm d}U = T\,{\rm d}S - p\,{\rm d}V + \sum_i^{N_C}\mu_{i}\,{\rm d}N_{i}\label{eq:fundamentalThermoRelation}\end{align}
where the subscripts have been dropped from every term for convenience. Other work terms, such as the surface work, can be added to this equation depending on the system studied; however, the pressure-volume and chemical work terms are the most important from a process engineering perspective.

It is interesting to note that each term of the fundamental
thermodynamic equation consists of a so-called *conjugate
pairing* of an *intensive* property such as $T$, $p$, or
$\mu_i$ and a corresponding conjugate *extensive* property
$S$, $V$, or $\left\{N_i\right\}$ respectively. Provided all the
relevant work terms have been included, it has been observed that a
thermodynamic state is fully specified if *at least* one
variable is specified from each of these conjugate pairs. Fewer
variables may be required if there are other constraints such as
multi-phase equilibrium (see
the phase
rule). Computationally, it is convenient to always use a full
set of variables and allow the free-energy minimization (discussed
later) to introduce the required constraints between the independent
variables to satisfy the phase rule.

In mathematics, we are free to choose as our set of independent
variables any of the variables which appear in the equation;
however, the *natural variables* are whichever result in an
exact differential relationship. For example, the internal energy
has the natural variables $U(S,\,V,\,\left\{N_i\right\})$. This
choice is only convenient for expressions for the internal energy.
As a further example, the natural set of variables for the entropy
is $S(U,\,V,\,\left\{N_i\right\})$. Clearly there may be several
sets of "natural" variables, which all depend on the thermodynamic
function under study. The natural variables encountered so far are
not particularly convenient ($T$ and $p$ or $V$ are the most
convenient for process engineers) and so two thermodynamic
potentials $G$ and $A$ are introduced with these natural variables;
however, the internal energy is particularly interesting as its
natural variables are extensive, and this allows a particularly
simple solution.

If a total derivative of the internal energy is performed in its natural variables using the implicit function theorem, the following expression arises,
$${\rm d}U = \left(\frac{\partial U}{\partial S}\right)_{V,\left\{N_i\right\}}{\rm d}S + \left(\frac{\partial U}{\partial V}\right)_{S,\left\{N_i\right\}}{\rm d}V + \sum_i^{N_C}\left(\frac{\partial U}{\partial N_i}\right)_{S,V,\left\{N_{j\neq i}\right\}}{\rm d}N_i$$
where, for clarity, the variables which are held constant while a partial derivative is taken are written as subscripts on the parenthesis surrounding the derivative (this is because in thermodynamics we often change the set of independent and dependent variables).

The usefulness of this particular set of independent variables is obvious when comparing the total derivative to the fundamental thermodynamic relation of Eq.\eqref{eq:fundamentalThermoRelation}, as it yields the following convenient definitions of the partial derivatives,
$$\left(\frac{\partial U}{\partial S}\right)_{V,\left\{N_i\right\}} = T,\qquad\left(\frac{\partial U}{\partial V}\right)_{S,\left\{N_i\right\}} = -p,\qquad\left(\frac{\partial U}{\partial N_i}\right)_{S,V,\left\{N_{j\neq i}\right\}} = \mu_i.$$
This is the first indication that thermodynamics is a powerful tool as it has already found a differential relationship between the internal energy and the intensive observable properties. As the variables $S$, $V$, and $\left\{N_i\right\}$ are all extensive, Euler's solution for homogeneous functions applies.

Consider some function (in our case an extensive thermodynamic
property), $Z$. Assume that the property is only a function of the
extensive quantities $\left\{A_i\right\}$ (these may be the molar
amounts $\left\{N_i\right\}$, volume $V$, and entropy $S$, which
are all extensive). If all of the extensive properties
$\left\{A_i\right\}$ are scaled equally by some factor, $k$, the
extensive thermodynamic property must also scale. Thus,
$$Z\left(\left\{k\,A_i\right\}\right) = k\,Z\left(\left\{A_i\right\}\right)$$
where $k$ is some arbitrary scaling factor. $Z$ is therefore a
homogeneous
function of first order in $\left\{A_i\right\}$. Taking the
derivative of both sides with respect to $k$ (chain rule on the
LHS):
$$\vec{A}\cdot\frac{\partial Z}{\partial\left(k\,\vec{A}\right)} = Z\left(\left\{A_i\right\}\right).$$
At this point it is selected that $k=1$ and, expanding the dot
product as a sum,
$$Z = \sum_i A_i\left(\frac{\partial Z}{\partial A_i}\right).$$
This allows us to solve for $Z$ if the partial derivatives in
terms of each of its extensive parameters are known.
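Euler's theorem is easy to check numerically on any first-order homogeneous function; the function below is an arbitrary illustration (not a thermodynamic potential), and the gradient is evaluated by central differences:

```python
import numpy as np

def Z(A):
    # Arbitrary first-order homogeneous function: Z(k*A) = k*Z(A)
    return np.sum(A) * np.log(A[0] / np.sum(A)) + 2.0 * A[1]

A = np.array([1.5, 2.5])
assert np.isclose(Z(3.0 * A), 3.0 * Z(A))  # homogeneity of degree one

# Central-difference gradient dZ/dA_i
eps = 1e-6
grad = np.array([(Z(A + eps * e) - Z(A - eps * e)) / (2 * eps)
                 for e in np.eye(len(A))])

# Euler's theorem: Z = sum_i A_i * (dZ/dA_i)
assert np.isclose(Z(A), A @ grad)
```

The same identity is what lets the internal energy be written as the sum of its conjugate pairs below.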

This allows the equation to be "solved" immediately as the internal
energy is a first-order homogeneous function of its extensive
natural variables,
$$U = T\,S - p\,V + \sum_i^{N_C}\mu_i\,N_i.$$
This is the solution for the internal energy, which is the first
*thermodynamic potential*. It is this solution which
indicates that the product of a conjugate pair is
a *state
function* (independent of the thermodynamic path), but this
is not discussed in detail here.

As stated previously, there are other thermodynamic potentials which have more convenient natural variables. These can be derived by considering the consequences of the second law of thermodynamics.

The second law of thermodynamics has already been introduced via the
Clausius inequality and is formally written as follows,
$${\rm d}S_{total} = {\rm d}S_{sys.} + {\rm d}S_{surr.} \ge 0$$
i.e., the total entropy of the universe (our system and its
surroundings) must always increase or remain constant. This
statement implies that the only "stationary" thermodynamic state is
where the entropy has reached its maximum, henceforth known as
the *equilibrium* state. The equilibrium state is of
particular interest as all thermodynamic systems approach it and, if
left undisturbed, remain there indefinitely.

It is often the basis of process calculations that a particular thermodynamic system has approached equilibrium, thus determining the equilibrium state (via a maximization of the total entropy) is our primary goal. Starting from some initial non-equilibrium state, some unconstrained internal parameters (e.g., composition, reaction progression) are varied such that the total entropy is maximized.

Although the universe's entropy must be maximised at equilibrium, our interest is in a smaller thermodynamic system contained within it. The total entropy is the sum of the entropy of this system and the entropy of the rest of the universe, i.e.,
$$S_{total} = S_{sys.} + S_{surr.}.$$
It is clear that both $S_{sys.}$ and $S_{surr.}$ may increase or decrease, provided the overall change results in an increase of $S_{total}$.

It is henceforth assumed that the surroundings are at equilibrium,
they remain at equilibrium, and any interaction with the
surroundings is reversible. The author considers these the largest
assumptions they have ever made, both physically and in terms of
approximation; however, it is equivalent to a "worst case" estimate
for the generation of entropy. In this case, there can be no
"external" process driving changes within the thermodynamic
system, i.e., the rest of the universe cannot supply extra entropy
spontaneously to allow normally unfeasible processes within the
system to occur. The only possible mechanism by which
the universe's entropy may change is via heat transfer from the
system,
$${\rm d}S_{total} = {\rm d}S_{sys.} + {\rm d}S_{surr.} = {\rm d}S_{sys.} - \frac{\partial Q_{surr.\to sys.}}{T_{surr.}}.$$
This makes it clear that the entropy change of the system must be
balanced against the entropy it is generating in the surroundings
through heat transfer (the surroundings are also so large that the other
effects of the heat transfer are negligible). Inserting the
fundamental thermodynamic equation
(Eq.\eqref{eq:fundamentalThermoRelation}),
$${\rm d}S_{total} = {\rm d}S_{sys.} - \frac{{\rm d}U_{sys.} + p\,{\rm d}V_{sys.} - \sum_i^{N_C}\mu_{i}\,{\rm d}N_{i,sys.}}{T_{surr.}}.$$
To simplify the remainder of this section, the thermodynamic system
is now assumed to be *closed*, which allows the elimination of
the chemical potential term,
\begin{align}-T_{surr.}\left({\rm d}S_{total}\right)_{C} = {\rm d}U_{sys.} + p\,{\rm d}V_{sys.} - T_{surr.}\,{\rm d}S_{sys.}\label{eq:totalentropy}\end{align}
The subscript $C$ on the parenthesis is used to indicate that the
system is closed. This equation makes it clear that, in closed
systems interacting reversibly with surroundings which are at
local equilibrium, the overall equilibrium is not solely linked to
the entropy of the system itself but is the *minimisation* of
the RHS of Eq.\eqref{eq:totalentropy} (due to the negative sign on
the entropy change). The RHS often corresponds to
some *thermodynamic potential* which arises under different
constraints, and these are now derived in the sections below.

Consider a system which is completely isolated from its surroundings. It cannot exchange heat $\left({\partial}Q=0\right)$ or work $\left({\partial}W=0\right)$. As we are in the zero-work reversible limit, the system must be at constant volume $\left({\rm d}V=0\right)$ and the molar amounts $\left\{N_i\right\}$ may individually vary, but only such that $\sum_i^{N_C} \mu_i\,{\rm d} N_{i,sys.}=0$. Examining the original balance in Eq.\eqref{eq:initialebalance}, it is clear that the isolated constraint is also equivalent to ${\rm d}V=0$ and ${\rm d}U=0$. Examining the total entropy under these constraints,
$$-T_{surr.}\left({\rm d}S_{total}\right)_{U,V,C} = -T_{surr.}\,{\rm d}S_{sys.}$$
where the subscripts on the brackets indicate that $U$ and $V$ are held constant in the closed system. It is clear from this expression (and our own intuition) that, for an isolated system with surroundings already at equilibrium, all changes in the total entropy must arise from changes in the system entropy.

To put this in terms of minimising a thermodynamic potential,
we define the negative of the entropy (sometimes
called *negentropy*),
$$f_{U,V,C} = -S_{sys.}.$$
To determine the equilibrium state the potential, $f_{U,V,C}$,
must be minimised, and this action is equivalent to maximising
the total entropy.

Isolated systems are interesting in certain cases; however, in process engineering, we often have a closed system at a fixed temperature $T$ and pressure $p$. Under these conditions the system is free to transfer heat and change its volume. For the interaction of the system with its surroundings to be reversible, the surroundings must have the same temperature $T_{sys.}=T_{surr.}$ and pressure $p_{sys.}=p_{surr.}$.

If we now define a new thermodynamic potential called the Gibbs free energy,
$$G = U + p\,V - T\,S$$
and look at changes in $G$ while holding $T$ and $p$ constant, we have,
$$\left({\rm d}G\right)_{T,p} = {\rm d}U + p\,{\rm d}V - T\,{\rm d}S.$$
Comparing this expression against Eq.\eqref{eq:totalentropy} it is immediately apparent that,
$$\left({\rm d}G\right)_{T,p} = -T\left({\rm d}S_{total}\right)_{T,p,C}.$$
Thus, maximisation of $S_{total}$ is equivalent to minimisation of $G$ when $T$ and $p$ are held constant. Writing this in the same notation as before we have,
$$f_{T,p,C} = G_{sys.}.$$

Again, reversibility requires that $T_{surr.}=T_{sys.}$. No pressure-volume work occurs as the volume is fixed, and so the pressure of the surroundings is actually irrelevant. Material work is also ignored as the system is closed. Under these constraints, Eq.\eqref{eq:totalentropy} becomes,
$$-T\left({\rm d}S_{total}\right)_{T,V,C} = {\rm d}U - T\,{\rm d}S = {\rm d}\left(U - T\,S\right).$$
In the final equality, another thermodynamic potential is introduced: the Helmholtz free energy,
$$A = U - T\,S.$$
It is now clear that under these constraints the maximum total entropy is reached at the minimum Helmholtz free energy, i.e.,
$$f_{T,V,C} = A_{sys.}.$$

The enthalpy is defined as follows,
$$H = U + p\,V.$$
For constant pressure and enthalpy we have,
$$\left({\rm d}H\right)_{p} = {\rm d}U + p\,{\rm d}V = 0.$$
Examining the total entropy under these constraints (Eq.\eqref{eq:totalentropy}),
$$-T_{surr.}\left({\rm d}S_{total}\right)_{p,H,C} = -T_{surr.}\,{\rm d}S_{sys.}.$$
Even though the surroundings temperature is unknown, it is merely a scaling factor and again the maximisation of the total entropy is equivalent to the minimisation of the system's negentropy,
$$f_{p,H,C} = -S_{sys.}.$$

Again, starting with the total entropy of a closed system, Eq.\eqref{eq:totalentropy}, we note that at constant pressure,
$${\rm d}U_{sys.} + p\,{\rm d}V_{sys.} = {\rm d}H_{sys.}.$$
Thus, at constant entropy we have
$$-T_{surr.}\left({\rm d}S_{total}\right)_{p,S,C} = {\rm d}H_{sys.}.$$
The surroundings can be at some arbitrary constant temperature (they are at equilibrium and very large, thus unaffected by heat transfer), thus $T_{surr.}$ is simply a scaling factor. The thermodynamic potential to minimise is then $f_{p,S,C} = H_{sys.}$.

Finally, the simplest example: at constant entropy and volume, Eq.\eqref{eq:totalentropy} reduces to
$$-T_{surr.}\left({\rm d}S_{total}\right)_{S,V,C} = {\rm d}U_{sys.}.$$
Thus, the thermodynamic potential to minimise for maximum total entropy is $f_{S,V,C} = U_{sys.}$ and the surroundings temperature is unimportant.

In summary, there are a number of relevant thermodynamic potentials for a closed system. These are defined below,
$$H = U + p\,V,\qquad A = U - T\,S,\qquad G = U + p\,V - T\,S = H - T\,S.$$

For each set of constrained thermodynamic states in closed systems, a particular thermodynamic potential is minimised at equilibrium. These are summarised in the table below:

| Constants | Function to minimise, $f$ |
|---|---|
| $p,\,S$ | $H$ |
| $p,\,T$ | $G$ |
| $p,\,H$ | $-S$ |
| $V,\,S$ | $U$ |
| $V,\,T$ | $A$ |
| $V,\,U$ | $-S$ |
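For quick lookup in code, the table above can be encoded directly; a trivial sketch (names are illustrative, not SimCem's API):

```python
# Map of held-constant variable pairs to the thermodynamic potential
# minimised at equilibrium in a closed system (mirrors the table above).
POTENTIAL = {
    frozenset(("p", "S")): "H",
    frozenset(("p", "T")): "G",
    frozenset(("p", "H")): "-S",
    frozenset(("V", "S")): "U",
    frozenset(("V", "T")): "A",
    frozenset(("V", "U")): "-S",
}

# Constant temperature and pressure selects the Gibbs free energy:
assert POTENTIAL[frozenset(("T", "p"))] == "G"
```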

The variables held constant correspond to the "natural" variables of each potential. Expressing the change in each thermodynamic potential in terms of these natural variables yields the following differential equations,
\begin{align}
{\rm d}U &= T\,{\rm d}S - p\,{\rm d}V + \sum_i^{N_C}\mu_i\,{\rm d}N_i\\
{\rm d}H &= T\,{\rm d}S + V\,{\rm d}p + \sum_i^{N_C}\mu_i\,{\rm d}N_i\\
{\rm d}A &= -S\,{\rm d}T - p\,{\rm d}V + \sum_i^{N_C}\mu_i\,{\rm d}N_i\\
{\rm d}G &= -S\,{\rm d}T + V\,{\rm d}p + \sum_i^{N_C}\mu_i\,{\rm d}N_i\label{eq:dG}
\end{align}
The significance of the chemical potential cannot be overstated. It is the change of each thermodynamic potential per mole of each species exchanged, but only provided the natural variables of the potential are held constant,
$$\mu_i = \left(\frac{\partial U}{\partial N_i}\right)_{S,V,\left\{N_{j\neq i}\right\}} = \left(\frac{\partial H}{\partial N_i}\right)_{S,p,\left\{N_{j\neq i}\right\}} = \left(\frac{\partial A}{\partial N_i}\right)_{T,V,\left\{N_{j\neq i}\right\}} = \left(\frac{\partial G}{\partial N_i}\right)_{T,p,\left\{N_{j\neq i}\right\}}.$$
The implication of this is that when dealing with systems exchanging mass, but constrained by two "natural" variables, the chemical potential for each species must be equal in all phases, regardless of which constrained variables are actually used (otherwise a change of mass between systems could change the value of the overall thermodynamic potential, implying it is not at a minimum). It is also the partial molar Gibbs free energy ($G=\sum_i N_i\,\mu_i$), as discussed later, and thus calculation of the Gibbs free energy can be reduced to considering the chemical potential.

The thermodynamic potentials, when expressed as a function of their natural variables, provide a complete description of the state of a thermodynamic system. For example, consider the Gibbs free energy expressed in terms of its natural variables $G\left(T,\,p,\,\left\{N_i\right\}\right)$. If we know the values of the natural variables, we can calculate $G$. The derivatives of $G$ (see Eq.\eqref{eq:dG}) also allow us to calculate the following properties,
$$S = -\left(\frac{\partial G}{\partial T}\right)_{p,\left\{N_i\right\}},\qquad V = \left(\frac{\partial G}{\partial p}\right)_{T,\left\{N_i\right\}},\qquad \mu_i = \left(\frac{\partial G}{\partial N_i}\right)_{T,p,\left\{N_{j\neq i}\right\}}.$$
If we evaluate the derivatives using the value of the natural variables, then the values of all other thermodynamic potentials can be determined using Eqs.\eqref{eq:Urule}-\eqref{eq:Grule}. E.g.,
$$H = G + T\,S = G - T\left(\frac{\partial G}{\partial T}\right)_{p,\left\{N_i\right\}}.$$
We can take further derivatives to evaluate properties such as the heat capacity $C_p = \left(\partial H/\partial T\right)_{p,\left\{N_i\right\}}$. In this way, an entire thermodynamic model can be created by only specifying the functional form of $G(T,\,p,\,\left\{N_i\right\})$. This is known as a generating function approach, and it can be applied to any thermodynamic potential in its natural variables. Most commonly, either the Gibbs or Helmholtz $A(T,\,V,\,\left\{N_i\right\})$ free energy is used as the generating function due to the convenience of their natural variables.
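The generating-function idea can be demonstrated numerically. The sketch below uses a hypothetical ideal-gas Gibbs energy with an assumed constant heat capacity and invented reference values (not taken from SimCem's database), and recovers $S$, $V$, and $H$ from finite-difference derivatives:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def G(T, p, p0=1e5, cp=29.1, T0=298.15, h0=0.0, s0=130.0):
    # Hypothetical molar Gibbs energy of an ideal gas with constant
    # heat capacity; all reference values are assumed for illustration.
    h = h0 + cp * (T - T0)
    s = s0 + cp * np.log(T / T0) - R * np.log(p / p0)
    return h - T * s

T, p = 400.0, 2e5
dT, dp = 1e-3, 1.0

S = -(G(T + dT, p) - G(T - dT, p)) / (2 * dT)  # S = -(dG/dT)_p
V = (G(T, p + dp) - G(T, p - dp)) / (2 * dp)   # V = (dG/dp)_T
H = G(T, p) + T * S                            # H = G + T*S

# Consistency checks: the ideal-gas volume and the model's enthalpy
# are recovered purely from derivatives of G.
assert abs(V - R * T / p) < 1e-8
assert abs(H - 29.1 * (T - 298.15)) < 1e-4
```

In SimCem-style models the derivatives are of course evaluated analytically, but finite differences are a useful test that an implementation is thermodynamically consistent.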

It is sometimes convenient to generate enthalpy rather than entropy from the Gibbs free energy. Consider the derivative of $G/T$ with respect to temperature,
$$\left(\frac{\partial \left(G/T\right)}{\partial T}\right)_{p,\left\{N_i\right\}} = \frac{1}{T}\left(\frac{\partial G}{\partial T}\right)_{p,\left\{N_i\right\}} - \frac{G}{T^2} = -\frac{S}{T} - \frac{H - T\,S}{T^2} = -\frac{H}{T^2}.$$
This illustrates that the volume and enthalpy can also be directly obtained from the functional form of $G(T,p)/\,T$ by taking derivatives in its natural variables (while holding the other natural state variables, including $N_i$, constant).
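The Gibbs-Helmholtz relation is easy to verify numerically; the sketch below again uses a hypothetical constant-heat-capacity ideal-gas Gibbs energy with invented reference values:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def G_over_T(T, p, p0=1e5, cp=29.1, T0=298.15, h0=1e4, s0=130.0):
    # Hypothetical G(T, p)/T for an ideal gas; reference values assumed.
    h = h0 + cp * (T - T0)
    s = s0 + cp * np.log(T / T0) - R * np.log(p / p0)
    return (h - T * s) / T

T, p, dT = 500.0, 1e5, 1e-3
dGT_dT = (G_over_T(T + dT, p) - G_over_T(T - dT, p)) / (2 * dT)
H_numeric = -T**2 * dGT_dT           # Gibbs-Helmholtz: d(G/T)/dT = -H/T^2
H_exact = 1e4 + 29.1 * (T - 298.15)  # enthalpy of the assumed model

assert abs(H_numeric - H_exact) < 1e-3
```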

Specifying the entire thermodynamic system using a single
thermodynamic potential guarantees *thermodynamic
consistency*. This is the requirement that all properties are
correctly related via their derivatives. This may be accidentally
violated if the system is specified another way, i.e., via any of
the derivatives. Many initial thermodynamic models are inconsistent
as they specify simple polynomials in temperature for both the heat
capacity and the density, but these cannot be integrated into a
consistent thermodynamic potential.

The Gibbs phase rule states that the number of independent intensive variables (AKA degrees of freedom), $F$, required to completely specify the equilibrium state of a thermodynamic system is,
$$F = C + 2 - N_P$$
where $N_P$ is the number of phases and $C$ is the number of independent components in the system. It should be noted that in general $C\neq N_C$, as components may be linked by the constraints of elemental or molecular balances. For example, in a system of pure boiling water ($N_P=2$), the total number of components is $N_C=2$ as we have $N_{H_2O,liquid}$ and $N_{H_2O,gas}$ (thanks to the phase-by-phase description); however, in this system the total amount of $H_2O$ is limited,
$$N_{H_2O,liquid} + N_{H_2O,gas} = N_{H_2O}^{0}.$$
This additional equation reduces the number of independent components by one, i.e., $C=N_C-1=1$, thus $F=1$ in this system (e.g., water boiling at $100^\circ\text{C}$ must be at 1 atm pressure, thus the temperature completely defines the system). Similar constraints appear in reactive systems.
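The rule itself is trivial to encode; a small sketch (the function name is illustrative) using the boiling-water example from the text:

```python
def degrees_of_freedom(C, N_P):
    """Gibbs phase rule: F = C + 2 - N_P."""
    return C + 2 - N_P

# Pure boiling water: N_C = 2 phase-by-phase amounts, but one H2O
# balance links them, so C = 1 independent component and N_P = 2.
assert degrees_of_freedom(C=1, N_P=2) == 1

# Single-phase binary mixture: T, p, and one mole fraction may vary.
assert degrees_of_freedom(C=2, N_P=1) == 3
```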

In SimCem, $N_P\left(N_C+2\right)$ state variables are always used to describe a system (the $N_C$ molar amounts $\left\{N_i\right\}$, the temperature, and either the pressure or volume of each phase). In general, $N_P(N_C+2)\ge C+2-N_P$, thus the state of a multi-phase and/or reactive system is typically overspecified and constraints must be added to the minimisation to eliminate the additional degrees of freedom. Other constraints will also appear due to duplication of the variables (i.e., the temperature and pressure are equal in all phases).

To determine the equilibrium state of a closed system, a thermodynamic potential, $f$, is minimised subject to some constraints. This can be expressed mathematically in the following way,
$$\vec{X}_{equil} = \underset{\vec{X}}{\rm arg\,min}\ f\left(\vec{X}\right)\qquad\text{subject to}\qquad g_k\left(\vec{X}\right)=0\ \ \forall k,\qquad N_{i,\alpha}\ge0,\qquad T_\alpha,\,p_\alpha,\,V_\alpha\ge0$$
where $\vec{X}_{equil}$ are the state variables at equilibrium, $\alpha$ denotes the variables of a particular phase, and $k$ is the index of an equality constraint which holds some function of the state variables, $g_k\left(\vec{X}\right)$, to a value of zero. The non-negativity constraints on the molar amounts and phase temperature/pressure/volume are required to prevent the numerical minimisation technique from drifting into unphysical values of the parameters. The constraints may be divided into two categories: (1) constraints on the molar amounts and (2) constraints on other thermodynamic variables.

Two systems in equilibrium must have equal temperature, pressure, and chemical potential. This should arise naturally from the maximisation of entropy; however, it is efficient to remove any variables of the minimisation wherever we can. This also appears to aid with stability of the minimisation problem.

The simplest constraint required is that the temperatures of all phases are equal at equilibrium. As all phases have temperature variables, $\left\{T_\alpha\right\}^{N_{p}}$, these are eliminated from $\vec{X}$ and set equal to a single system temperature, $T$, which is inserted into $\vec{X}$. If the equilibrium is being determined at constant temperature, then the system temperature, $T$, is simply set to the constrained value, copied to all phases, and eliminated from $\vec{X}$ entirely.

Phases in equilibrium must also have equal pressures (Laplace pressures from surface tension and other effects are not yet available). To implement this requirement, all phase pressure variables, $\left\{p_\alpha\right\}^{N_p}$, are eliminated from $\vec{X}$ and set to a single system pressure, $p$. As not all phases have pressure as an independent variable, but use volume instead, constraint functions are introduced to enforce this requirement,
$$g_{p_\alpha}\left(\vec{X}\right) = p_\alpha\left(T,\,V_\alpha,\,\left\{N_{i,\alpha}\right\}\right) - p = 0.$$
If the equilibrium is being evaluated at constant pressure, then the system pressure variable may also be eliminated from $\vec{X}$; however, the constraint functions $g_{p_\alpha}$ must remain.

Finally, if the volume is held constant, then a single constraint function is added. Any phase volumes appearing as independent variables must remain in $\vec{X}$, as it is the overall volume of the system which is constrained to $V_{target}$; the individual phase volumes remain unconstrained.

Other constraints naturally arise from the conservation of elements
and species. For example, consider the water/steam equilibrium
system, ${\rm H_2O}$ may be present in both the steam and water
phases, but the total amount is constrained to the initial amount,
$N_{H_2O}^0$, like so,
In reactive systems, the types of molecules are no longer conserved
but the elements are. For example, consider the combustion of
graphite,
The state variables are,
In this reactive system, elemental constraints are used,
Elemental constraints allow any and all possible
reactions/rearrangements of elements to minimise the free
energy. For example, the following reaction is also allowed by these
constraints,
Sometimes this is not desired, and only specific reaction paths are
fast enough to be considered (e.g. in catalysed reactions). In this
case, custom constraints will need to be implemented. At the moment,
Simcem will only automatically generate elemental and
molecular constraints.

It should be noted that both elemental and species constraints are linear functions of the molar amounts, $N_i$, and can be expressed in the following general form, where $\pi_{ik}$ is the amount of $k$ (element or molecule) within a molar amount of $N_i$, and the index $i$ runs over all species in all phases. With the constraints outlined, the general method of solution can now be explained.
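As a concrete sketch of these linear constraints (this is not SimCem's API; the species list and amounts are invented for illustration), the graphite combustion example can be written as a formula matrix $\pi$ acting on the molar amounts:

```python
# Sketch of the linear conservation constraints g_k(N) = sum_i pi_ik * N_i - b_k = 0
# for the graphite combustion example. Species and amounts are illustrative only.
import numpy as np

species = ["C(gr)", "O2", "CO", "CO2"]
elements = ["C", "O"]
# pi[i][k]: moles of element k per mole of species i
pi = np.array([[1, 0],    # C(gr)
               [0, 2],    # O2
               [1, 1],    # CO
               [1, 2]])   # CO2

N0 = np.array([1.0, 1.0, 0.0, 0.0])   # initial amounts (mol)
b = pi.T @ N0                         # conserved elemental totals

def constraints(N):
    """Residuals g_k(N); all zero when the elements are conserved."""
    return pi.T @ N - b

# A state reached by reacting 0.5 mol of C via C + 0.5 O2 -> CO still
# satisfies the elemental balance:
N = np.array([0.5, 0.75, 0.5, 0.0])
print(constraints(N))   # -> [0. 0.]
```

Any molar composition reachable by chemical reaction leaves these residuals at zero, which is exactly why elemental constraints permit all possible rearrangements of the elements.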

To actually perform the constrained minimisation, the problem is
often transformed into an unconstrained search for stationary points
using the method of Lagrange multipliers. A new function called the
Lagrangian, $F$, is constructed like so,
where $\lambda_k$ is the Lagrange multiplier for the
$k$th constraint. The purpose of this operation is to
construct a function where the constrained minima of $f$ now
occur at the (unconstrained) *extrema* of $F$. For
example, it is easy to see that the derivatives of $F$ with
respect to each Lagrange multiplier are zero if the constraints are
satisfied,
Thus we are searching for a point where $\partial F/\partial
\lambda_k=0$ to ensure the constraints are satisfied. Taking a
derivative of the Lagrangian with respect to the state variables,
$\vec{X}$, yields the following,
or in vector notation,
Let us consider the point where $\partial F/\partial
\vec{X}=\nabla_\vec{X}\,F=0$; at this point the following must be
true,
This makes it clear that at the point where $\partial F/\partial
\vec{X}=0$, the downhill direction of $f$ can be decomposed into
directions where the constraint functions also change
($\nabla_\vec{X}\,g_k$ are basis vectors of $\nabla_\vec{X}\,f$, and
$-\lambda_k$ are the coordinate values in this basis). Thus,
attempting to lower $f$ any further will cause the constraint values
to move away from zero if they are already satisfied at this point
(as guaranteed by $\frac{\partial F}{\partial \lambda_k}=0$). This
point is then a local minimum which satisfies the constraints.

The strategy is now to find a stationary point of the Lagrangian. This is a stationary point, and not necessarily a maximum or minimum, as no statements on the second derivatives of the Lagrangian have been made. In fact, most stationary points of the Lagrangian turn out to be saddle points; therefore, direct minimisation of the Lagrangian is not suitable, and instead a root search for the first derivative of the Lagrangian may be attempted. Even this approach may converge to a stationary point of $F$ which is not a minimum but a maximum of $f$. Implementation of a suitable routine is an art, but fortunately general algorithms are available which perform this analysis, such as NLopt, Opt++, and SLSQP. These require knowledge of the derivatives of the objective function and the constraints.
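As a minimal sketch of how such an algorithm is driven (this is not SimCem's implementation), the following uses SciPy's SLSQP routine on a toy ideal-gas isomerisation $A \rightleftharpoons B$ at fixed $T$ and $p$; the standard chemical potentials are invented so that the analytic equilibrium composition is $x_A = 2/3$:

```python
# Constrained Gibbs minimisation sketch using SLSQP (one of the general-purpose
# algorithms mentioned above). The mu0 values are hypothetical, chosen so that
# the equilibrium constant K = exp(-ln 2) = 1/2, giving x_A = 2/3.
import numpy as np
from scipy.optimize import minimize

R, T = 8.314, 500.0
mu0 = np.array([0.0, R * T * np.log(2.0)])   # standard chemical potentials (J/mol)

def gibbs(N):
    x = N / N.sum()
    return float(N @ (mu0 + R * T * np.log(x)))

def gibbs_grad(N):
    x = N / N.sum()
    return mu0 + R * T * np.log(x)           # dG/dN_i = mu_i (Gibbs-Duhem)

cons = {"type": "eq", "fun": lambda N: N.sum() - 1.0}   # total moles conserved
res = minimize(gibbs, x0=[0.5, 0.5], jac=gibbs_grad, method="SLSQP",
               bounds=[(1e-12, None)] * 2, constraints=[cons])
print(res.x)   # analytic answer: x_A = 2/3, x_B = 1/3
```

Note that the gradient passed to the algorithm is just the chemical potential, as the mixing-term derivatives cancel via the Gibbs-Duhem relation discussed later.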

To determine the extrema of the Lagrangian, the minimisation algorithms require its derivatives. As illustrated above, the derivatives with respect to the Lagrange multipliers are given by the constraint functions themselves, thus no additional calculation is required there. Ignoring any contribution from the interfaces between phases, the thermodynamic potential of the overall system can be broken down into the contribution from each phase, where $f_\alpha$ is the contribution arising from a single phase, $\alpha$. This allows us to rewrite the derivative of the Lagrangian with respect to the state variables as follows, It should be noted that the following is true, Thus, each individual phase's derivatives can be considered separately, as only the $\partial f_\alpha/\partial \vec{X}_\alpha$ terms are non-zero. For the constraint functions, the only non-zero derivative of the element/molecular constraint function given in Eq.\eqref{eq:genMolConstraint} is as follows,

Many of the required derivatives depend on the state variable set used to describe the phase. The following sections discuss how these properties are calculated for the two types of state variable set.

A large number of thermodynamic derivatives are required to implement the minimisation. This section presents a number of useful expressions and approaches which allow us to interrelate various derivatives to reduce the number which must be implemented to a minimal amount.

First, generalised partial properties are introduced to demonstrate that all extensive properties can be expressed in terms of their derivatives in their extensive variables. The Bridgman tables provide a convenient method of expressing any derivative of the thermodynamic potentials or their variables in terms of just three "material derivatives". A "generalised" product rule is then introduced to demonstrate how derivatives may be interchanged, particularly for the triple product rule. Finally, a key relationship between the partial molar and partial "volar" properties is derived.

As demonstrated in the section on the solution for the internal energy, functions which are first-order homogeneous in their variables can be immediately expressed in terms of their derivatives. This is useful as it provides a relationship between the internal energy and the other thermodynamic potentials; however, a generalised rule would eliminate the need to generate expressions for thermodynamic properties if their derivatives are available.

Unfortunately, the two variable sets considered here contain both
homogeneous first-order ($\left\{N_i\right\}$, $V$) and
inhomogeneous ($T$, $p$) variables. In this case, Euler's theorem
does not extend to expressions which are functions of both; however,
a similar solution can be derived for these expressions provided the
inhomogeneous variables are restricted to intensive
properties *and* the extensive variables together uniquely
specify the total size of the system.

The derivation presented here is a generalisation of the proof for
partial *molar* properties which you can find in any
thermodynamic text (e.g., Smith, van Ness, and Abbott, Sec.11.2, 7th
Ed). Consider an extensive thermodynamic property, $M$, which is a
function of extensive $\left\{X_i\right\}$ and intensive
$\left\{y_i\right\}$ variables. The total differential is as
follows,
The extensive property $M$ can be converted to a corresponding
intensive property, $m$, by dividing by the system amount, i.e.,
$M=m\,N$. If the extensive properties are held constant and *
they are sufficient to determine the total system amount*, $N$,
then the system size, $N$, is also constant and may be factored out
of $M$ in the intensive partial differential terms.
In addition, ${\rm d} M={\rm d} (N\,m) = m\,{\rm d} N+N\,{\rm
d}m$ and ${\rm d} X_i={\rm d} (N\,x_i) = x_i\,{\rm d} N+N\,{\rm
d}x_i$. Inserting these and factoring out the terms in $N$ and
${\rm d}N$ yields,
As ${\rm d}N$ and $N$ can vary independently, this equation is only
satisfied if the terms in parentheses are each zero. Multiplying the
first term in parentheses by $N$ and setting it equal to zero yields
the required identity,
where $\bar{\bar{m}}_i=\left(\partial M/\partial
X_i\right)_{\left\{X_{j\neq i}\right\},\left\{y_{k}\right\}}$ is a
partial property.
This is a generalised solution for an extensive property in terms of
its partial properties which are its derivatives in its extensive
variables and is the primary objective of our derivation; however,
an additional important equation for the partial properties can be
easily obtained. The derivative of Eq.\eqref{eq:partialProperty}
(first divided by $N$) is,
This can be used to eliminate ${\rm d}m$ from the right term of
Eq.\eqref{eq:pmolarintermediate} which is also set equal to zero
(and multiplied by N) to give,
This is the generalised **Gibbs-Duhem** equation which
interrelates the intensive properties of the system to the partial
properties.

The most well known applications of Eqs.\eqref{eq:partialProperty}
and \eqref{eq:GibbsDuhem} are for the partial *molar*
properties when $p$, $T$, and $\left\{N_i\right\}$ are the state
variables. In this case Eq.\eqref{eq:partialProperty} is
where $\bar{m}_i=\left(\partial M/\partial
N_i\right)_{p,T,\left\{N_{j\neq i}\right\}}$ is the partial molar
property. The most important partial molar property is the chemical
potential,
and it has already been proven that
Eq.\eqref{eq:partialMolarProperty} applies in this case, i.e.,
Eq.\eqref{eq:Grule} gives $G=\sum_iN_i\,\mu_i$. The corresponding
Gibbs-Duhem equation for the chemical potential in
$T,p,\left\{N_i\right\}$ is the most well-known form,

As derivatives are distributive, and $T$ and $p$ are held constant, the partial molar properties for the thermodynamic potentials satisfy similar relationships to the original potentials (Eqs.\eqref{eq:Urule}-\eqref{eq:Grule}). Thus, the partial molar properties not only provide a derivative in the molar amounts, but also completely describe all thermodynamic potentials and many extensive quantities (such as $V$). Simcem therefore only requires the implementation of expressions for three partial molar properties ($\mu_i$, $\bar{v}_i$, $\bar{h}_i$) to specify all the thermodynamic potentials and their first derivative in the molar amounts for the $\left\{T,p,\left\{N_i\right\}\right\}$ variable set.
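The identity $M = \sum_i N_i\,\bar{m}_i$ is easy to verify numerically. The sketch below (with invented standard potentials) computes the chemical potentials of an ideal mixture by central finite differences of an extensive Gibbs function and confirms that they sum back to $G$:

```python
# Numerical check of M = sum_i N_i * m_bar_i for an extensive ideal-mixture
# Gibbs energy. The mu0 values are hypothetical standard potentials.
import numpy as np

R, T = 8.314, 298.15
mu0 = np.array([-10e3, -5e3, 2e3])   # J/mol, illustrative only

def G(N):
    x = N / N.sum()
    return float(N @ (mu0 + R * T * np.log(x)))

N = np.array([1.0, 2.0, 0.5])
h = 1e-6
# Central-difference estimate of mu_i = (dG/dN_i) at constant T, p:
mu = np.array([(G(N + h * e) - G(N - h * e)) / (2 * h) for e in np.eye(3)])
print(G(N), N @ mu)   # the two agree: G = sum_i N_i * mu_i
```

This works because $G$ is first-order homogeneous in the molar amounts, which is precisely the condition the derivation above relies on.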

For the $\left\{T,V,\left\{N_i\right\}\right\}$ variable set, Eq.\eqref{eq:partialProperty} is, where $\breve{m}_i = \left(\partial M/\partial N_i\right)_{T,V,\left\{N_{j\neq i}\right\}}$ is deemed here to be a partial "volar" quantity. A more useful form of this expression is Eq.\eqref{eq:molartovolarproperty} which is derived in the later section on the transformation between $\bar{m}_i$ and $\breve{m}_i$.

There are a number of interesting properties which are derivatives of the thermodynamic potentials. For example, consider the isobaric heat capacity,

Using partial molar properties allows us to illustrate a complication in the calculation of these material properties for multi-phase systems. For example, expressing the (extensive) heat capacity in terms of the partial molar (AKA specific) heat capacity using Eq.\eqref{eq:partialMolarProperty} yields the following expression, where $\bar{c}_{p,i}=\left(\partial \bar{h}_i/\partial T\right)_{p}$ is the partial molar isobaric heat capacity. The term on the LHS of Eq.\eqref{eq:mixCp} arises from changes to the enthalpy caused by species transferring in and out of the system as the equilibrium state changes. This results in additional contributions to the apparent heat capacity above the partial molar isobaric heat capacity. For example, when a single-component fluid is at its boiling point, the apparent heat capacity of the overall system is infinite, as $\partial N_i/\partial T$ is infinite due to the discontinuous change from liquid to vapour causing the instantaneous transfer of molecules from one phase to another.

To complement the "equilibrium" thermodynamic $C_p$ above, it is convenient to define "frozen" thermodynamic derivatives where there are no molar fluxes, i.e., the "frozen" isobaric heat capacity. The "frozen" properties are required while calculating the gradient of thermodynamic potentials during minimisation, and arise as all molar quantities are held constant while these derivatives are taken.

The $C_p$ is just one material property; however, there are many
other thermodynamic derivatives which may be
calculated. Fortunately,
the Bridgman
tables are a convenient method to express any material property
as a function of just three key material derivatives. The heat
capacity is one, and the other two are the isothermal ($\beta_T$) and
isobaric ($\alpha$) expansivities,
where $\bar{\alpha}_i\,\bar{v}_i= \left(\partial
\bar{v}_{i}/\partial T\right)_{p,\left\{N_j\right\}}$ and
$\bar{\beta}_{T,i}\,\bar{v}_i=-\left(\partial \bar{v}_i/\partial
p\right)_{T,\left\{N_j\right\}}$. Again, "frozen" material
derivatives are available,
The terms $\left(\partial N_i/\partial T\right)_{p,\left\{N_{j\neq
i}\right\}}$ and $\left(\partial N_i/\partial
p\right)_{T,\left\{N_{j\neq i}\right\}}$ which appear in the
material properties must be determined from the solution to the
thermodynamic minimisation problem. They quantify how
the *equilibrium* molar amounts change for a variation in
temperature and pressure, and thus must account for the constraints
placed on the equilibrium state and the movement of the minima.

The Bridgman table approach decomposes every thermodynamic derivative into a look-up table for the numerator and denominator expressed in terms of the three material derivatives, $C_p$, $\alpha$, and $\beta_T$. For example, consider the following "unnatural" derivative, Thus, to generate any derivative required for minimisation, only the three "material" derivatives and three partial molar properties are required.
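A minimal look-up-table implementation of this scheme is sketched below. The entries are the standard Bridgman table expressions for a handful of properties, written in terms of $C_p$, $\alpha$, and $\beta_T$; any derivative $(\partial X/\partial Y)_Z$ is then the ratio of the table entries $(\partial X)_Z/(\partial Y)_Z$. The ideal gas ($\alpha = 1/T$) is used as a check, for which $(\partial H/\partial p)_T = V(1 - T\alpha) = 0$:

```python
# Sketch of a dictionary-based Bridgman table. Each entry (dX)_Z is a function
# of the three material derivatives plus p, T, V, S. Only a subset of the full
# table is reproduced here.
def bridgman(state):
    p, T, V, S = state["p"], state["T"], state["V"], state["S"]
    Cp, a, bT = state["Cp"], state["alpha"], state["betaT"]
    return {
        ("T", "p"): 1.0,          ("p", "T"): -1.0,
        ("V", "p"): a * V,        ("p", "V"): -a * V,
        ("S", "p"): Cp / T,       ("p", "S"): -Cp / T,
        ("V", "T"): -bT * V,      ("T", "V"): bT * V,
        ("S", "T"): a * V,        ("T", "S"): -a * V,
        ("H", "p"): Cp,
        ("H", "T"): -V + T * a * V,
        ("G", "p"): -S,
        ("G", "T"): -V,
    }

def deriv(state, X, Y, Z):
    """(dX/dY) at constant Z, from the ratio of Bridgman table entries."""
    d = bridgman(state)
    return d[(X, Z)] / d[(Y, Z)]

# Ideal gas: alpha = 1/T, betaT = 1/p, so (dH/dp)_T = V*(1 - T*alpha) ~ 0.
n, R, T, p = 1.0, 8.314, 300.0, 1e5
state = dict(p=p, T=T, V=n * R * T / p, S=200.0,
             Cp=5 * n * R / 2, alpha=1 / T, betaT=1 / p)
print(deriv(state, "H", "p", "T"))   # ~0 for an ideal gas
```

The same two-line `deriv` routine then serves every "unnatural" derivative, which is the main appeal of the Bridgman approach for an implementation.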

This section is almost directly copied from a math.StackExchange.com post.

Consider any expression for a thermodynamic property, $M$, written in terms of the state variables, $M=M\left(\vec{X}\right)$. It is straightforward to transform this function into an implicit function $F=F\left(M,\,\vec{X}\right)=M(\vec{X}) - M=0$, which helps to illustrate that $M$ can be treated as a variable on an equal basis with $\vec{X}$. To allow a uniform representation, the arguments of $F$ are relabelled such that $F(x_1,x_2,\ldots,x_N)=0$. As the function $F$ remains at zero, holding all variables except two constant, taking the total derivative, and setting ${\rm d}F=0$ yields the following identity, where the partial derivatives implicitly indicate that all other variables are held constant. This rule can be applied repeatedly and the terms on the RHS eliminated, provided the variables loop back on themselves. For example, The first value, $n=2$, yields the well-known expression or, more familiarly, Finally, $n=3$ yields the triple product rule, This rule has wide application but is particularly attractive when derivatives hold an "awkward" thermodynamic property constant. For example, consider the following case The LHS is difficult to directly calculate in the state variables chosen here; however, the RHS arising from the triple product rule is in terms of natural derivatives of $G$ which are straightforward to calculate. The triple product rule is used extensively in the following sections to express complex expressions in terms of natural derivatives.
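The triple product rule is easy to check on the ideal gas, where all three derivatives have closed forms. The following sketch verifies $\left(\partial p/\partial T\right)_V\left(\partial T/\partial V\right)_p\left(\partial V/\partial p\right)_T=-1$:

```python
# Numerical check of the triple product rule on the ideal gas p = nRT/V.
n, R = 2.0, 8.314
T, V = 350.0, 0.05
p = n * R * T / V

dp_dT_V = n * R / V              # from p = nRT/V at constant V
dT_dV_p = p / (n * R)            # from T = pV/(nR) at constant p
dV_dp_T = -n * R * T / p**2      # from V = nRT/p at constant T

product = dp_dT_V * dT_dV_p * dV_dp_T
print(product)                   # -1 (up to rounding)
```

Note the counter-intuitive minus sign: the product of the three "cyclic" derivatives is $-1$, not $+1$, which is exactly what makes naive cancellation of the $\partial$ symbols invalid.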

Consider a property, $M$, which is a function of four variables $x_1,x_2,x_3,x_4$. The total differential is, Holding two variables constant and taking a derivative with respect to a third yields, Three relabellings of this expression can be used to interrelate two derivatives which differ only in the one variable which is held constant, where the final term in parentheses is cancelled to zero using the triple product rule. This equation is particularly useful for changing between partial quantities while pressure or volume is held constant. For example, or, in the notation used so far, This is a more useful form of Eq.\eqref{eq:partialVolarProperty}. The partial molar volume is inconvenient to derive directly when volume is an explicit variable; however, it may be expressed more conveniently using the triple product rule, This allows us to obtain partial molar properties conveniently when working with volume as a state variable.

In this section, the calculation of the properties required for minimisation with phases specified by the following set of variables is considered, In this particular variable set, the constraint derivatives required to implement the derivative of the Lagrangian are as follows, The thermodynamic potential derivatives required to specify the derivatives of the Lagrangian (Eq.\eqref{eq:Fderiv}), as generated from Eqs.\eqref{eq:GradEqsStart}-\eqref{eq:GradEqsEnd}, are where $f=\left\{H,G,-S,U,A\right\}$. These derivatives are easily expressed using the Bridgman tables in terms of the three standard material derivatives, and the results are given in the table below.

$Y_1$ | $Y_2$ | $f$ | $\left(\frac{\partial f}{\partial N_i}\right)_{T,p,\left\{N_{j\neq i}\right\}}$ | $\left(\frac{\partial f}{\partial T}\right)_{p,\left\{N_j\right\}}$ | $\left(\frac{\partial f}{\partial p}\right)_{T,\left\{N_j\right\}}$ |
---|---|---|---|---|---|
$p$ | $T$ | $G$ | $\mu_i$ | $-S$ | $V$ |
$V$ | $T$ | $A$ | $\bar{a}_i$ | $-(S+p\,\alpha\,V)$ | $p\left(\beta_{T}\,V\right)_{\left\{N_i\right\}}$ |
$p$ | $S$ | $H$ | $\bar{h}_i$ | $C_{p,\left\{N_j\right\}}$ | $V(1-T\,\alpha)$ |
$p$, $V$ | $H$, $U$ | $-S$ | $-\bar{s}_i$ | $-T^{-1}\,C_{p,\left\{N_j\right\}}$ | $\left(\alpha\,V\right)_{\left\{N_i\right\}}$ |
$V$ | $S$ | $U$ | $\bar{u}_i$ | $C_{p,\left\{N_j\right\}}-p\left(\alpha\,V\right)_{\left\{N_i\right\}}$ | $p\left(\beta_{T}\,V\right)_{\left\{N_i\right\}}-T\,\left(\alpha\,V\right)_{\left\{N_i\right\}}$ |

In summary, a model using this variable set must provide implementations of $\mu_i$, $\bar{v}_i$, $\bar{s}_i$, $\bar{\alpha}_i$, $\bar{\beta}_{T,i}$, and $\bar{c}_{p,i}$. These are all straightforward to obtain by taking derivatives of a Gibbs free energy function in its natural variables, or by integration and differentiation if a mechanical equation of state, $V\left(T,p,\left\{N_i\right\}\right)$, is available.

All other partial molar properties are obtained using Eqs.\eqref{eq:partialmolarrelationstart}-\eqref{eq:partialmolarrelationend} expressed in the following form. Most other relevant thermodynamic properties are calculated using the Bridgman tables.

The ideal gas model is not only a good approximation for gases at low pressures, but it is also a key reference "state" for more complex equations of state. A thermodynamic path can be constructed to the ideal gas state at low pressures for most models, thus an "ideal-gas" contribution can be factored out from these models.

The ideal gas chemical potential is defined as follows, where $p_0$ is the reference pressure at which the temperature-dependent term, $\mu^{(ig)}_{i}(T)$, is measured and $N_\alpha$ is the total moles of the phase $\alpha$.

Using the chemical potential as a generating function, the minimal set of partial molar properties is derived below, where the partial molar heat capacity has been implicitly defined as $C_{p,\left\{N_i\right\}}^{(ig)} = \sum_i N_i\,\bar{c}_{p,i}^{(ig)}$.

The model is not completely specified until the pure function of temperature, $\mu_{i}^{(ig)}(T)=h_{i}^{(ig)}(T) - T\,s_{i}^{(ig)}(T)$, is specified, and this is discussed in the following section.

This is not a proof of how mixing entropy arises (which is discussed below), but a demonstration that mixing entropy is consistent with Dalton's law holding true. Consider a single-component ideal gas, for which we have, If Dalton's law is true, the pressure of a single species is reduced when mixed, as it only exerts a partial pressure ($p_i=x_i\,p$). Assuming the total pressure and temperature remain constant during mixing, As $x\le1$, $g_{mixed} - g_{pure}\le0$, which is consistent with two ideal gases being mixed at equilibrium under constant $T,p$. Thus, Dalton's law is consistent with mixing entropy.
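The sign of the mixing term is easy to see numerically. The sketch below evaluates the ideal mixing contribution $\Delta g_{mix} = R\,T\sum_i x_i\ln x_i$, which is negative for any real mixture and zero for a pure gas:

```python
# Ideal mixing contribution per mole: Delta g_mix = R*T*sum_i x_i ln(x_i).
# Negative for any mixture, zero for a pure gas, consistent with Dalton's law.
import math

R, T = 8.314, 298.15

def delta_g_mix(x):
    return R * T * sum(xi * math.log(xi) for xi in x if xi > 0)

for x in ([0.5, 0.5], [0.9, 0.1], [1.0]):
    print(x, delta_g_mix(x))   # most negative for the equimolar mixture
```

The equimolar composition gives the largest (most negative) mixing contribution, which is why mixing entropy always resists the complete separation of species.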

First, it is noted that entropy is extensive. The entropy of
two separate systems is the sum of each,
Fundamentally, entropy is a measure of the number of states of
a system; however, the number of states
has *multiplicative* scaling. For example, consider a
collection of light switches. A single light switch has two
states but a collection of $N$ light switches has $2^N$
possible states. The number of states is clearly
multiplicative and yet entropy is additive. The entropy must
therefore be the logarithm of the number of states as $\ln 2^N
= N \ln 2$. This line of argument leads to what is known as
Boltzmann's entropy,
where $W$ is the number of states and $R$ makes the connection
to the thermal scale.

Now consider a binary mixture where each location of the system is on a lattice. Each "site" of the lattice may be occupied by one of two end-members/molecules, $N_1$ or $N_2$. The total number of available sites is $N=N_1+N_2$ (the lattice is full). As each end-member/molecule is indistinguishable from other molecules of the same type, the number of ways these molecules can be arranged is, The entropy is then, which is simplified using Stirling's approximation, $\lim_{n\to\infty}\ln(n!)=n\,\ln(n)-n$. For a gas, consider that the number of sites is actually the number of molecules in the system and all we are doing is allowing the gases to mix (each species may substitute for another species provided the overall molar balance is maintained); then it is clear that: In comparison with the ideal gas properties, this is the entropy of mixing as derived from Dalton's law!
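The convergence of the exact Boltzmann counting to the ideal mixing form can be demonstrated directly. The sketch below compares $\ln W = \ln\left(N!/(N_1!\,N_2!)\right)$ (evaluated via log-gamma to avoid overflow) against the Stirling result $-N\left(x_1\ln x_1 + x_2\ln x_2\right)$:

```python
# Exact lattice counting vs the Stirling/ideal-mixing limit. The relative
# error shrinks as the number of sites N grows.
import math

def lnW(N1, N2):
    """Exact ln of the number of arrangements, ln(N! / (N1! N2!))."""
    N = N1 + N2
    return math.lgamma(N + 1) - math.lgamma(N1 + 1) - math.lgamma(N2 + 1)

def stirling(N1, N2):
    """Stirling-approximation result: -N*(x1 ln x1 + x2 ln x2)."""
    N = N1 + N2
    x1, x2 = N1 / N, N2 / N
    return -N * (x1 * math.log(x1) + x2 * math.log(x2))

for N1, N2 in [(4, 6), (400, 600), (40000, 60000)]:
    exact, approx = lnW(N1, N2), stirling(N1, N2)
    print(N1 + N2, exact, approx, abs(exact - approx) / exact)
```

For thermodynamic particle numbers ($N \sim 10^{23}$) the two are indistinguishable, which justifies using the ideal mixing entropy in the chemical potential.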

The term $\mu_{i}^{(ig)}(T)$ is a function of temperature which is directly related to the thermodynamic properties of a single isolated molecule (molecules of ideal gases do not interact). There are many theoretical approaches to expressing these terms for solids and simple models (i.e., rotors), and quantum mechanics can even directly calculate values for real molecules. However these results are obtained, this term is typically parameterised using polynomial equations.

The most common parameterisation is to use a polynomial for the heat capacity. This is common to the NIST, NASA CEA, ChemKin, and many other thermodynamic databases. A heat capacity polynomial can be related to the enthalpy and entropy through two additional constants, To demonstrate that this is correct, consider a closed system, At constant pressure the following expression holds, Thus, where $T_{ref}$ is a reference temperature. Performing this integration for the polynomial of Eq.\eqref{eq:cppoly} yields, where the two additional constants $\bar{h}_i^0$ and $\bar{s}_i^0$ must be determined through the construction of a thermodynamic path to a known value of the entropy and enthalpy. This is usually through heats of reaction, solution, and equilibria such as vapour pressure, linking to the elements at standard conditions.
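The integration can be sketched for the simplest case of a linear heat-capacity polynomial, $c_p(T) = a + b\,T$; the coefficients and reference constants below are invented for illustration, not database values:

```python
# Enthalpy and entropy from a heat-capacity polynomial cp(T) = a + b*T:
#   h(T) = h0 + int_{Tref}^{T} cp dT,   s(T) = s0 + int_{Tref}^{T} (cp/T) dT.
# All coefficients here are hypothetical.
import math

a, b = 29.0, 0.004          # J/mol/K and J/mol/K^2
h0, s0, Tref = -50e3, 120.0, 298.15

def cp(T):
    return a + b * T

def h(T):
    return h0 + a * (T - Tref) + b / 2 * (T**2 - Tref**2)

def s(T):
    return s0 + a * math.log(T / Tref) + b * (T - Tref)

def mu_ig(T):
    """The pure temperature-dependent term mu_i^(ig)(T) = h(T) - T*s(T)."""
    return h(T) - T * s(T)

print(h(Tref), s(Tref))     # recover h0 and s0 at the reference temperature
```

Higher-order polynomials (as in the NASA CEA and Shomate forms) integrate in exactly the same way, term by term, with only the two constants $\bar{h}_i^0$ and $\bar{s}_i^0$ carrying the absolute reference information.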

To allow this data to be represented as a single function in Simcem, it is expressed directly as $\mu_i^{(ig)}(T)$ using the following transformation, where,

The most comprehensive (and most importantly, free!) source of ideal gas heat capacities and constants is the NASA CEA database, but additional constants are widely available. When collecting $c_p$ data, a distinction must be made between the calculated ideal gas properties and the measured data. The measured data may include additional contributions from the interactions of the molecules and thus must be fitted with the full chemical potential model and not just the ideal gas model.

Comparison with Eq.\eqref{eq:Mu_shomate_form} yields the definitions of the coefficients.

Another useful reference state is the incompressible phase. This model is used to describe liquid and solid phases when an equation of state describing all phases at once is unavailable. The generating chemical potential is as follows, where $\mu^{(ip)}_{i}(T)$ is again a temperature-dependent term and $\bar{v}^0_{i}$ is the (constant) partial molar volume of the incompressible species $i$. Using the expression for the chemical potential as a generating function, all required properties are recovered, As $\alpha$ and $\beta_T$ are zero, some thermodynamic properties are not well specified but can be determined by other means; for example, $\bar{c}_p^{(ip)}=\bar{c}_v^{(ip)}$. In Simcem, ideal solids still contain a mixing entropy term, which is included to allow ideal solid solutions to be constructed and used as a base class for more complex models.

In this section, the calculation of the properties required for minimisation with phases specified by the following set of variables is considered, In this variable set, the non-zero derivatives required to compute the constraint functions are as follows, These derivatives are convenient to determine in this variable set and also indirectly specify $\alpha$ and $\beta_T$, which are two of the three material derivatives required to use the Bridgman tables. The third molar derivative is also convenient to determine and provides a direct path to the molar volume as given in Eq.\eqref{eq:TVpartialmolarvolume}, reproduced below, Thus, these three derivatives must be implemented by all models and are used to derive the other properties. The derivatives of the potentials are as follows, where $f=\left\{H,G,-S,U,A\right\}$. The derivatives for each potential are specified below in terms of the most convenient properties/derivatives for this variable set.

$Y_1$ | $Y_2$ | $f$ | $\left(\frac{\partial f}{\partial N_i}\right)_{T,V,\left\{N_{j\neq i}\right\}}$ | $\left(\frac{\partial f}{\partial T}\right)_{V,\left\{N_j\right\}}$ | $\left(\frac{\partial f}{\partial V}\right)_{T,\left\{N_j\right\}}$ |
---|---|---|---|---|---|
$p$ | $T$ | $G$ | $\breve{g}_i$ | $V\,\left(\frac{\partial p}{\partial T}\right)_{V,\left\{N_i \right\}} -S$ | $V \left(\frac{\partial p}{\partial V}\right)_{T,\left\{N_i\right\}}$ |
$V$ | $T$ | $A$ | $\breve{a}_i$ | $-S$ | $-p$ |
$p$ | $S$ | $H$ | $\breve{h}_i$ | $C_{V,\left\{N_i\right\}}+V\,\left(\frac{\partial p}{\partial T}\right)_{V, \left\{N_i\right\}}$ | $V\left(\frac{\partial p}{\partial V}\right)_{T,\left\{N_i\right\}}+T\left(\frac{\partial p}{\partial T}\right)_{V,\left\{N_i\right\}}$ |
$p$, $V$ | $H$, $U$ | $-S$ | $-\breve{s}_i$ | $-\frac{C_{V,\left\{N_i\right\}}}{T}$ | $-\left(\frac{\partial p}{\partial T}\right)_{V,\left\{N_i\right\}}$ |
$V$ | $S$ | $U$ | $\breve{u}_i$ | $C_{V,\left\{N_i\right\}}$ | $T\,\left(\frac{\partial p}{\partial T}\right)_{V,\left\{N_i\right\}} - p$ |

where $C_V = \left(\partial U/\partial T\right)_V$, which is related to the isobaric heat capacity using the following relationship, In summary, models should provide equations to calculate $p$, $\breve{a}_i$, $\breve{u}_i$, $C_{V,\left\{N_i\right\}}$, $\left(\partial p/\partial V\right)_{T,\left\{N_i\right\}}$, $\left(\partial p/\partial T\right)_{V,\left\{N_i\right\}}$, and $\left(\partial p/\partial N_i\right)_{T,V,\left\{N_{j\neq i}\right\}}$. These derivatives are used to compute the frozen $\alpha$ and $\beta_T$ values using Eqs.\eqref{eq:VMatDeriv1} and \eqref{eq:VMatDeriv2}. The third material derivative, $C_{p,\left\{N_i\right\}}$, is then obtained from Eq.\eqref{eq:CpCv}. The partial molar volume is calculated from Eq.\eqref{eq:TVpartialmolarvolume}. The partial volar/molar quantities are related by Eq.\eqref{eq:molartovolarproperty}; however, the partial volar Helmholtz free energy is equal to the chemical potential, just like any molar derivative of a potential when its natural variables are held constant (see Eq.\eqref{eq:ChemPotDefinition}). Thus, the partial volar/molar Gibbs and Helmholtz free energies are closely related, All other required partial thermodynamic potential properties are derived using straightforward applications of Eq.\eqref{eq:molartovolarproperty} and the partial molar relations of Eqs.\eqref{eq:partialmolarrelationstart}-\eqref{eq:partialmolarrelationend}, the results of which are given below,
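The $C_p$-$C_V$ relationship can be checked directly from the two pressure derivatives that a pressure-explicit model must supply. Using the standard identity $C_p - C_V = -T\left(\partial p/\partial T\right)_V^2 / \left(\partial p/\partial V\right)_T$ on the ideal gas recovers the familiar result $C_p - C_V = nR$:

```python
# Check of Cp - Cv = -T*(dp/dT)_V^2 / (dp/dV)_T for the ideal gas p = nRT/V,
# using exactly the pressure derivatives a pressure-explicit model provides.
n, R = 1.0, 8.314
T, V = 400.0, 0.02

dp_dT_V = n * R / V                 # (dp/dT) at constant V
dp_dV_T = -n * R * T / V**2         # (dp/dV) at constant T

cp_minus_cv = -T * dp_dT_V**2 / dp_dV_T
print(cp_minus_cv, n * R)           # both equal nR for an ideal gas
```

For a non-ideal pressure-explicit model, the same two derivatives feed the identical formula, so no extra implementation work is needed to recover $C_p$ from $C_V$.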

As experimental ideal-gas data is usually presented in terms of isobaric polynomials, the ideal gas functions here are based on the previous form of the ideal gas model given in Eq.\eqref{eq:idealgasmuTP}. Transforming the pressure variable to the phase volume yields the following definitions of the chemical potential and partial volar Helmholtz free energy, The pressure derivatives are obtained from the well-known relation $p=N\,R\,T/V$, Finally, the internal energy and heat capacity are as follows,

During the minimisation, the state variables are constrained to remain zero or positive, i.e., $T\in[0,\infty),\,p\in[0,\infty),\,N_i\in[0,\infty)$. Algorithmically, it is difficult to restrict the optimisation to remain in the positive space of these variables. The derivative of the thermodynamic potential may indicate that these variables should decrease, and an "overenthusiastic" numerical step may lead to them decreasing below zero. One approach is to move the optimisation back along the initial step (back-tracking) into the valid domain whenever it leaves, but this approach can slow the progression towards minima. It is more stable to transform the system to remove these constraints.

To ensure that the variables remain positive throughout the
optimisation, a *bijection* can be used to transform the
original bounded variable onto an unbounded one, i.e.,
$(0,+\infty)\to(-\infty,+\infty)$. One example of such a bijection
is the exponential mapping,
This bijection has the appropriate mapping of $y(0)=-\infty$ and
$y(+\infty) = +\infty$, and each value in-between has a
corresponding transformed value. Thus, if the variables of the
optimisation are $\ln T$ and $\ln p$ instead of $T$ and $p$,
additional constraints are not needed to ensure $T$ and $p$ remain
positive.

It should be noted that zero is mapped to $-\infty$ in this bijection which implies that in the finite region of $y(x)$ it is "impossible" to reach $x=0$. Zero values of temperature and/or pressure are uninteresting and physically inaccessible thus there is no issue here with this exclusion, but this is not the case for the molar amounts. If a zero value of a molar amount is possible, then using logarithmic coordinates will cause the minimisation to "diverge" and eventually overflow (it will not in general reach negative infinity).

Molar amounts can clearly reduce to zero, i.e., water can completely convert to steam on heating; on the other hand, some processes such as reactions never reach completion. This behaviour depends on the mixing contribution to the chemical potential, Complete reaction or removal of a species is impossible when mixing is present, as $\lim_{N_{i,l}\to0}\Delta\mu_{i,l}^{ideal\ mixing}=+\infty$; it is always favourable to retain some degree of mixing if physically possible. If there is no mixing, then $N_{i,l}/N_l=1$ regardless of the value of $N_{i,l}$, thus there is no divergence in the chemical potential as $N_{i,l}\to0$.

Thus, logarithmic molar amounts are used whenever mixing is present, but in pure stoichiometric phases the original linear molar amounts are used. If the initial molar amount is zero (i.e., reaction of pure reactants, zero products), then these quantities are initialised to the smallest non-zero (non-denormalised) amount ($<10^{-307}$). Interestingly, $\ln(10^{-307})\approx-707$, which demonstrates the compression of the logarithmic bijection and its ability to handle a wider range of values in the same representation (albeit with reduced precision).

While working in logarithmic coordinates, any derivatives will also need to be transformed, but this is straightforward for this bijection, In summary, logarithmic temperature, pressure, and molar amounts are always used, except for the molar amounts of pure phases where linear values are used. Linear molar amounts have positive bound constraints and logarithmic values are constrained to give a non-zero double-precision representation (these are imposed via the optimisation algorithm, typically using back-tracking), i.e., where $\left\{N_i\right\}$ are molar amounts for pure phases and $\left\{\ln N_j\right\}$ are logarithmic amounts for mixed phases.
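The coordinate transform and its gradient rule can be sketched on a toy objective (this is not SimCem's optimiser; the objective and the use of SciPy's BFGS are illustrative). With $y = \ln x$, the chain rule gives $\partial f/\partial y = x\,\partial f/\partial x$, and $x = e^y$ can never become negative no matter how large a step the algorithm takes:

```python
# Optimising in logarithmic coordinates: y = ln(x), so df/dy = x * df/dx and
# x = exp(y) stays positive for any finite step. Toy objective, minimum at x = 2.
import math
from scipy.optimize import minimize

def f(x):
    return (x - 2.0)**2

def df_dx(x):
    return 2.0 * (x - 2.0)

def f_log(y):
    return f(math.exp(y[0]))

def grad_log(y):
    x = math.exp(y[0])
    return [x * df_dx(x)]     # transformed gradient df/d(ln x)

res = minimize(f_log, x0=[0.0], jac=grad_log, method="BFGS")
print(math.exp(res.x[0]))     # converges to x = 2; x was positive throughout
```

No bound constraints were supplied, yet the recovered variable is guaranteed positive, which is the whole point of the bijection.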

The terms $\left(\partial N_i/\partial T\right)_p$ and $\left(\partial N_i/\partial p\right)_T$ must be calculated for a particular equilibrium solution.

Unfortunately, this is quite hard to determine! It turns out that Aspen requires that these be determined numerically (see their Toolkit manual, 2006). NASA CEA manages to work them out by taking derivatives of the Lagrange function (which I started below), but I couldn't figure out how to do this in general (without incorporating the specific thermodynamic model used). This is abandoned for now!
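The numerical route mentioned above is at least simple to sketch. Below, `solve_equilibrium(T, p)` is a hypothetical stand-in for a full equilibrium solve returning the molar amounts; a central finite difference then approximates $\left(\partial N_i/\partial T\right)_p$ (and $\left(\partial N_i/\partial p\right)_T$ follows identically by perturbing $p$ instead):

```python
def dN_dT(solve_equilibrium, T, p, h=1e-4):
    """Central finite-difference estimate of (dN_i/dT)_p at an
    equilibrium solution.  `solve_equilibrium` is a hypothetical
    solver returning the list of equilibrium molar amounts."""
    N_plus = solve_equilibrium(T + h, p)
    N_minus = solve_equilibrium(T - h, p)
    return [(a - b) / (2.0 * h) for a, b in zip(N_plus, N_minus)]

# Toy stand-in with a known analytic answer, N(T) = (T**2, 3*T),
# so dN/dT = (2*T, 3).  A real solver would re-minimise G at each T.
def toy_solver(T, p):
    return [T**2, 3.0 * T]
```

The cost is two extra equilibrium solves per derivative, which is presumably why Aspen accepts this approach; the step size `h` would need tuning against the solver's convergence tolerance.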

Consider an equilibrium system. A change of variables is performed to convert the state variables into the natural variables of the thermodynamic potential, $f$, which is being minimised. In this representation, the state vector is $\vec{X}=\left\{N_i\right\}$ and the constraint functions only consider elemental or species constraints. An equation similar to Eq.\eqref{eq:lagrangianResult} is then generated using the same arguments: the molar derivative of the thermodynamic potential, $f$, is always the chemical potential whenever the natural variables are held constant. Inserting Eq.\eqref{eq:ChemPotDefinition} into Eq.\eqref{eq:molarderivativespt1} and simplifying yields the following equation,

The most precise and concise derivation of thermodynamics I have found is the review of thermodynamics which is part of the Statistical Physics Using Mathematica course by James J. Kelly.

All relevant thermodynamic equations are concisely summarized on Wikipedia's thermodynamic equations.

The SimCem interface and computational code were written
by **Marcus Bannerman**. The database and its interface
have been developed with a number of students, who are
listed below.

**Theodore Hanein** researched the literature and
reviewed the available thermodynamic data for phases
relevant to cement as part of his PhD studies. He later
assisted in entering this information into SimCem, as
well as helping in the testing of the code and website.

**Lewis McRae** provided the Python cubic equation of
state implementation. He also added molecular structure
information, as well as common aliases, using the PubChem
REST interface during a Carnegie Vacation Scholarship.

This work could not exist without the free data provided
by the
NASA
CEA application database.

Without their
free collection of ideal gas contributions for
molecules, our research could not have begun.

Isotopic masses were obtained from the electronic file
at https://www-nds.iaea.org/amdc. These are also
published in Audi et al., "The AME2012 atomic mass
evaluation", *Chinese Physics C*, **36**, 1287-2014
(2012).

Isotopic abundances were scraped from the table provided by CIAAW on 29/07/2015.

The PubChem PUG REST interface was used to scrape additional information on molecules, such as their structure, common names, and CAS identifiers.

Finally, a huge number of open-source libraries are used in this project: jQuery, Flot.js, SlickGrid.js, Hover.css, MathJax, NLOpt, Boost.Python, Tornado, jsTree, and more I've probably forgotten.

We stand on the shoulders of giants.