## Welcome to SimCem!

SimCem is a computational thermodynamics package and database with aspirations of process simulation. It currently allows the calculation of thermodynamic properties from thermodynamic models (i.e., equations of state) and can calculate the equilibrium state of model thermodynamic systems. It mainly focuses on combustion and cement chemistry, but it is planned to evolve into a general chemical engineering toolkit.

Everything is available via the main menu (click the icon in the top left to open it). The console at the top provides information on the progress of calculations (and any errors).

### Current status (12th March 2018)

The combustion calculator works and most of our cement database has been made available. A cement kiln and tubular furnace simulator should be available soon.

If you are a collaborator, you can access our full data set by logging in.

# Theory

These are notes written while developing my own understanding of applied thermodynamics. They are brief and incomplete, but they are provided to help others understand SimCem, to check my workings, and to help others implement their own Gibbs free energy minimisation, as I struggled to find explanations of certain key aspects of the technique (such as eliminating redundant molar constraints, a generalised Euler solution, or even just what a complete thermodynamic model looks like; and no, it's not $P\,V=Z\,N\,R\,T$).

This work would not have been possible without some of the excellent work already in the literature. In particular, nothing was more useful to me than the excellent NASA CEA program, its database, and its highly educational report.

## Thermodynamic systems, variables, and state

*Figure: A closed balloon system (A) and an open system (B), both exhibiting the observables of pressure, $p$, volume, $V$, and temperature, $T$, which can be measured at (or over) their boundaries.*

First, let's define the problem of thermodynamics and its terminology. A thermodynamic system is the partitioning of some quantity of mass and energy from its surroundings through an enclosing boundary. The key idea is the division of what we are interested in (the system) from the uninteresting (the surroundings).

The boundary of the system may be physical (e.g., the walls of a vessel such as a balloon) or may be defined by some arbitrary division of space (e.g., a finite volume in a CFD simulation). If the boundary is physical, then it may or may not be included as part of the system. For example, water droplets in air have a surface tension which acts like the skin of a balloon and pulls the drop into a spherical shape. This surface has an associated energy and it is at our discretion whether to include the energy as part of the system or as part of the surroundings (or neglect it entirely as an approximation).

Both physical and unphysical boundaries may be fixed or may change shape over time. In addition, if mass can pass through the boundaries then the system is deemed open; if it cannot, it is deemed closed.

The mass and energy contained inside a thermodynamic system may take many forms, but only the observable properties at the boundary of the system, such as volume, mass, surface area, and pressure, are visible to us. Thus thermodynamics focuses on these variables and tries to find mathematical relationships linking them together. Any other internal effect of the mass and energy of the system cannot be seen unless it appears at the boundary, so thermodynamics does not immediately concern itself with them. E.g., a thermometer measures the temperature at its surface, not in the bulk of a fluid. If the system is small enough, then the observable properties should be approximately constant across the system. For larger systems, we can always split them into smaller and smaller sub-systems until the observable properties of each sub-system are approximately constant. From here on, we will always assume each system we consider is small enough (and thus homogeneous enough) to ignore any changes in observable properties across its volume.

The sum total of the energy inside a system (the internal energy) is not directly observable as most of it is inside, away from the boundary; however, the conservation of energy tells us it must exist. It should come as no surprise then that there are other internal variables, such as the entropy, which can only be discerned indirectly and yet are very interesting to thermodynamics (but more on this later).

The key external and internal variables are the so-called state variables. These are the variables that change whenever the system itself has changed. This is a circular definition, but it makes perfect sense as far as the external variables are concerned, as we only know a system has changed if its observable properties have changed. Counterexamples of non-state variables are the age of the system or the distance it has moved (assuming there's no external field like gravity). If a complete set of state variables is collected, such that any change of the system results in a change in one (or more) of the collected state variables, then we have what is called an ensemble. Which variables are collected into an ensemble depends on what we are observing and how we are observing it. For example, perhaps our external variables for a balloon are pressure and temperature. The balloon's volume has not been mentioned, but as we will see later, thermodynamics tells us the volume is directly related to the pressure and temperature, so there's no need to specify or observe it; alternatively, we can swap the volume for either the pressure or temperature in our ensemble (more later). Sometimes we need to add additional variables. For example, a battery uses the same variables as a balloon but must also include the electric potential across its terminals. The key to understanding when an ensemble is incomplete is that any process connected to the transfer of mass and/or energy in the system must have its associated state variables.

Now that the initial terminology has been outlined, a governing equation for the changes in the energy of a thermodynamic system is derived.

## The fundamental equation

The power of thermodynamics arises from its ability to find simple universal relationships between observable state variables. These relationships are a direct consequence of the laws of thermodynamics.

The first law of thermodynamics is an observation that energy is neither created nor destroyed but only transformed between different forms. Every thermodynamic system may contain internally some energy, $U_{sys.}$. The first law can then be stated as a conservation of this internal energy between a system and its surroundings:

$${\rm d}U_{sys.} = -{\rm d}U_{surr.}$$

where the ${\rm d}X$ indicates an infinitesimal change in $X$ and that this is an exact differential ($U_{sys.}$ cannot change without $U_{surr.}$ changing).

From further observation of real systems, two types of energy transfer are identified, heat transfer and work:

$$\label{eq:initialebalance}{\rm d}U_{sys.} = \partial Q_{surr.\to sys.} - \partial W_{sys.\to surr.}$$

where $\partial Q_{surr.\to {sys.}}$ is the heat transferred to the system due to temperature differences and $\partial W_{sys.\to surr.}$ represents all forms of work carried out by the system (the negative sign on the work term is a conventional choice). The work term represents many forms of energy transfer, so why is heat transfer singled out as a separate term? Nature appears to maximise heat transfer over work whenever possible, and this is discussed later when reversibility is introduced. Engines, for example, cycle back to their initial state and are thus able to perform arbitrary amounts of work without any net change of their own state (ignoring, of course, wear and tear of the engine).

You should note that a $\partial$ symbol is used for the work/heat-transfer terms to indicate inexact differential relationships. A thermodynamic system may transfer arbitrarily large amounts of heat, and perform arbitrarily large amounts of work, but only the remainder $(\partial Q_{\to {sys.}}-\partial W_{sys.\to surr.})$ will actually cause a change in the energy $U_{sys.}$. The internal energy is a state variable as it describes the state of the system; however, work and heat transfer are not.

Physical examples of this include engines, which are thermodynamic systems that can perform arbitrary amounts of work provided sufficient heat/energy is supplied but they return to their initial state at the end of every cycle. An inexact differential implies there is no unique relationship between the variables (we cannot integrate this equation). Interestingly, inexact differentials can often be transformed into simpler exact differentials through the use of constraints. For example, if the engine is seized and no work can be carried out ($\partial W_{sys.\to surr.}=0$), then only heat transfer can change the energy of the system and we now have an exact differential relationship, ${\rm d}U_{sys.}={\rm d}Q_{\to {sys.}}$. This constraint is far too restrictive in general and another constraint, known as reversibility, must be invoked to generate exact differential equations we can integrate.
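The path dependence implied by an inexact differential can be demonstrated numerically: integrating $p\,{\rm d}V$ between the same two end states along two different paths gives two different amounts of work. A minimal Python sketch, assuming one mole of ideal gas (an illustrative script, not part of SimCem):

```python
import numpy as np
from scipy.integrate import quad

R = 8.314  # J/(mol K), gas constant
n = 1.0    # mol of ideal gas
T = 300.0  # K, shared temperature of the start and end states
V1, V2 = 0.01, 0.02  # m^3, initial and final volumes

# Path A: reversible isothermal expansion, p(V) = n R T / V
W_A, _ = quad(lambda V: n * R * T / V, V1, V2)

# Path B: isobaric expansion at the initial pressure p1 = n R T / V1,
# followed by isochoric cooling back to T (no p-V work in the second leg)
p1 = n * R * T / V1
W_B = p1 * (V2 - V1)

# Same end states, different work: dW is an inexact differential
print(W_A)  # n R T ln(2) ~ 1729 J
print(W_B)  # n R T       ~ 2494 J
```

Because the work differs between paths while the end states are identical, no function $W(\vec{X})$ of the state variables alone can exist, which is exactly what the $\partial$ notation records.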

In the next two subsections, the concept of reversibility is introduced through consideration of cycles and is used to find exact differential descriptions of work and heat.

### Cycles, reversibility, and heat

A thermodynamic cycle is a process applied to a thermodynamic system which causes its state to change but eventually return to its initial state (and so all state variables also return to their initial values). For example, the combustion chamber inside an engine will compress and expand during its operation but it returns to its starting volume after each cycle. This leads to the following identity, where the sum/integral of the changes over a cycle is zero, i.e., $\oint_{\rm cycle} {\rm d} V=0$, and similar identities must also apply for every state variable.

In 1855, Clausius observed that the integral of the heat transfer divided by the temperature is always negative when measured over a cycle:

$$\oint_{\rm cycle}\frac{\partial Q_{\to sys.}}{T}\le 0$$

This is known as the Clausius inequality. It was found that this inequality approaches zero in the limit that the cycle is performed slowly. This limiting result indicates that the kernel of the integral actually contains a state variable, i.e.,

$$\label{eq:entropydefinition}{\rm d}S_{sys.} = \frac{\partial Q_{\to sys.,\,rev.}}{T}$$

where $S_{sys.}$ is the state variable known as the entropy of the system. Interestingly, the entropy (like the internal energy) is not directly observable and its existence is only revealed by this inequality.
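The limiting (reversible) case of the Clausius inequality can be checked on a concrete cycle. The sketch below (an illustrative calculation, not SimCem code) evaluates the integral for a reversible Carnot cycle of a monatomic ideal gas, where heat only crosses the boundary on the two isothermal legs:

```python
import numpy as np

R = 8.314          # J/(mol K), gas constant
n = 1.0            # mol of monatomic ideal gas
gamma = 5.0 / 3.0  # heat capacity ratio
T_hot, T_cold = 500.0, 300.0  # K, the two reservoir temperatures
V1, V2 = 0.01, 0.02           # m^3, volumes bounding the hot isothermal leg

# The adiabatic legs fix the cold-side volumes via T V^(gamma-1) = const.
V3 = V2 * (T_hot / T_cold) ** (1.0 / (gamma - 1.0))
V4 = V1 * (T_hot / T_cold) ** (1.0 / (gamma - 1.0))

# Heat absorbed on each isothermal leg (ideal gas: Q = W on an isotherm)
Q_hot = n * R * T_hot * np.log(V2 / V1)    # positive, absorbed at T_hot
Q_cold = n * R * T_cold * np.log(V4 / V3)  # negative, rejected at T_cold

# Clausius integral over the full reversible cycle
clausius = Q_hot / T_hot + Q_cold / T_cold
print(clausius)  # 0 (to floating-point accuracy)
```

Any irreversibility (e.g., heat leaking across a finite temperature difference) would push this sum below zero, which is the general form of the inequality.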

As the inequality is generally negative over a cycle, it indicates that entropy always increases and must be removed from a system to allow it to return to its initial state (except in the limit of slow changes). This has led to the terminology of the irreversible cycle, $\oint_{\rm cycle}{\rm d}S>0$, and the idealised reversible cycle, $\oint_{\rm cycle}{\rm d}S=0$, which can be returned to its starting state without removing entropy.

Further careful reasoning, which is omitted here, results in the statement of the second law of thermodynamics: the total entropy of an isolated system can only increase over time. Assuming the universe is an isolated system (at least over the timescales we're interested in), our thermodynamic system and its surroundings must together always have a positive (or zero) entropy change.

One last thing to note: thermodynamic processes which are not cycles may also be reversible or irreversible. For a general process to be reversible, the total entropy change of the system and its surroundings together must remain zero. This allows the entropy to increase or decrease in the system, but only if the surroundings have a compensating opposite change. Irreversibility is further explored later, but for now our understanding is sufficient to introduce the various forms of work.

### Work

*Figure: Both popping a balloon (left) and releasing it untied (right) result in the same final state (air outside the balloon); however, releasing the air slowly through the neck allows work to be extracted, causing the balloon to fly around.*

The work term $\partial W_{sys.\to surr.}$ represents all methods of transferring energy other than as heat. Reversible paths reduce total entropy changes to zero, which minimises the heat transferred and actually maximises the amount of work performed by the system for a given process. It also turns work into an exact differential!

As an illustrative example, consider the emptying of a balloon by popping it versus untying the neck and letting it go. In the first case, no work is done as the air is immediately released into the surroundings: this is the quickest path to deflating the balloon and thus it maximises entropy. Untying the neck instead, the air jet leaving the balloon performs work by propelling the balloon around the room (thus yielding kinetic energy). This slower release of air has allowed work to be extracted.

All work can be expressed as a generalised driving force, $\vec{F}_{sys.}$, which is displaced by a change in the corresponding generalised distance, $\vec{L}_{sys.}$. For the balloon, the force is the pressure difference in the neck (and the air resistance, which should be equal and opposite when the system is reversible) and the distance is the travel of the balloon. The reversible limit corresponds to infinitesimally slow/small changes of the distance (i.e., ${\rm d}\vec{L}$), allowing all opposing forces time to remain in balance, resulting in the following general expression for the work:

$$\partial W_{sys.\to surr.} = \vec{F}_{sys.}\cdot{\rm d}\vec{L}_{sys.}$$

For the balloon, there are three forms of work taking place. First, as the volume of the balloon decreases, work must be performed to compress the gas against the pressure of the air within. The reversible pressure-volume work is then as follows:

$$\label{eq:pressurevolumework}\partial W_{pV} = p_{sys.}\,{\rm d}V_{sys.}$$

where the pressure $p$ is the generalised force and the volume $V$ is the generalised displacement. In addition, the balloon itself is shrinking, releasing the tension within its elastic surface. This is known as surface work:

$$\partial W_{surf.} = -\gamma_{sys.}\,{\rm d}\Sigma_{sys.}$$

where $\gamma_{sys.}$ is the surface tension and $\Sigma_{sys.}$ is the surface area (the negative sign ensures a shrinking surface, ${\rm d}\Sigma<0$, performs work on the surroundings).
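As a rough numerical illustration of these two work terms, the sketch below estimates them for a shrinking spherical balloon. The values are made up for illustration: the elastic skin is modelled as a constant "surface tension" of $\gamma=25\,{\rm N/m}$, and the gas is assumed to stay near ambient pressure (neither assumption comes from the text above):

```python
import numpy as np

gamma = 25.0         # N/m, illustrative elastic "surface tension" of the skin
p = 101325.0         # Pa, assume the gas stays near ambient pressure
r1, r2 = 0.10, 0.05  # m, initial and final balloon radii

def area(r):   return 4.0 * np.pi * r**2        # sphere surface area, Sigma
def volume(r): return 4.0 / 3.0 * np.pi * r**3  # sphere volume, V

# Pressure-volume work done by the gas: W_pV = p * dV (negative on shrinking)
W_pV = p * (volume(r2) - volume(r1))

# Surface work done by the relaxing skin: W_surf = -gamma * dSigma
W_surf = -gamma * (area(r2) - area(r1))

print(W_pV)   # negative: the surroundings compress the gas
print(W_surf) # positive: the shrinking skin performs work on the surroundings
```

The signs follow the $\partial W_{sys.\to surr.}$ convention used above: shrinking the gas volume is work done *on* the system (negative), while the relaxing skin does work *by* the system (positive).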

As air leaves the balloon through the neck it carries energy away with it. This is known as chemical work:

$$\label{eq:materialwork}\partial W_{chem.} = -\sum_i^{N_C}\mu_{i,\,sys.}\,{\rm d}N_{i,\,sys.}$$

where the chemical potential, $\mu_{i,{sys.}}$, is the energy added to the system if one mole of the component $i$ (from one of the $N_C$ components of the system) is added to or removed from the system by any process (e.g., flow through the boundaries or internal reactions). The definition of a component, $i$, in a thermodynamic system is flexible and may be used to represent a single type of atom, molecule, or elementary particle (i.e., electrons), or even a mixture of molecules (such as "air").

The term ${\rm d} N_{i,{sys.}}$ represents changes in the amount of a species $i$. This change may be due to mass flowing in or out of a system, but it may also result from reactions within a system. However, for a closed system (a system which cannot exchange mass with any other system), chemical work is impossible and thus the conservation of energy requires that the following holds true (even if ${\rm d} N_{i,{sys.}}\neq 0$ due to internal processes such as reactions):

$$\sum_i^{N_C}\mu_{i,\,sys.}\,{\rm d}N_{i,\,sys.} = 0$$

Closed systems are typical during process/unit-operation calculations; however, as these closed systems are often composed of multiple open sub-systems (i.e., multiple interacting phases within a closed vessel), the chemical work term is always useful to retain.

### Summary of the fundamental equation

In summary, under the constraint of a reversible system, the expression for entropy (Eq. \eqref{eq:entropydefinition}) and any relevant work terms (Eq. \eqref{eq:pressurevolumework}-\eqref{eq:materialwork}) can be substituted into the energy balance of Eq. \eqref{eq:initialebalance} to yield the fundamental thermodynamic equation:

$$\label{eq:fundamentalThermoRelation}{\rm d}U = T\,{\rm d}S - p\,{\rm d}V + \sum_i^{N_C}\mu_i\,{\rm d}N_i$$

where the subscripts have been dropped from every term for convenience. Other work terms, such as the surface or electrical work, can be added to this equation depending on the system studied; however, the pressure-volume and chemical work terms are the most important from a process engineering perspective.

### Solution of the fundamental equation

As we have an exact differential in Eq. \eqref{eq:fundamentalThermoRelation}, if the internal energy is taken as a function $U(S,\,V,\,\left\{N_i\right\})$, then the total derivative of the internal energy in these variables is as follows:

$${\rm d}U = \left(\frac{\partial U}{\partial S}\right)_{V,\,\{N_i\}}{\rm d}S + \left(\frac{\partial U}{\partial V}\right)_{S,\,\{N_i\}}{\rm d}V + \sum_i^{N_C}\left(\frac{\partial U}{\partial N_i}\right)_{S,\,V,\,\{N_{j\neq i}\}}{\rm d}N_i$$

where, for clarity, the variables which are held constant while a partial derivative is taken are written as subscripts on the parenthesis surrounding the derivative (this is needed for clarity as in thermodynamics we often change the set of independent and dependent variables).

Comparing the total derivative above to the fundamental thermodynamic relation of Eq. \eqref{eq:fundamentalThermoRelation} yields the following definitions of the partial derivatives:

$$T = \left(\frac{\partial U}{\partial S}\right)_{V,\,\{N_i\}}\qquad p = -\left(\frac{\partial U}{\partial V}\right)_{S,\,\{N_i\}}\qquad \mu_i = \left(\frac{\partial U}{\partial N_i}\right)_{S,\,V,\,\{N_{j\neq i}\}}$$

This is the first indication that thermodynamics is a powerful tool, as it has already found differential relationships between the internal energy and the intensive properties. Also, as the variables of $U(S,\,V,\,\left\{N_i\right\})$ are all extensive, Euler's solution for homogeneous functions applies.

This allows the equation to be "solved" immediately as it is a first-order homogeneous function of the extensive properties:

$$U = T\,S - p\,V + \sum_i^{N_C}\mu_i\,N_i$$

This remarkably simple solution for the internal energy is the first thermodynamic potential we encounter.
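Euler's theorem for first-order homogeneous functions can be checked numerically on a toy internal-energy function. The sketch below uses an arbitrary $U(S,V,N)\propto S^{1/2}V^{1/4}N^{1/4}$, chosen only because its exponents sum to one (first-order homogeneity), not for physical realism, and verifies that summing each extensive variable times its conjugate derivative recovers $U$:

```python
# Toy first-order homogeneous function: U(a S, a V, a N) = a U(S, V, N)
def U(S, V, N):
    return 2.0 * S**0.5 * V**0.25 * N**0.25

def partial(f, args, i, h=1e-6):
    """Central finite-difference partial derivative in argument i."""
    up = list(args); up[i] += h
    dn = list(args); dn[i] -= h
    return (f(*up) - f(*dn)) / (2.0 * h)

S, V, N = 3.0, 2.0, 1.5
T    = partial(U, (S, V, N), 0)  # T  = (dU/dS)_{V,N}
negp = partial(U, (S, V, N), 1)  # -p = (dU/dV)_{S,N}
mu   = partial(U, (S, V, N), 2)  # mu = (dU/dN)_{S,V}

# Euler's solution: U = T S - p V + mu N
euler = T * S + negp * V + mu * N
print(euler, U(S, V, N))  # the two values agree
```

The same check applied to a function whose exponents do not sum to one fails, which is why the extensivity of $S$, $V$, and $\left\{N_i\right\}$ is essential to the result.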

### Natural variables

When performing calculations in thermodynamics, we are free to specify our system state using any of the state variables introduced so far $(U,\,T,\,S,\,p,\,V,\,\left\{\mu_i\right\}^{N_C},\,\left\{N_i\right\}^{N_C})$, but how many are required and which ones are independent? Each term of the fundamental thermodynamic equation consists of a so-called conjugate pairing of an intensive property such as $T$, $p$, or $\mu_i$ and a corresponding conjugate extensive property $S$, $V$, or $\left\{N_i\right\}$ respectively. Provided all the relevant work terms have been included, it has been observed that a thermodynamic state is fully specified if at least one variable is specified for each of the conjugate pairs considered.

The natural variables for a particular function are whichever choices result in an exact differential relationship for that function. For example, the internal energy has the natural variables $U(S,\,V,\,\left\{N_i\right\}, \ldots)$. This is apparent from Eq. \eqref{eq:fundamentalThermoRelation}, where these variables all appear as exact differentials related to ${\rm d}U$. Unfortunately, these variables are not particularly convenient (and the internal energy is not particularly interesting) as we cannot directly measure the entropy or internal energy in experiments. There are other thermodynamic potentials which have more convenient natural variables, and these can be derived by considering the consequences of the second law of thermodynamics. They are all Legendre transforms of the internal energy, thus their natural variables will always correspond to one variable from each conjugate pair.

## Free-energies and thermodynamic potentials

The second law of thermodynamics has already been introduced via the Clausius inequality and is formally written as follows:

$${\rm d}S_{total} \ge 0$$

i.e., the total entropy of the universe (our system and its surroundings) must always increase or remain constant. This statement implies that the only "stationary" thermodynamic state is where the entropy has reached its maximum, henceforth known as the equilibrium state. The equilibrium state is of particular interest as all thermodynamic systems approach it and, if left undisturbed, remain there indefinitely.

It is often the basis of process calculations that a particular thermodynamic system has reached equilibrium, thus determining the equilibrium state (via a maximization of the total entropy) is our primary goal. Starting from some initial non-equilibrium state, some unconstrained internal parameters (e.g., composition, reaction progression) are varied such that the total entropy is maximized.

Although the universe's entropy must be maximised at equilibrium, our interest is in a smaller thermodynamic system contained within it. The total entropy is the sum of the entropy of this system and the entropy of the rest of the universe, i.e.,

$$S_{total} = S_{sys.} + S_{surr.}$$

It is clear that both $S_{sys.}$ and $S_{surr.}$ may increase or decrease, provided the overall change results in an increase of $S_{total}$.

It is henceforth assumed that the surroundings are at equilibrium, that they remain at equilibrium, and that any interaction with the surroundings is reversible. The author considers these the largest assumptions they have ever made, both physically and in terms of approximation; however, it is equivalent to a "worst case" estimate for the generation of entropy. In this case, there can be no "external" process driving changes within the thermodynamic system; anything that the system does must happen "spontaneously". The only possible mechanism by which the surroundings' entropy may change is then via heat transfer from the system (and the heat flux becomes an exact differential):

$${\rm d}S_{total} = {\rm d}S_{sys.} - \frac{\partial Q_{\to sys.}}{T} \ge 0$$

This makes it clear that the entropy change of the system must be balanced against the entropy it is generating in the surroundings through heat transfer (the surroundings are also so large that the other effects of the heat transfer are negligible). Inserting the fundamental thermodynamic equation (Eq. \eqref{eq:fundamentalThermoRelation}),

$${\rm d}S_{total} = {\rm d}S_{sys.} - \frac{1}{T}\left({\rm d}U + p\,{\rm d}V - \sum_i^{N_C}\mu_i\,{\rm d}N_i\right)_{sys.}$$

To simplify the remainder of this section, the thermodynamic system is now assumed to be closed, which allows the elimination of the chemical potential term:

$$\label{eq:totalentropy}{\rm d}S_{total} = -\frac{1}{T}\left({\rm d}U + p\,{\rm d}V - T\,{\rm d}S\right)_{C}$$

The subscript $C$ on the parenthesis indicates that the system is closed. This equation makes it clear that, in closed systems interacting reversibly with surroundings at local equilibrium, the overall equilibrium is not solely linked to the entropy of the system itself but is the minimisation of the RHS of Eq. \eqref{eq:totalentropy} (due to the negative sign on the entropy change). The RHS often corresponds to some thermodynamic potential; these potentials arise under different constraints and are now derived in the sections below.

### Summary

In summary, there are a number of relevant thermodynamic potentials for a closed system. These are defined below:

$$H = U + p\,V \qquad A = U - T\,S \qquad G = U + p\,V - T\,S = H - T\,S$$

For each set of constrained thermodynamic states in closed systems, a particular thermodynamic potential is minimised at equilibrium. These are summarised in the table below:

| Constants | Function to minimise, $f$ |
| --- | --- |
| $p,\,S$ | $H$ |
| $p,\,T$ | $G$ |
| $p,\,H$ | $-S$ |
| $V,\,S$ | $U$ |
| $V,\,T$ | $A$ |
| $V,\,U$ | $-S$ |

The variables held constant correspond to the "natural" variables of each potential. Expressing the change in each thermodynamic potential in terms of these natural variables yields the following differential equations:

$$\begin{aligned}{\rm d}H &= T\,{\rm d}S + V\,{\rm d}p + \sum_i^{N_C}\mu_i\,{\rm d}N_i\\ {\rm d}A &= -S\,{\rm d}T - p\,{\rm d}V + \sum_i^{N_C}\mu_i\,{\rm d}N_i\\ {\rm d}G &= -S\,{\rm d}T + V\,{\rm d}p + \sum_i^{N_C}\mu_i\,{\rm d}N_i\end{aligned}$$

The significance of the chemical potential cannot be overstated. It is the change of each thermodynamic potential per mole of each species exchanged when the other natural variables of the potential are held constant:

$$\mu_i = \left(\frac{\partial U}{\partial N_i}\right)_{S,\,V,\,\{N_{j\neq i}\}} = \left(\frac{\partial H}{\partial N_i}\right)_{S,\,p,\,\{N_{j\neq i}\}} = \left(\frac{\partial A}{\partial N_i}\right)_{T,\,V,\,\{N_{j\neq i}\}} = \left(\frac{\partial G}{\partial N_i}\right)_{T,\,p,\,\{N_{j\neq i}\}}$$

The implication of this is that, when dealing with systems exchanging mass but constrained by two "natural" variables, the chemical potential for each species must be equal in all phases, regardless of which constrained variables are actually used (otherwise a change of mass between systems could change the value of the overall thermodynamic potential, implying it is not at a minimum). The chemical potential is also the partial molar Gibbs free energy ($G=\sum_i N_i\,\mu_i$) and thus calculation of the Gibbs free energy can be reduced to calculation of the chemical potentials.

## Minimisation

Now that equilibrium has been defined, how do we calculate the equilibrium state? To determine the equilibrium state, $\vec{X}_{equil.}$, of a closed system containing many sub-systems, a thermodynamic potential, $f$, is minimised:

$$\vec{X}_{equil.} = \underset{\vec{X}}{\arg\min}\,f\left(\vec{X}\right)$$

where $\vec{X}$ represents all the variables used by SimCem to describe the state of all the $N_p$ sub-systems within the closed system. A particular sub-system/"model", $\alpha\in[1,N_p]$, will typically have $N_{C,\,\alpha}$ molar amounts, i.e., $\left\{N_{i,\alpha}\right\}^{N_{C,\,\alpha}}$, the temperature $T_\alpha$, and either the pressure $p_\alpha$ or the volume $V_\alpha$ depending on the model used. To keep the minimisation in a physical region, constraints are added to make sure all these variables remain positive, i.e., $\vec{X}\ge\vec{0}$. Finally, there are a number of equality constraints, which we write in general as follows:

$$g_k\left(\vec{X}\right) = 0$$

where $k$ is the index of an equality constraint which holds some function of the state variables, $g_k\left(\vec{X}\right)$, to a value of zero. One example of these are the material constraints arising from mass/mole balances (e.g., conservation of elements), whereas the other constraints arise from constraints on thermodynamic variables (e.g., constant enthalpy and pressure). Before these are discussed, we review how constrained minimisation is carried out.

### Lagrange multipliers

To actually solve constrained minimisation problems, they are often transformed into unconstrained searches for stationary points using the method of Lagrange multipliers. A new function called the Lagrangian, $F$, is constructed like so:

$$F\left(\vec{X},\,\vec{\lambda}\right) = f\left(\vec{X}\right) + \sum_k \lambda_k\,g_k\left(\vec{X}\right)$$

where $\lambda_k$ is the Lagrange multiplier for the $k$th equality constraint. The positivity constraints can simply be enforced using bounds checking (other more advanced techniques are available but are irrelevant to this discussion). The Lagrangian has the unique property that the constrained minima of $f$ now occur at the (unconstrained) extrema of $F$. For example, it is easy to see that the derivatives of $F$ with respect to each Lagrange multiplier are zero if the constraints are satisfied:

$$\frac{\partial F}{\partial \lambda_k} = g_k\left(\vec{X}\right) = 0$$

Thus we are searching for a point where $\partial F/\partial \lambda_k=0$ to ensure the constraints are satisfied. Taking a derivative of the Lagrangian with respect to the state variables, $\vec{X}$, yields the following in vector notation:

$$\nabla_{\vec{X}}\,F = \nabla_{\vec{X}}\,f + \sum_k \lambda_k\,\nabla_{\vec{X}}\,g_k$$

Let's consider when $\nabla_\vec{X}\,F=\vec{0}$; at this point the following must be true:

$$\nabla_{\vec{X}}\,f = -\sum_k \lambda_k\,\nabla_{\vec{X}}\,g_k$$

This makes it clear that at the point where $\nabla_{\vec{X}}\,F=\vec{0}$, the downhill direction of $f$ can be decomposed into directions where the constraint functions also change ($\nabla_\vec{X}\,g_k$ are basis vectors of $\nabla_\vec{X}\,f$, and $-\lambda_k$ are the coordinate values in this basis). Thus, attempting to lower $f$ any further will cause the constraint values to move away from zero if they are already satisfied at this point (as guaranteed by $\frac{\partial F}{\partial \lambda_k}=0$).

As a result, the new strategy to find equilibrium is to find a stationary point of the Lagrangian:

$$\nabla_{\vec{X},\,\vec{\lambda}}\,F = \vec{0}$$

This is a stationary point and not a maximum or minimum, as no statements on the second derivatives of the Lagrangian have been made. In fact, most stationary points of the Lagrangian turn out to be saddle points, therefore direct minimisation of the Lagrangian is not suitable; instead, a root search for the first derivative of the Lagrangian may be attempted. Even this approach may converge to a stationary point of $F$ which is not a minimum but a maximisation of $f$. Implementation of a suitable routine is therefore an art, but fortunately general algorithms are available which perform this analysis, such as NLopt, Opt++, and SLSQP.
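The root-search strategy can be illustrated on a toy problem: minimise $f(x,y)=x^2+y^2$ subject to $g(x,y)=x+y-1=0$, by solving $\nabla F=\vec{0}$ for the stationary point of the Lagrangian. This is a hypothetical example using SciPy's generic root finder (not one of the libraries named above, and not SimCem's solver):

```python
from scipy.optimize import fsolve

def grad_lagrangian(z):
    x, y, lam = z
    # F = x^2 + y^2 + lam * (x + y - 1)
    return [2.0 * x + lam,  # dF/dx
            2.0 * y + lam,  # dF/dy
            x + y - 1.0]    # dF/dlam = the constraint itself

x, y, lam = fsolve(grad_lagrangian, x0=[1.0, 0.0, 0.0])
print(x, y, lam)  # x = y = 0.5, lam = -1
```

Note that the solution is a minimum of $f$ on the constraint surface but a saddle point of $F$ itself, which is why a root search on $\nabla F$ is used rather than a minimisation of $F$.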

The purpose of introducing the Lagrangian is to demonstrate that derivatives of $f$ and $g_k$ are needed, but also to introduce the Lagrange multipliers $\lambda_k$, which will later be shown to correspond to physical properties of the system. We will summarise the values that must be calculated before discussing how to calculate these values.

### Required properties for minimisation

To determine the extrema of the Lagrangian, the minimisation algorithms require the derivatives of the Lagrangian. As illustrated above, the derivatives with respect to the Lagrange multipliers are given by the constraint functions themselves, thus no additional calculation is required there. Ignoring any contribution from the interfaces between phases, the thermodynamic potential of the overall system can be broken down into the contribution from each phase:

$$f\left(\vec{X}\right) = \sum_\alpha^{N_p} f_\alpha\left(\vec{X}_\alpha\right)$$

where $f_\alpha$ is the contribution arising from a single phase, $\alpha$. This allows us to rewrite the derivative of the Lagrangian with respect to the state variables as follows:

$$\nabla_{\vec{X}}\,F = \sum_\alpha^{N_p}\nabla_{\vec{X}}\,f_\alpha + \sum_k \lambda_k\,\nabla_{\vec{X}}\,g_k$$

It should be noted that $\partial f_\alpha/\partial\vec{X}_\beta = \vec{0}$ for $\beta\neq\alpha$; thus each individual phase's derivatives can be considered separately, as only the $\partial f_\alpha/\partial \vec{X}_\alpha$ terms are nonzero. Later, when models are considered, these derivatives will be generated. Now we must consider the general constraints.

## Minimisation Constraints

The Gibbs phase rule states that the number of independent intensive variables (AKA degrees of freedom), $F$, required to completely specify the equilibrium state of a thermodynamic system is:

$$F = C - N_P + 2$$

where $N_P$ is the number of phases and $C$ is the number of independent components in the system. It should be noted that in general $C\neq N_C$, as components may be linked by the constraints of elemental or molecular balances.
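As a quick worked example of the rule, boiling water (one component, two phases) leaves only one free intensive variable, which is why fixing the pressure fixes the boiling temperature. A trivial sketch:

```python
def degrees_of_freedom(C, N_P):
    """Gibbs phase rule: F = C - N_P + 2."""
    return C - N_P + 2

print(degrees_of_freedom(C=1, N_P=2))  # boiling water: 1 (fix p and T follows)
print(degrees_of_freedom(C=2, N_P=1))  # binary gas mixture: 3 (p, T, composition)
```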

In SimCem, $\sum_\alpha^{N_P}\left(N_{C,\alpha}+2\right)$ state variables are always used to describe a system (there are $N_{C,\alpha}$ molar amounts $\left\{N_{i,\alpha}\right\}^{N_{C,\alpha}}$, the subsystem temperature $T_\alpha$, and either the subsystem pressure $p_\alpha$ or volume $V_\alpha$ for each subsystem). In general, $\sum_\alpha^{N_P}\left(N_{C,\alpha}+2\right)\ge C+2-N_P$, thus the state of a multi-phase and/or reactive system, $\vec{X}$, is typically over-specified and constraints must be added to the minimisation to eliminate the additional degrees of freedom.

### Constraints on $S,\,H,\,U,\text{ or }T$ and $p\text{ or } V$

Two systems in equilibrium must have equal temperature, pressure, and chemical potentials. These equalities arise naturally from the minimisation; however, it is efficient to remove variables from the minimisation wherever we can (doing so also helps with numerical accuracy/stability).

As the temperatures of all phases are equal at equilibrium and all models used in SimCem have a temperature variable, $\left\{T_\alpha\right\}^{N_{p}}$, these individual values are eliminated from $\vec{X}$ and set equal to a single system temperature, $T$, which is inserted into $\vec{X}$.

If a constant temperature is being considered, then the system temperature, $T$, is simply set to the constrained value and eliminated from $\vec{X}$ entirely. If temperature is free, then a constraint on $S$, $H$, or $U$ is added, i.e.,

$$g\left(\vec{X}\right) = H\left(\vec{X}\right) - H_{target} = 0$$

(and similarly for a target entropy or internal energy).

Not all models used in SimCem have a pressure variable, thus it is a little more challenging to reduce pressure to a single value in $\vec{X}$, so this is not done (yet); however, if constant pressure is required, then only the pressure of the first phase is constrained and the other phases then equilibrate to this pressure via the minimisation:

$$g\left(\vec{X}\right) = p_1 - p_{target} = 0$$

If the volume is held constant, then a different constraint function is used:

$$g\left(\vec{X}\right) = \sum_\alpha^{N_p} V_\alpha - V_{target} = 0$$

Any phase volumes appearing as independent variables must remain in $\vec{X}$, as it is the overall volume of the system which is constrained to $V_{target}$; the individual phases themselves have unconstrained volumes.

### Constraints on elements/species

SimCem has a reactive and a non-reactive mode, which select between the conservation of elements or of species respectively. For example, consider the water/steam equilibrium system:

$${\rm H_2O_{(liquid)}} \rightleftharpoons {\rm H_2O_{(gas)}}$$

The variables for this system in SimCem are typically as follows:

$$\vec{X} = \left[T,\,N_{H_2O,\,(liquid)},\,N_{H_2O,\,(gas)},\,\ldots\right]$$

H2O may be present in both the steam and water phases, but the total amount is constrained to the initial amount, $N_{H_2O}^0$. Thus we'd like to add a non-reactive species constraint:

$$g\left(\vec{X}\right) = N_{H_2O,\,(liquid)} + N_{H_2O,\,(gas)} - N_{H_2O}^0 = 0$$

In reactive systems, the types of molecules are no longer conserved but the elements are. For example, consider the combustion of graphite:

$${\rm C_{(graphite)}} + {\rm O_2} \rightarrow {\rm CO_2}$$

The state variables are:

$$\vec{X} = \left[T,\,N_{C,\,(graphite)},\,N_{O_2},\,N_{CO_2},\,\ldots\right]$$

Selecting a reactive system, SimCem will generate elemental constraints:

$$\begin{aligned}g_{\rm C}\left(\vec{X}\right) &= N_{C,\,(graphite)} + N_{CO_2} - N^0_{\rm C} = 0\\ g_{\rm O}\left(\vec{X}\right) &= 2\,N_{O_2} + 2\,N_{CO_2} - N^0_{\rm O} = 0\end{aligned}$$

Elemental constraints allow any and all possible reactions/rearrangements of elements to minimise the free energy. For example, the reverse reaction, ${\rm CO_2}\rightarrow{\rm C_{(graphite)}} + {\rm O_2}$, is also allowed by these constraints. Sometimes this is not desired, and only specific reaction paths are fast enough to be considered (e.g., in catalysed reactions). In this case, custom constraints will need to be implemented. At the moment, SimCem will only automatically generate elemental and molecular constraints.
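Elemental constraints can be generated mechanically from species formulas. The sketch below (illustrative Python, not SimCem's actual API) builds an element-by-species constraint matrix for the graphite combustion system and checks that complete combustion conserves the elemental totals:

```python
import numpy as np

# Each species maps to its elemental composition (graphite combustion example)
species = ["C(gr)", "O2", "CO2"]
composition = {
    "C(gr)": {"C": 1},
    "O2":    {"O": 2},
    "CO2":   {"C": 1, "O": 2},
}
elements = ["C", "O"]

# Constraint matrix: one row per element, one column per species
C = np.array([[composition[s].get(e, 0) for s in species] for e in elements],
             dtype=float)

N0 = np.array([1.0, 1.0, 0.0])  # initial moles: 1 C(gr) + 1 O2, no CO2
b = C @ N0                       # conserved elemental totals (1 mol C, 2 mol O)

# Any composition N satisfying C @ N == b conserves the elements,
# e.g. complete combustion to CO2:
N_final = np.array([0.0, 0.0, 1.0])
print(np.allclose(C @ N_final, b))  # True
```

Each row of `C` is one elemental balance constraint $g_k$, so the minimiser is free to choose any point on the intersection of these hyperplanes, which is exactly the "all rearrangements allowed" behaviour described above.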

### Eliminating redundant constraints

Consider the water-steam system again, and imagine that a user selects it as a reactive system. SimCem will attempt to use both a hydrogen and an oxygen balance constraint: These two constraints are identical aside from a multiplicative factor. This leads to ambiguity in the values of the Lagrange multipliers, as either constraint can apply, and this indeterminacy can cause issues with solvers, so redundant constraints need to be eliminated.

Both elemental and species constraints are linear functions of the molar amounts, $N_i$, thus they can be expressed in matrix form, where $\vec{C}$ is a matrix in which each row holds the amount of a particular element or molecule contributed by each molar amount, and $\vec{N}^0$ is the vector of initial molar amounts.

To determine which rows of the matrix $\vec{C}$ are redundant, we perform a singular-value decomposition, $\vec{C}=\vec{U}\,\vec{S}\,\vec{V}^T$. The benefit of this is that the rows of $\vec{V}^T$ corresponding to the non-zero singular values form a set of orthogonal constraints equivalent to the original constraint matrix $\vec{C}$, so we use the so-called "thin" $\vec{V}^{T}$ (containing only the rows with non-zero singular values) as our new constraint matrix. As we would like to extract the original Lagrange multipliers for later calculations, we need to be able to map back and forth between the original and reduced multipliers. This mapping can be found by considering the original constraint and its Lagrange multipliers, where the final line implicitly defines the reduced set of Lagrange multipliers $\vec{\lambda}_r$. Performing the minimisation using $\vec{V}^T$, we can then recover the original Lagrange multipliers like so. As a side note, the matrices $\vec{U}$ and $\vec{V}$ are orthonormal/rotation matrices, thus the transpose is their inverse. Also, as the diagonal matrix $\vec{S}$ is singular, $\vec{S}^{-1}$ is the generalised inverse (i.e., only the non-zero diagonal values are inverted).
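A minimal numeric sketch of this elimination, using the redundant hydrogen/oxygen balances of the water/steam system from above (the tolerance choice is an assumption; any small multiple of the largest singular value works):

```python
import numpy as np

# The H and O balances over (N_H2O,liquid, N_H2O,steam) differ only by a
# factor of two, so one constraint is redundant.
C = np.array([[2.0, 2.0],   # hydrogen balance
              [1.0, 1.0]])  # oxygen balance

U, S, Vt = np.linalg.svd(C)

# Rows of V^T with non-negligible singular values form an orthogonal,
# non-redundant replacement for the original constraint matrix.
tol = 1e-10 * S.max()
Vt_thin = Vt[S > tol]
print(Vt_thin.shape)  # (1, 2): a single constraint survives
```

The surviving row spans the same constraint space as the original two rows, so the feasible set of molar amounts is unchanged while the multiplier indeterminacy is removed.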

With the constraints outlined, the actual definition of thermodynamic models and the calculations may begin.

### Derivatives required for constraints

For the constraint functions, the only non-zero derivative of the element/molecular constraint function given in Eq.\eqref{eq:genMolConstraint} is as follows,

## Thermodynamic models and consistency

The thermodynamic potentials, when expressed as a function of their natural variables, provide a complete description of the state of a thermodynamic system. For example, consider the Gibbs free energy expressed in terms of its natural variables $G\left(T,\,p,\,\left\{N_i\right\}\right)$. If we know the values of the natural variables, we can calculate $G$. The derivatives of $G$ (see Eq.\eqref{eq:dG}) also allow us to calculate the following properties, If we evaluate the derivatives using the values of the natural variables then the values of all other thermodynamic potentials can be determined using Eqs.\eqref{eq:Urule}-\eqref{eq:Grule}. For example, we can take further derivatives to evaluate properties such as the heat capacity $C_p = \left(\partial H/\partial T\right)_{p,\left\{N_i\right\}}$. In this way, an entire thermodynamic model can be created by only specifying the functional form of $G(T,\,p,\,\left\{N_i\right\})$. This is known as a generating function approach, and it can be applied to any thermodynamic potential in its natural variables. Most commonly either the Gibbs or Helmholtz $A(T,\,V,\,\left\{N_i\right\})$ free energy is used as the generating function due to the convenience of their natural variables.
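The generating-function idea can be illustrated numerically with a deliberately simple ideal-gas Gibbs function (here $\mu_0(T)=h_0-T\,s_0$ with made-up constants $h_0$, $s_0$, $p_0$, not SimCem data), recovering $S$, $V$, and $H$ purely from derivatives of $G$:

```python
import math

# Toy generating function: G(T, p, N) = N*(h0 - T*s0 + R*T*ln(p/p0)).
R, p0, h0, s0 = 8.314, 1e5, -40000.0, 120.0

def G(T, p, N):
    return N * (h0 - T * s0 + R * T * math.log(p / p0))

def deriv(f, x):
    h = 1e-6 * x  # central finite difference
    return (f(x + h) - f(x - h)) / (2 * h)

T, p, N = 500.0, 2e5, 3.0
S = -deriv(lambda t: G(t, p, N), T)   # S = -(dG/dT)_p
V = deriv(lambda q: G(T, q, N), p)    # V = (dG/dp)_T
H = G(T, p, N) + T * S                # H = G + T*S

print(abs(V - N * R * T / p) < 1e-8)  # V recovers the ideal-gas law: True
print(abs(H - N * h0) < 1e-3)         # in this toy model H = N*h0: True
```

Everything here follows from the single function $G$; no separate equations for $S$, $V$, or $H$ were supplied, which is exactly the consistency guarantee discussed below.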

Specifying the entire thermodynamic system using a single thermodynamic potential guarantees thermodynamic consistency. This is the requirement that all properties are correctly related via their derivatives. This may be accidentally violated if the system is specified another way, i.e., via any of the derivatives. Many initial thermodynamic models are inconsistent as they specify simple polynomials in temperature for both the heat capacity and the density, but these cannot be integrated into a consistent thermodynamic potential.

## Simplifying relationships

A large number of thermodynamic derivatives are required to implement the minimisation. This section presents a number of useful expressions and approaches which interrelate various derivatives, reducing the number that must be implemented to a minimum.

First, generalised partial properties are introduced to demonstrate that all extensive properties can be expressed in terms of their derivatives in their extensive variables. The Bridgman tables provide a convenient method of expressing any derivative of the thermodynamic potentials or their variables in terms of just three "material derivatives". A "generalised" product rule is then introduced to demonstrate how derivatives may be interchanged, particularly for the triple product rule. Finally, a key relationship between the partial molar and partial "volar" properties is derived.

### Partial properties

As demonstrated in the section on solution for the internal energy, functions which are first-order homogeneous in their variables can be immediately expressed in terms of their derivatives. This is useful as it provides a relationship between the internal energy and the other thermodynamic potentials; however, a generalised rule would eliminate the need to generate expressions for thermodynamic properties if their derivatives are available.

Unfortunately, the two variable sets considered here contain both homogeneous first-order ($\left\{N_i\right\}$, $V$) and inhomogeneous ($T$, $p$) variables. In this case, Euler's method does not extend to expressions which are functions of both; however, a similar solution can be derived for these expressions provided the inhomogeneous variables are restricted to intensive properties and the extensive variables together uniquely specify the total size of the system.

The derivation presented here is a generalisation of the proof for partial molar properties which can be found in any thermodynamics text (e.g., Smith, Van Ness, and Abbott, Sec. 11.2, 7th Ed.). Consider an extensive thermodynamic property, $M$, which is a function of extensive $\left\{X_i\right\}$ and intensive $\left\{y_i\right\}$ variables. The total differential is as follows, The extensive property $M$ can be converted to a corresponding intensive property, $m$, by dividing by the system amount, i.e., $M=m\,N$. If the extensive properties are held constant and they are sufficient to determine the total system amount, $N$, then the system size, $N$, is also constant and may be factored out of $M$ in the intensive partial differential terms. In addition, ${\rm d} M={\rm d} (N\,m) = m\,{\rm d} N+N\,{\rm d}m$ and ${\rm d} X_i={\rm d} (N\,x_i) = x_i\,{\rm d} N+N\,{\rm d}x_i$. Inserting these and factoring out the terms in $N$ and ${\rm d}N$ yields, As ${\rm d}N$ and $N$ can vary independently, this equation is only satisfied if the terms in parentheses are each zero. Multiplying the first term in parentheses by $N$ and setting it equal to zero yields the required identity, where $\bar{\bar{m}}_i=\left(\partial M/\partial X_i\right)_{\left\{X_{j\neq i}\right\},\left\{y_{k}\right\}}$ is a partial property. This is a generalised solution for an extensive property in terms of its partial properties (its derivatives in its extensive variables) and is the primary objective of our derivation; however, an additional important equation for the partial properties can easily be obtained. The derivative of Eq.\eqref{eq:partialProperty} (first divided by $N$) is, This can be used to eliminate ${\rm d}m$ from the right term of Eq.\eqref{eq:pmolarintermediate}, which is also set equal to zero (and multiplied by $N$) to give, This is the generalised Gibbs-Duhem equation which interrelates the intensive properties of the system to the partial properties.
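The identity $M=\sum_i X_i\,\bar{\bar{m}}_i$ can be checked numerically in its simplest setting: the volume of an ideal-gas mixture at fixed $T$ and $p$, where every partial molar volume is $RT/p$ (the values below are arbitrary illustrative numbers):

```python
# Numeric check of M = sum_i X_i * (dM/dX_i) for the volume of an
# ideal-gas mixture at fixed T and p.
R = 8.314

def volume(N, T=400.0, p=1e5):
    return sum(N) * R * T / p

N = [1.0, 2.5, 0.5]
h = 1e-6
vbar = []  # partial molar volumes (dV/dN_i)_{T,p,N_j} by central difference
for i in range(len(N)):
    Np, Nm = N.copy(), N.copy()
    Np[i] += h
    Nm[i] -= h
    vbar.append((volume(Np) - volume(Nm)) / (2 * h))

V_from_partials = sum(n * v for n, v in zip(N, vbar))
print(abs(V_from_partials - volume(N)) < 1e-9)  # True
```

Summing the molar amounts weighted by their partial molar volumes recovers the total volume exactly, as the identity requires.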

The most well known applications of Eqs.\eqref{eq:partialProperty} and \eqref{eq:GibbsDuhem} are for the partial molar properties when $p$, $T$, and $\left\{N_i\right\}$ are the state variables. In this case Eq.\eqref{eq:partialProperty} is where $\bar{m}_i=\left(\partial M/\partial N_i\right)_{p,T,\left\{N_{j\neq i}\right\}}$ is the partial molar property. The most important partial molar property is the chemical potential, and it has already been proven that Eq.\eqref{eq:partialMolarProperty} applies in this case, i.e., Eq.\eqref{eq:Grule} gives $G=\sum_iN_i\,\mu_i$. The corresponding Gibbs-Duhem equation for the chemical potential in $T,p,\left\{N_i\right\}$ is the most well-known form,

As derivatives are distributive, and $T$ and $p$ are held constant, the partial molar properties of the thermodynamic potentials satisfy the same relationships as the original potentials (Eqs.\eqref{eq:Urule}-\eqref{eq:Grule}). Thus, the partial molar properties not only provide the derivatives in the molar amounts, but also completely describe all thermodynamic potentials and many extensive quantities (such as $V$). SimCem therefore only requires the implementation of expressions for three partial molar properties ($\mu_i$, $\bar{v}_i$, $\bar{h}_i$) to specify all the thermodynamic potentials and their first derivatives in the molar amounts for the $\left\{T,p,\left\{N_i\right\}\right\}$ variable set.

For the $\left\{T,V,\left\{N_i\right\}\right\}$ variable set, Eq.\eqref{eq:partialProperty} is, where $\breve{m}_i = \left(\partial M/\partial N_i\right)_{T,V,\left\{N_{j\neq i}\right\}}$ is termed here a partial "volar" quantity. A more useful form of this expression is Eq.\eqref{eq:molartovolarproperty}, which is derived in the later section on the transformation between $\bar{m}_i$ and $\breve{m}_i$.

### Material properties (Bridgman tables)

There are a number of interesting properties which are derivatives of the thermodynamic potentials. For example, consider the isobaric heat capacity,

Using partial molar properties allows us to illustrate a complication in the calculation of these material properties for systems of multiple phases/sub-systems. For example, expressing the (extensive) heat capacity in terms of the partial molar (AKA specific) heat capacity using Eq.\eqref{eq:partialMolarProperty} yields the following expression, where $\bar{c}_{p,i}=\left(\partial \bar{h}_i/\partial T\right)_{p}$ is the partial molar isobaric heat capacity. The term on the LHS of Eq.\eqref{eq:mixCp} arises from changes to the enthalpy caused by species transferring in and out of the system as the equilibrium state changes. This results in additional contributions to the apparent heat capacity above the partial molar isobaric heat capacity. For example, when a single-component fluid is at its boiling point, the apparent heat capacity of the overall system is infinite as $\partial N_i/\partial T$ is infinite due to the discontinuous change from liquid to vapour causing the instantaneous transfer of molecules from one phase to another.

To complement the "equilibrium" thermodynamic $C_p$ above, it is convenient to define "frozen" thermodynamic derivatives where there are no molar fluxes, i.e., the "frozen" isobaric heat capacity, The "frozen" properties are required while calculating the gradient of thermodynamic potentials during minimisation, and arise as all molar quantities are held constant while these derivatives are taken.

The $C_p$ is just one material property; however, there are many other thermodynamic derivatives which may be calculated. Fortunately, the Bridgman tables are a convenient method to express any material property as a function of just three key material derivatives. The heat capacity is one, and the other two are the isothermal compressibility ($\beta_T$) and the isobaric expansivity ($\alpha$), where $\bar{\alpha}_i\,\bar{v}_i= \left(\partial \bar{v}_{i}/\partial T\right)_{p,\left\{N_j\right\}}$ and $\bar{\beta}_{T,i}\,\bar{v}_i=-\left(\partial \bar{v}_i/\partial p\right)_{T,\left\{N_j\right\}}$. Again, "frozen" material derivatives are available, The terms $\left(\partial N_i/\partial T\right)_{p,\left\{N_{j\neq i}\right\}}$ and $\left(\partial N_i/\partial p\right)_{T,\left\{N_{j\neq i}\right\}}$ which appear in the material properties must be determined from the solution to the thermodynamic minimisation problem. They quantify how the equilibrium molar amounts change for a variation in temperature and pressure, and thus must account for the constraints placed on the equilibrium state and the movement of the minima.

The Bridgman table approach decomposes every thermodynamic derivative into a look-up table for the numerator and denominator expressed in terms of the three material derivatives, $C_p$, $\alpha$, and $\beta_T$. For example, consider the following "unnatural" derivative, Thus, to generate any derivative required for minimisation, only the three "material" derivatives and three partial molar properties are required.

### Product rules

This section is almost directly copied from this math.StackExchange.com post.

Consider any expression for a thermodynamic property, $M$, written in terms of the state variables, $M=M\left(\vec{X}\right)$. It is straightforward to transform this function into an implicit function $F=F\left(M,\,\vec{X}\right)=M(\vec{X}) - M=0$ which helps to illustrate that $M$ can be treated as a variable on an equal basis as $\vec{X}$. To allow a uniform representation, the arguments of $F$ are relabeled such that $F(x_1,x_2,\ldots,x_N)=0$. As the function $F$ remains at zero, holding all variables except two constant, taking the total derivative, and setting ${\rm d}F=0$ yields the following identity, This rule can be combined repeatedly and the terms on the RHS eliminated provided the variables loop back on themselves. For example, where the variables held constant were dropped for brevity on the right hand side. The first value $n=2$ yields the well-known expression, or, more familiarly (and without the explicit variables held constant), Finally, $n=3$ yields the triple product rule, where it is implied that all other variables other than $x_1$, $x_2$, and $x_3$ are held constant. This rule has wide application but is particularly attractive when derivatives hold an "awkward" thermodynamic property constant. For example, consider the following case The LHS is difficult to directly calculate in the state variables chosen here; however, the RHS arising from the triple product rule is in terms of natural derivatives of $G$ which are straightforward to calculate. The triple product rule is used extensively in the following sections to express complex expressions in terms of natural derivatives.
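The triple product rule can be verified numerically for the ideal-gas law $p\,V=N\,R\,T$, treated as the implicit function $F(p,V,T)=0$ (the numeric values below are arbitrary):

```python
# Numeric check of (dV/dT)_p (dT/dp)_V (dp/dV)_T = -1 for p V = N R T.
R, N = 8.314, 2.0
T0, p0 = 350.0, 1.2e5
V0 = N * R * T0 / p0

def deriv(f, x):
    h = 1e-6 * x  # central finite difference
    return (f(x + h) - f(x - h)) / (2 * h)

dVdT = deriv(lambda T: N * R * T / p0, T0)    # (dV/dT)_p
dTdp = deriv(lambda p: p * V0 / (N * R), p0)  # (dT/dp)_V
dpdV = deriv(lambda V: N * R * T0 / V, V0)    # (dp/dV)_T

print(abs(dVdT * dTdp * dpdV + 1.0) < 1e-9)  # the product is -1: True
```

Note the product is $-1$, not $+1$, which is the classic trap when chaining partial derivatives with different variables held constant.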

### Relation between $\bar{x}_i$ and $\breve{x}_i$

Consider a property, $M$, which is a function of four variables $x_1,x_2,x_3,x_4$. The total derivative is, Holding two variables constant and taking a derivative with respect to a third yields, Three relabellings of this expression can be used to interrelate two derivatives which differ only in the one variable held constant, where the final term in parentheses is cancelled to zero using the triple product rule. This equation is particularly useful for changing between partial quantities while pressure or volume is held constant. For example, or, in the notation used so far, This is a more useful form of Eq.\eqref{eq:partialVolarProperty}. The partial molar volume is inconvenient to derive directly when volume is an explicit variable; however, it may be expressed more conveniently using the triple product rule, This allows us to obtain partial molar properties conveniently when working with volume as a state variable.

## Gibbs Models ($p$ as a variable)

In this section, the calculation of the required properties for minimisation with phases specified by the following set of variables is considered, In this particular variable set, the required constraint derivatives to implement the derivative of the Lagrangian are as follows, The thermodynamic potential derivatives required to specify the derivatives of the Lagrangian (Eq.\eqref{eq:Fderiv}) as generated from Eqs.\eqref{eq:GradEqsStart}-\eqref{eq:GradEqsEnd} are where $f=\left\{H,G,-S,U,A\right\}$. These derivatives are easily expressed using the Bridgman tables in terms of the three standard material derivatives and the results are given in the table below.

| $Y_1$ | $Y_2$ | $f$ | $\left(\frac{\partial f}{\partial N_i}\right)_{T,p,\left\{N_{j\neq i}\right\}}$ | $\left(\frac{\partial f}{\partial T}\right)_{p,\left\{N_j\right\}}$ | $\left(\frac{\partial f}{\partial p}\right)_{T,\left\{N_j\right\}}$ |
|---|---|---|---|---|---|
| $p$ | $T$ | $G$ | $\mu_i$ | $-S$ | $V$ |
| $V$ | $T$ | $A$ | $\bar{a}_i$ | $-(S+p\,\alpha\,V)$ | $p\left(\beta_{T}\,V\right)_{\left\{N_i\right\}}$ |
| $p$ | $S$ | $H$ | $\bar{h}_i$ | $C_{p,\left\{N_j\right\}}$ | $V(1-T\,\alpha)$ |
| $p$, $V$ | $H$, $U$ | $-S$ | $-\bar{s}_i$ | $-T^{-1}\,C_{p,\left\{N_j\right\}}$ | $\left(\alpha\,V\right)_{\left\{N_i\right\}}$ |
| $V$ | $S$ | $U$ | $\bar{u}_i$ | $C_{p,\left\{N_j\right\}}-p\left(\alpha\,V\right)_{\left\{N_i\right\}}$ | $p\left(\beta_{T}\,V\right)_{\left\{N_i\right\}}-T\,\left(\alpha\,V\right)_{\left\{N_i\right\}}$ |

In summary, a model using this variable set must provide implementations of $\mu_i$, $\bar{v}_i$, $\bar{s}_i$, $\bar{\alpha}_i$, $\bar{\beta}_{T,i}$, and $\bar{c}_{p,i}$. These are all straightforward to obtain by taking derivatives of a Gibbs free energy function in its natural variables, or by integration and differentiation if a mechanical equation of state, $V\left(T,p,\left\{N_i\right\}\right)$, is available.

All other partial molar properties are obtained using Eqs.\eqref{eq:partialmolarrelationstart}-\eqref{eq:partialmolarrelationend} expressed in the following form. Most other relevant thermodynamic properties are calculated using the Bridgman tables.

### Ideal gas model

The ideal gas model is not only a good approximation for gases at low pressures, but it is also a key reference "state" for most complex equations of state. A thermodynamic path can be constructed to the ideal gas state at low pressures for most models, thus an "ideal-gas" contribution can be factored out of these models.

The ideal gas chemical potential is defined as follows, where $p_0$ is the reference pressure at which the temperature-dependent term, $\mu^{(ig)}_{i}(T)$, is measured and $N_\alpha$ is the total moles of the phase $\alpha$.

Using the chemical potential as a generating function, the minimal set of partial molar properties is derived below, where the partial molar heat capacity $\bar{c}_{p,i}^{(ig)}$ has been implicitly defined via $C_{p,\left\{N_i\right\}}^{(ig)} = \sum_i N_i\,\bar{c}_{p,i}^{(ig)}$.

The model is not completely specified until the pure function of temperature, $\mu_{i}^{(ig)}(T)=h_{i}^{(ig)}(T) - T\,s_{i}^{(ig)}(T)$, is specified, and this is discussed in the following section.

### Ideal-gas isobaric contribution $\mu_{i}^{(ig)}(T)$

The term $\mu_{i}^{(ig)}(T)$ is a function of temperature which is directly related to the thermodynamic properties of a single isolated molecule (molecules of ideal gases do not interact). There are many theoretical approaches to expressing these terms, from simple models for solids and molecules (e.g., rigid rotors) up to quantum-mechanical calculations which can directly provide values for real molecules. However these results are obtained, this term is typically parameterised using polynomial equations.

The most common parameterisation is to use a polynomial for the heat capacity. This is common to the NIST, NASA CEA, ChemKin, and many other thermodynamic databases. A heat capacity polynomial can be related to the enthalpy and entropy through two additional constants. To demonstrate why this is correct, consider a closed system. At constant pressure the following expression holds, Thus, where $T_{ref}$ is a reference temperature. Performing this integration for the polynomial of Eq.\eqref{eq:cppoly}, where the two additional constants $\bar{h}_i^0$ and $\bar{s}_i^0$ must be determined through construction of a thermodynamic path to a known value of the entropy and enthalpy. This is usually through heats of reaction, solution, and equilibria such as vapour pressure, linking back to the elements at standard conditions.
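The integration step can be sketched directly: given $c_p(T)=\sum_k a_k T^k$, the enthalpy and entropy follow by term-by-term integration plus the two constants (the coefficients, $h_0$, $s_0$, and $T_{ref}$ below are made-up illustrative values, not database entries):

```python
import math

# c_p = a0 + a1*T + a2*T^2 with hypothetical coefficients.
a = [29.0, 1.0e-3, -2.0e-7]
h0, s0, Tref = -50000.0, 100.0, 298.15

def cp(T):
    return sum(ak * T**k for k, ak in enumerate(a))

def h(T):  # h(T) = h0 + integral from Tref to T of c_p dT
    return h0 + sum(ak * (T**(k + 1) - Tref**(k + 1)) / (k + 1)
                    for k, ak in enumerate(a))

def s(T):  # s(T) = s0 + integral from Tref to T of (c_p / T) dT
    return (s0 + a[0] * math.log(T / Tref)
            + sum(ak * (T**k - Tref**k) / k
                  for k, ak in enumerate(a) if k > 0))

# Consistency checks: dh/dT = c_p and ds/dT = c_p / T.
T, dT = 600.0, 1e-3
print(abs((h(T + dT) - h(T - dT)) / (2 * dT) - cp(T)) < 1e-5)      # True
print(abs((s(T + dT) - s(T - dT)) / (2 * dT) - cp(T) / T) < 1e-8)  # True
```

Note the $a_0$ term of the entropy integral produces the logarithm while the higher powers integrate as ordinary monomials, which is why databases store the constant term separately in their entropy expressions.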

To allow this data to be expressed as a single function in Simcem, it is expressed directly as $\mu_i^{(ig)}(T)$ using the following transformation, where,

The most comprehensive (and most importantly, free!) source of ideal gas heat capacities and constants is the NASA CEA database, but additional constants are widely available. When collecting $c_p$ data, a distinction must be made between the calculated ideal gas properties and the measured data. The measured data may include additional contributions from the interactions of the molecules and thus must be fitted with the full chemical potential model and not just the ideal gas model.

### Incompressible phase

Another useful reference state is the incompressible phase. This model is used to describe liquid and solid phases when an equation of state describing all phases at once is unavailable. The generating chemical potential is as follows, where $\mu^{(ip)}_{i}(T)$ is again a temperature-dependent term and $\bar{v}^0_{i}$ is the (constant) partial molar volume of the incompressible species $i$. Using the expression for the chemical potential as a generating function all required properties are recovered, As $\alpha$ and $\beta_T$ are zero, some thermodynamic properties are not well specified but can be determined by other means. For example, $\bar{c}_p^{(ip)}=\bar{c}_v^{(ip)}$. In Simcem, ideal solids still contain a mixing entropy term which is included to allow ideal solid solutions to be constructed and used as a base class for more complex models.

## Helmholtz Models ($V$ as a variable)

In this section, the calculation of the required properties for minimisation with phases specified by the following set of variables is considered, In this variable set, the required non-zero derivatives to compute the constraint functions are as follows, These derivatives are convenient to determine in this variable set and also indirectly specify $\alpha$ and $\beta_T$, which are two of the three material derivatives required to use the Bridgman tables. The third, a molar derivative, is also convenient to determine and provides a direct path to the partial molar volume as given in Eq.\eqref{eq:TVpartialmolarvolume} and reproduced below, Thus, these three derivatives must be implemented by all models and are used to derive the other properties. The derivatives of the potentials are as follows where $f=\left\{H,G,-S,U,A\right\}$. The derivatives for each potential are specified below in terms of the most convenient properties/derivatives for this variable set.

| $Y_1$ | $Y_2$ | $f$ | $\left(\frac{\partial f}{\partial N_i}\right)_{T,V,\left\{N_{j\neq i}\right\}}$ | $\left(\frac{\partial f}{\partial T}\right)_{V,\left\{N_j\right\}}$ | $\left(\frac{\partial f}{\partial V}\right)_{T,\left\{N_j\right\}}$ |
|---|---|---|---|---|---|
| $p$ | $T$ | $G$ | $\breve{g}_i$ | $V\,\left(\frac{\partial p}{\partial T}\right)_{V,\left\{N_i\right\}}-S$ | $V\left(\frac{\partial p}{\partial V}\right)_{T,\left\{N_i\right\}}$ |
| $V$ | $T$ | $A$ | $\breve{a}_i$ | $-S$ | $-p$ |
| $p$ | $S$ | $H$ | $\breve{h}_i$ | $C_{V,\left\{N_i\right\}}+V\,\left(\frac{\partial p}{\partial T}\right)_{V,\left\{N_i\right\}}$ | $V\left(\frac{\partial p}{\partial V}\right)_{T,\left\{N_i\right\}}+T\left(\frac{\partial p}{\partial T}\right)_{V,\left\{N_i\right\}}$ |
| $p$, $V$ | $H$, $U$ | $-S$ | $-\breve{s}_i$ | $-\frac{C_{V,\left\{N_i\right\}}}{T}$ | $-\left(\frac{\partial p}{\partial T}\right)_{V,\left\{N_i\right\}}$ |
| $V$ | $S$ | $U$ | $\breve{u}_i$ | $C_{V,\left\{N_i\right\}}$ | $T\,\left(\frac{\partial p}{\partial T}\right)_{V,\left\{N_i\right\}}-p$ |

where $C_V = \left(\partial U/\partial T\right)_V$, and is related to the isobaric heat capacity using the following relationship, In summary, models should provide equations to calculate $p$, $\breve{a}_i$, $\breve{u}_i$, $C_{V,\left\{N_i\right\}}$, $\left(\partial p/\partial V\right)_{T,\left\{N_i\right\}}$, $\left(\partial p/\partial T\right)_{V,\left\{N_i\right\}}$, and $\left(\partial p/\partial N_i\right)_{T,V,\left\{N_{j\neq i}\right\}}$. These derivatives are used to compute the frozen $\alpha$ and $\beta_T$ values using Eqs.\eqref{eq:VMatDeriv1} and \eqref{eq:VMatDeriv2}. The third material derivative, $C_{p,\left\{N_i\right\}}$, is then obtained from Eq.\eqref{eq:CpCv}. The partial molar volume is calculated from Eq.\eqref{eq:TVpartialmolarvolume}. The partial volar/molar quantities are related by Eq.\eqref{eq:molartovolarproperty}; however, the partial volar Helmholtz free energy is equal to the chemical potential, just like any molar derivative of a potential when its natural variables are held constant (see Eq.\eqref{eq:ChemPotDefinition}). Thus, the partial volar/molar Gibbs and Helmholtz free energies are closely related, All other required partial thermodynamic potential properties are derived using straightforward applications of Eq.\eqref{eq:molartovolarproperty} and the partial molar relations of Eqs.\eqref{eq:partialmolarrelationstart}-\eqref{eq:partialmolarrelationend}, the results of which are below,
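One standard form of the $C_p$/$C_V$ relationship (assumed here, since the equation itself is not reproduced above) is $C_p = C_V - T\left(\partial p/\partial T\right)^2_{V}/\left(\partial p/\partial V\right)_T$, which for the ideal gas must reduce to $C_p - C_V = N\,R$:

```python
# Check of C_p - C_V = -T (dp/dT)_V^2 / (dp/dV)_T for p = N R T / V,
# with arbitrary illustrative state values.
R, N, T, V = 8.314, 2.0, 500.0, 0.05

dp_dT = N * R / V            # (dp/dT)_V
dp_dV = -N * R * T / V**2    # (dp/dV)_T

delta = -T * dp_dT**2 / dp_dV
print(abs(delta - N * R) < 1e-6)  # C_p - C_V = N R: True
```

This is exactly the route by which the third material derivative $C_{p,\left\{N_i\right\}}$ is recovered from the volume-explicit derivatives a Helmholtz-based model provides.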

### Ideal gas model

As experimental ideal-gas data is usually presented in terms of isobaric polynomials, the ideal gas functions here are based on the previous form of the ideal gas model given in Eq.\eqref{eq:idealgasmuTP}. Transforming the pressure variable to phase volume yields the following definition of the chemical potential and partial volar Helmholtz free energy, The pressure derivatives are obtained from the well known relation $p=N\,R\,T/V$, Finally, the internal energy and heat capacity are as follows,

## Other models

### Excess phases

Assume a phase/model is infinite in quantity; its extensive state variables must therefore be excluded from the system under study. The infinite phase can exchange mass with the system under study, thus it has a chemical potential. The change in the Gibbs free energy of the (infinite) system can be defined in terms of the changes in the species within it: \begin{align} \Delta G = \sum_i \mu_i\,\Delta N_i \end{align} where $\mu_i$ is the constant chemical potential. We note then that the entropy change and volume change of this system due to variation in the system temperature/pressure must be zero.

## Automatic differentiation

The best introduction to automatic differentiation I found is from the 2010 Sintef winter school, and this section is largely based on those notes.

Our goal is to evaluate a function and its derivatives at once. By considering how derivatives are propagated through operators/functions we see that the $k$th derivative of a function $f(g)$ can be expressed in terms of the first $k$ derivatives of $g$. To prove this, consider the Taylor expansion of a general function $f$ about the point $a$: \begin{align*} f(x) &= f(a) + \frac{1}{1!}\left(\frac{\partial f}{\partial x}\right)(a)(x-a) + \frac{1}{2!}\left(\frac{\partial^2 f}{\partial x^2}\right)(a)(x-a)^2 + \ldots \\ &= \sum_{k=0}^\infty \frac{1}{k!}\left(\frac{\partial^k f}{\partial x^k}\right)(a)(x-a)^k \\ &= \sum_{k=0}^\infty f_k(a)(x-a)^k \end{align*} Each Taylor term $f_k$ corresponds to the $k$th derivative (divided by $k!$). Thus if we can determine how the Taylor coefficients of the result of an operation are related to the Taylor coefficients of its arguments, we can propagate derivative information through the operation.

### Basic arithmetic

To demonstrate this we start from two basic definitions: one for the Taylor coefficients of a constant, $c$: \begin{align*} (c)_k = \begin{cases} c & \text{for $k=0$}\\ 0 & \text{for $k>0$} \end{cases}, \end{align*} and another for the Taylor coefficients of a variable, $x$: \begin{align*} (x)_k = \begin{cases} x & \text{for $k=0$}\\ 1 & \text{for $k=1$}\\ 0 & \text{for $k>1$} \end{cases}. \end{align*} Our first operations are addition and subtraction, where it is easy to see the Taylor series of the result is trivially calculated from the Taylor series of the arguments: \begin{align*} \left(f+g\right)_k &= f_k + g_k\\ \left(f-g\right)_k &= f_k - g_k \end{align*} For multiplication, consider Taylor expansions of both the result and the two functions: \begin{align*} \sum_{k=0}^\infty \left(f\,g\right)_k (x-a)^k &= \sum_{l=0}^\infty f_l (x-a)^l \sum_{m=0}^\infty g_m (x-a)^m \\ &= \sum_{l=0}^\infty\sum_{m=0}^\infty f_l\,g_m (x-a)^{l+m} \end{align*} We can then match terms in equal powers of $(x-a)$ on either side of the equation to yield the final result: \begin{align*} \left(f\,g\right)_k = \sum_{i=0}^k f_{i} g_{k-i} \end{align*} Division is slightly more challenging; instead, consider that $f = (f\div g)\, g$, and insert the Taylor expansions: \begin{align*} \sum_{k=0}^\infty f_k (x-a)^k &= \sum_{l=0}^\infty(f\div g)_l (x-a)^l \sum_{m=0}^\infty g_m (x-a)^m \\ &= \sum_{l=0}^\infty \sum_{m=0}^\infty (f\div g)_l g_m (x-a)^{l+m} \end{align*} Again equating terms with equal powers exactly as with multiplication: \begin{align*} f_k &= \sum_{l=0}^k(f\div g)_l g_{k-l} \\ &= \sum_{l=0}^{k-1} (f\div g)_l g_{k-l} + (f\div g)_k g_0 \\ (f\div g)_k &= \frac{1}{g_0}\left(f_k - \sum_{l=0}^{k-1} (f\div g)_l g_{k-l}\right) \end{align*} where we removed our target term from the sum in the second line, then rearranged to make it the subject in the third line.
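These recurrences can be sketched directly on plain lists of coefficients $[f_0, f_1, \ldots, f_K]$ truncated at order $K$ (a minimal illustration, not a full AD library):

```python
# Truncated Taylor-coefficient arithmetic at order K = 4.
K = 4

def var(x0):  # Taylor coefficients of the variable x expanded about x0
    return [x0, 1.0] + [0.0] * (K - 1)

def mul(f, g):  # (f g)_k = sum_i f_i g_{k-i}
    return [sum(f[i] * g[k - i] for i in range(k + 1)) for k in range(K + 1)]

def div(f, g):  # (f/g)_k = (f_k - sum_{l<k} (f/g)_l g_{k-l}) / g_0
    out = []
    for k in range(K + 1):
        out.append((f[k] - sum(out[l] * g[k - l] for l in range(k))) / g[0])
    return out

one = [1.0] + [0.0] * K
one_plus_x = var(0.0)
one_plus_x[0] += 1.0          # coefficients of 1 + x about x = 0

f = div(one, one_plus_x)      # 1/(1+x) = 1 - x + x^2 - x^3 + x^4 - ...
print(f)                      # [1.0, -1.0, 1.0, -1.0, 1.0]
print(mul(one_plus_x, f))     # multiplying back: [1.0, 0.0, 0.0, 0.0, 0.0]
```

Reading off coefficient $k$ and multiplying by $k!$ gives the $k$th derivative at the expansion point, which is the whole purpose of propagating Taylor coefficients.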

### Special functions

Now we focus on special functions. To solve these, we need the general derivative of a Taylor series: \begin{align*} \frac{\partial f}{\partial x} = \sum_{k=1}^\infty k\,f_k (x-a)^{k-1} \end{align*} Considering $\ln$ first, we have the following differential relationship: \begin{align*} \frac{\partial}{\partial x} \ln f &= \frac{\partial f}{\partial x} \frac{1}{f} \\ f \frac{\partial}{\partial x} \ln f &= \frac{\partial f}{\partial x}, \end{align*} where we have multiplied by $f$ to avoid polynomial division when the Taylor series are inserted. Doing this now, \begin{align*} \sum_{i=0}^\infty f_i (x-a)^i \sum_{j=1}^\infty j\left(\ln f\right)_j (x-a)^{j-1} &= \sum_{k=1}^\infty k\, f_k (x-a)^{k-1} \end{align*} Multiplying both sides by $(x-a)$, factoring common terms as with multiplication: \begin{align*} \sum_{i=0}^\infty f_i \sum_{j=1}^\infty j\left(\ln f\right)_j (x-a)^{i+j} &= \sum_{k=1}^\infty k\,f_k (x-a)^k \\ \sum_{i=1}^k f_{k-i}\,i\left(\ln f\right)_i &= k\,f_k & \text{for $k>0$} \\ f_0\,k\,\left(\ln f\right)_k + \sum_{i=1}^{k-1} f_{k-i}\,i\left(\ln f\right)_i &= k\,f_k & \text{for $k>0$} \\ \left(\ln f\right)_k &= \frac{1}{f_0}\left(f_k - \frac{1}{k}\sum_{i=1}^{k-1} i\,f_{k-i}\,\left(\ln f\right)_i\right) & \text{for $k>0$} \end{align*} Where this expression only applies for $k>0$ as the first line has no constant terms within it; however, we have the trivial identity $(\ln f)_0 = \ln f_0$. Sine and cosine have to be calculated at the same time (regardless of which one is required). 
Starting from the general differentiation rules \begin{align*} \frac{\partial}{\partial x} \sin f &= \frac{\partial f}{\partial x} \cos f \\ \frac{\partial}{\partial x} \cos f &= -\frac{\partial f}{\partial x} \sin f \end{align*} Inserting the Taylor expansions and grouping terms with identical coefficients the result is found: \begin{align*} \left(\sin f\right)_k &= \frac{1}{k}\sum_{i=1}^k i\,f_i \left(\cos f\right)_{k-i} & \text{for $k>0$} \\ \left(\cos f\right)_k &= -\frac{1}{k}\sum_{i=1}^k i\,f_i \left(\sin f\right)_{k-i} & \text{for $k>0$}, \end{align*} again we have $\left(\sin f\right)_0 = \sin f_0$ and in general $\left(f(g)\right)_0 = f(g_0)$.
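The $\ln$ and sin/cos recurrences above can be sketched in the same coefficient-list style (again an illustration at fixed order $K$, not a library):

```python
import math

# ln and sin/cos recurrences on truncated coefficient lists [f_0, ..., f_K].
K = 4

def log_coeffs(f):
    out = [math.log(f[0])]  # (ln f)_0 = ln f_0
    for k in range(1, K + 1):
        s = sum(i * f[k - i] * out[i] for i in range(1, k))
        out.append((f[k] - s / k) / f[0])
    return out

def sincos_coeffs(f):
    s, c = [math.sin(f[0])], [math.cos(f[0])]  # (g(f))_0 = g(f_0)
    for k in range(1, K + 1):
        s.append(sum(i * f[i] * c[k - i] for i in range(1, k + 1)) / k)
        c.append(-sum(i * f[i] * s[k - i] for i in range(1, k + 1)) / k)
    return s, c

one_plus_x = [1.0, 1.0, 0.0, 0.0, 0.0]
print(log_coeffs(one_plus_x))  # ln(1+x) = x - x^2/2 + x^3/3 - x^4/4 + ...

x = [0.0, 1.0, 0.0, 0.0, 0.0]
sin_x, cos_x = sincos_coeffs(x)  # sin x = x - x^3/6; cos x = 1 - x^2/2 + x^4/24
```

Note the sine update at order $k$ only reads cosine coefficients up to $k-1$ (and vice versa), so the two series can be built up in lock-step within one loop.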

### Generalised Power law

Finally, we calculate the generalized power rule: \begin{align*} \frac{\partial}{\partial x} f^g &= f^g \left(\frac{\partial f}{\partial x} \frac{g}{f} + \frac{\partial g}{\partial x} \ln f\right) \\ f \frac{\partial}{\partial x} f^g &= f^g \frac{\partial f}{\partial x} g + f^g \frac{\partial g}{\partial x} f\,\ln f \end{align*} Inserting Taylor expansions: \begin{align*} \sum_{i=0}^\infty \sum_{j=1}^\infty j\,f_i\left(f^g\right)_j \left(x-a\right)^{i+j-1} &= \sum_{i=0}^\infty \left(f^g\right)_i\sum_{j=1}^\infty j\,f_j \sum_{k=0}^\infty g_k \left(x-a\right)^{i+j-1+k} \\ &\qquad + \sum_{i=0}^\infty \left(f^g\right)_i\sum_{j=1}^\infty j\,g_j \sum_{k=0}^\infty f_k \sum_{l=0}^\infty \left(\ln f\right)_l\left(x-a\right)^{i+j-1+k+l} \end{align*} Multiplying both sides by $\left(x-a\right)$ and grouping common indices/terms: \begin{align*} \sum_{i=0}^\infty \sum_{j=1}^\infty j\,f_i\left(f^g\right)_j \left(x-a\right)^{i+j} &= \sum_{i=0}^\infty \sum_{j=1}^\infty \sum_{k=0}^\infty j\left(f^g\right)_i\left[f_j g_k \left(x-a\right)^{i+j+k} + g_j f_k \sum_{l=0}^\infty \left(\ln f\right)_l\left(x-a\right)^{i+j+k+l}\right] \end{align*} Selecting all terms with the $m$th power: \begin{align*} \sum_{j=1}^m j\,f_{m-j}\left(f^g\right)_j &= \sum_{j=1}^{m} \sum_{i=0}^{m-j} j\left(f^g\right)_i\left[f_j g_{m-i-j} + g_j \sum_{k=0}^{m-i-j} f_k \left(\ln f\right)_{m-i-j-k}\right] \end{align*} Factoring the $m$th term from the left-hand side: \begin{align*} \left(f^g\right)_m &= \frac{1}{m\,f_0}\left(\sum_{j=1}^{m} \sum_{i=0}^{m-j} j\left(f^g\right)_i\left[f_j g_{m-i-j} + g_j \sum_{k=0}^{m-i-j} f_k \left(\ln f\right)_{m-i-j-k}\right] - \sum_{j=1}^{m-1} j\,f_{m-j}\left(f^g\right)_j\right) \end{align*} If the power is a constant, i.e., $g=a$, then the following simplified expression is obtained: \begin{align*} \left(f^g\right)_m &= \frac{1}{m\,f_0}\left(\sum_{j=1}^{m} j\left(f^g\right)_{m-j} f_j\,a - \sum_{j=1}^{m-1} j\,f_{m-j}\left(f^g\right)_j\right) \\ &= 
\frac{1}{m\,f_0}\sum_{j=1}^{m}\left(f^g\right)_{m-j} f_j\left(j\,a - m+j\right) \\ &= \frac{1}{f_0}\sum_{j=1}^{m}\left(\frac{(a + 1)j}{m} - 1\right) f_j \left(f^g\right)_{m-j}. \end{align*}
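The simplified constant-power recurrence can be checked against a known expansion, $(1+x)^a$ with $a=1/2$, whose Taylor coefficients about $x=0$ are the generalised binomial coefficients $\binom{a}{m}$:

```python
# Constant-power recurrence: (f^a)_m = (1/f_0) sum_{j=1}^{m}
#   ((a+1) j / m - 1) f_j (f^a)_{m-j},  checked for f = 1 + x, a = 1/2.
K, a = 4, 0.5
f = [1.0, 1.0, 0.0, 0.0, 0.0]   # coefficients of 1 + x about x = 0

out = [f[0] ** a]
for m in range(1, K + 1):
    out.append(sum(((a + 1) * j / m - 1) * f[j] * out[m - j]
                   for j in range(1, m + 1)) / f[0])

binom = [1.0]                   # C(a, m) = C(a, m-1) * (a - m + 1) / m
for m in range(1, K + 1):
    binom.append(binom[-1] * (a - m + 1) / m)

print(all(abs(o - b) < 1e-12 for o, b in zip(out, binom)))  # True
```

The constant-power case avoids the nested $\ln f$ convolutions entirely, which is why implementations usually special-case it.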

The most precise and concise derivation of thermodynamics I have found is the review of thermodynamics which is part of the Statistical Physics Using Mathematica course by James J. Kelly.

All relevant thermodynamic equations are concisely summarised on Wikipedia's table of thermodynamic equations.