# Background thermodynamics

This section outlines the derivation of thermodynamics with a focus on applied chemical thermodynamics. In short, thermodynamics is a set of differential and algebraic constraints on state variables. These constraints are really powerful, as they give you relationships between real measurable (and immeasurable) properties (the state variables). Ultimately we want to determine what will happen to real fluids, thus in the end we focus on determining the equilibrium state. But we're getting ahead of ourselves, let's carefully argue everything out.

## Thermodynamic systems, variables, and state

First, let's define some terminology. A thermodynamic system is the partitioning of some quantity of mass and energy from its surroundings through an enclosing boundary. The key idea is the division of what we are interested in (the system) from the uninteresting (the surroundings).

The boundary of the system may be physical (e.g., the walls of a vessel such as a balloon) or may be defined by some imaginary division of space (e.g., a finite volume in a CFD simulation). If the boundary is physical, then it may or may not be included as part of the system. For example, water droplets in air have a surface tension which acts like the skin of a balloon and pulls the drop into a spherical shape. This "stretched elastic" surface has an associated energy and it is at our discretion whether to include the energy as part of the system or as part of the surroundings (or neglect it entirely as an approximation).

Both physical and imaginary boundaries may be fixed or may change shape over time; in equilibrium thermodynamics we don't care about that, only about what the contents of the system are. If mass can pass through the boundary then the system is deemed open, and closed if it cannot.

The mass and energy contained inside a thermodynamic system may take many forms, but only the observable properties at the boundary of the system, such as volume, mass, surface area, and pressure, are visible to us. Thermodynamics focuses on these observable variables and tries to find mathematical relationships linking them together. Any other internal effect of the system's mass and energy cannot be seen unless it appears at the boundary, so thermodynamics does not immediately concern itself with these invisible properties. For example, a thermometer measures the temperature at its surface/boundary with the system it is inserted in. If the system is small enough, then the observable properties should be approximately constant across the system. For larger systems, we can always split them into smaller and smaller sub-systems until the observable properties of each sub-system are approximately constant. Moving on, we will always assume each system we consider is small enough (and thus homogeneous enough) to ignore any changes in observable properties across its volume. We do this so that we can later assume all processes are reversible, keeping us in the realm of equilibrium thermodynamics.

The sum total of the energy inside a system (the internal energy) is not directly observable as most of it is internally stored away from the boundary; however, the conservation of energy tells us it must exist. It should come as no surprise then that there are other internal variables, such as the entropy, which can only be discerned indirectly and yet are very interesting to thermodynamics (but more on this later).

The key external and internal variables are the so-called state variables. These are the variables that change whenever the system itself has changed. This is a circular argument which is easy to understand for the external variables, as we only know a system has changed if the observable properties themselves have changed. Counter-examples of non-state variables are the age of the system or the distance it has moved (assuming there's no external field like gravity to couple distance with energy), as these non-state variables can change without anything really happening to the system. If a complete set of state variables is collected, such that any change of the system results in a change in one (or more) of the collected state variables, then we have what is called an ensemble. Which variables are collected into an ensemble really depends on what we are observing and how we are observing it. For example, perhaps our external/observable variables are temperature and volume for a balloon. However, we could also measure the balloon's pressure and mass. Thermodynamics later tells us that the (molar) volume is directly related to the pressure and temperature, so it's redundant to observe all of these variables; however, it's important to realise that all state variables are of equal importance: they are all just observable quantities and we can calculate one from another once we have enough of them. As chemical engineers/chemists, we like to talk about temperature and pressure all the time, but that does not mean that using volume/entropy/internal energy or some free energy instead is any less valid. The mathematics does not care. To try to "shake off" the idea that some variables are special, mathematicians have a tool called the implicit function theorem. Instead of writing $y=f(x)$, they write $y-f(x)=0$ or $f'(y,\,x)=0$. This final form shows that $y$ and $x$ are on an equal footing; either can be obtained from the other (under certain conditions), i.e. $x=f^{-1}(y)$.
A simple example of this in thermodynamics is the ideal gas relationship, \begin{align}\label{eq:idealgas} P\,V = N\,R\,T. \end{align} If you've done any calculations with this you should be familiar with rearranging this equation to make $P$, $V$, $N$, or $T$ the subject of the equation, \begin{align*} P &= N\,R\,T\,V^{-1} & V &= N\,R\,T\,P^{-1} \\N &= P\,V \left(R\,T\right)^{-1} & T &= P\,V \left(N\,R\right)^{-1}. \end{align*} Thus it's a good mental exercise to think of the ideal gas law as $P\,V-N\,R\,T=0$ and realise that all state variables are equal in importance.
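To make the implicit-function view concrete, here is a small numeric sketch (the values are illustrative, not from the text) treating the ideal gas law as the residual $f(P,V,N,T)=P\,V-N\,R\,T=0$ and rearranging it for whichever variable happens to be unknown:

```python
# Numerical sketch: the ideal gas law as an implicit relation
# f(P, V, N, T) = P*V - N*R*T = 0. Units: Pa, m^3, mol, K.
R = 8.314  # J/(mol K), molar gas constant

def ideal_gas_residual(P, V, N, T):
    """The implicit form f(P, V, N, T) = P*V - N*R*T."""
    return P * V - N * R * T

# Any one variable follows from the other three:
P = 1.0e5          # Pa
N = 1.0            # mol
T = 298.15         # K
V = N * R * T / P  # rearranged for V

# The implicit relation is satisfied (up to rounding):
assert abs(ideal_gas_residual(P, V, N, T)) < 1e-9
print(f"V = {V:.5f} m^3")
```

The same residual function serves no matter which variable is made the subject, which is exactly the sense in which no state variable is special.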

Back to the "ensemble". The number of state variables needed to build an ensemble really depends on the system studied. For example, the thermodynamic system of a battery uses the same variables as a balloon but also includes the electric potential (and current) across its terminals, as this is one way energy can be transferred out of the system. This is a key concept: an ensemble is complete when any process connected to the transfer of mass and/or energy in the system has its associated state variables. These variables always come in "conjugate" pairs (e.g. $p$ and $V$), but this is discussed more later.

Now that the initial terminology has been outlined, a governing equation for the changes in the energy of a thermodynamic system can be derived.

## The fundamental equation

The power of thermodynamics arises from its ability to find simple universal relationships between observable state variables. These relationships are a direct consequence of the laws of thermodynamics.

The first law of thermodynamics is an observation that energy is neither created nor destroyed but only transformed between different forms. Every thermodynamic system may contain internally some energy, $U$. The first law then allows us to immediately write $U_{sys.}+U_{surr.}=C$, where $C$ is a constant and is the total energy of the universe, but this equation is not particularly useful. Let's look instead at how energy might be transferred: \begin{align}\label{eq:firstlaw} {\rm d} U_{sys.} = - {\rm d}U_{surr.}, \end{align} where the ${\rm d}X$ indicates an infinitesimal change in $X$. This is called an exact differential as $U_{sys.}$ cannot change without $U_{surr.}$ changing, thus the two variables are always linked.

From further observation of real systems, two different types of energy transfer are identified: heat transfer and "work", \begin{align}\label{eq:initialebalance} {\rm d} U_{sys.} = - {\rm d}U_{surr.} = \partial Q_{surr.\to {sys.}} - \partial W_{sys.\to surr.}, \end{align} where $\partial Q_{surr.\to {sys.}}$ is the heat transferred to the system due to temperature differences and $\partial W_{sys.\to surr.}$ represents all forms of work carried out by the system (the negative sign on the work term is a conventional choice). The work term represents many forms of energy transfer, so why is heat transfer singled out? Well, it appears that Nature wants to maximise heat transfer over work whenever possible, and we'll get back to this later when reversibility is introduced.

You should note that a $\partial$ symbol is used for the work/heat-transfer terms to indicate inexact differential relationships. A thermodynamic system may transfer arbitrarily large amounts of heat, and perform arbitrarily large amounts of work, but only the remainder $(\partial Q_{surr.\to {sys.}}-\partial W_{sys.\to surr.})$ will actually cause a change in the energy $U_{sys.}$. The internal energy is a state variable as it describes the state of the system; however, work and heat transfer are not.

A physical example which illustrates this is the engine: a thermodynamic system that can perform arbitrary amounts of work provided sufficient heat/energy is supplied, yet returns to its initial state at the end of every cycle. An inexact differential implies there is no unique relationship between the variables (we cannot integrate this equation). Interestingly, inexact differentials can often be transformed into simpler exact differentials through the use of constraints. For example, if the engine is seized and no work can be carried out ($\partial W_{sys.\to surr.}=0$), then only heat transfer can change the energy of the system and we now have an exact differential relationship, ${\rm d}U_{sys.}={\rm d}Q_{surr.\to {sys.}}$. This constraint is far too restrictive in general and another constraint, known as reversibility, must be invoked to generate exact differential equations we can integrate.
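The path dependence of work can be illustrated with a quick numeric sketch (the end states are made up, and a monatomic ideal gas in reduced units is assumed so that $U=\tfrac{3}{2}p\,V$): two different paths between the same pair of states perform different amounts of work, while the change in the state variable $U$ is the same either way.

```python
# Sketch: work is an inexact differential, so it depends on the path,
# while the state variable U depends only on the end states.
# Assumed monatomic ideal gas in reduced units (N*R = 1): U = (3/2)*p*V.
p1, V1 = 2.0, 1.0   # initial state
p2, V2 = 1.0, 3.0   # final state

U1 = 1.5 * p1 * V1
U2 = 1.5 * p2 * V2
dU = U2 - U1        # path independent

# Path A: expand at constant p1, then drop the pressure at constant volume
W_A = p1 * (V2 - V1)
# Path B: drop the pressure at constant volume first, then expand at p2
W_B = p2 * (V2 - V1)

print(f"dU = {dU}, W_A = {W_A}, W_B = {W_B}")
assert W_A != W_B   # work is path dependent...
# ...but dU is fixed, so the heat Q = dU + W must differ between paths too.
```

Because $\Delta U$ is fixed by the end states, the heat transferred must also differ between the two paths, which is why both $Q$ and $W$ carry the inexact $\partial$ symbol.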

In the next two subsections, the concept of reversibility is introduced through consideration of cycles and is used to find exact differential descriptions of work and heat.

### Cycles, reversibility, and heat

A thermodynamic cycle is a process applied to a thermodynamic system which causes its state (and its state variables) to change but eventually return to its initial state (and so it also returns to the initial values of its state variables). For example, the combustion chamber inside an engine will compress and expand during its operation but it returns to its starting volume after each cycle. This leads to the following identity where the sum/integral of the changes over a cycle are zero, i.e., $\oint_{\rm cycle} {\rm d} V=0$, and similar identities must also apply for every state variable.

In 1855, Clausius observed that the integral of the heat transferred divided by the temperature is never positive when measured over a cycle, \begin{align*} \oint_{\rm cycle} \frac{\partial Q_{surr.\to {sys.}}}{T_{sys.}}\le0. \end{align*} This is known as the Clausius inequality. It was found that this integral approaches zero in the limit that the cycle is performed slowly. This limiting result indicates that the kernel of the integral actually contains a state variable, i.e., \begin{align}\label{eq:entropydefinition} {\rm d} S_{sys.} &= \frac{\partial Q_{surr.\to {sys.}}}{T_{sys.}}; & \text{(assuming slow internal changes)}, \end{align} where $S_{sys.}$ is the state variable known as the entropy of the system. Interestingly, the entropy (like the internal energy) is not directly observable and its existence is only revealed by this inequality.

As the integral is generally negative over a cycle, it indicates that entropy is generated within the system and must be removed to allow it to return to its initial state (except in the limit of slow changes). This has led to the terminology of the irreversible cycle, $\oint_{\rm cycle}{\rm d}S>0$, and the idealised reversible cycle, $\oint_{\rm cycle}{\rm d}S=0$, which can be returned to its starting state without removing entropy.

If the Clausius inequality is true, the total entropy of an isolated system can only increase over time. Assuming the universe is an isolated system (at least over the timescales we're interested in), our thermodynamic system and its surroundings must always together have a positive (or zero) entropy change.
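A minimal numeric sketch of this (the temperatures and heat quantity are assumed for illustration): heat flowing spontaneously from a hot body to a cold one always generates entropy overall, even though the hot body's own entropy decreases.

```python
# Sketch: entropy generation from irreversible heat transfer between
# two large reservoirs. Each reservoir is large enough that its
# temperature is unchanged, so dS = Q/T applies to each separately.
Q = 1000.0      # J transferred (assumed)
T_hot = 400.0   # K (assumed)
T_cold = 300.0  # K (assumed)

dS_hot = -Q / T_hot    # the hot reservoir loses entropy
dS_cold = +Q / T_cold  # the cold reservoir gains more entropy
dS_total = dS_hot + dS_cold

# Spontaneous (hot -> cold) transfer generates entropy, consistent with
# the second law; it vanishes only in the limit T_hot -> T_cold.
print(f"dS_total = {dS_total:.4f} J/K")
assert dS_total > 0
```

Repeating the calculation with the temperatures brought closer together shows the total entropy generation shrinking towards zero, the reversible limit.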

One last thing to note: thermodynamic processes which are not cycles may also be reversible or irreversible. For a general process to be reversible, the total entropy change of the system and its surroundings together must remain zero. This allows the entropy to increase or decrease in the system, but only if the surroundings have a compensating opposite change. Let's now introduce the various forms of work for comparison.

### Work

The work term $\partial W_{sys.\to surr.}$ represents all methods of transferring energy other than as heat. Reversible paths reduce total entropy changes to zero, which minimizes the heat transferred and actually maximizes the amount of work performed by the system for a given process. It also turns work into an exact differential!

As an illustrative example, consider the emptying of a balloon via popping it versus untying the neck and letting it go. In the first case, no work is done as the air is immediately released into the surroundings: this is the quickest path to deflating the balloon and thus it maximizes the entropy generated. Untying the neck instead, the air jet leaving the balloon will perform work by propelling the balloon around the room (thus yielding kinetic energy). This slower release of air has allowed work to be extracted.

All work can be expressed as a generalized driving force, $\vec{F}_{sys.}$, which is displaced by a change in the corresponding generalized distance, $\vec{L}_{sys.}$. For the balloon, the force is the pressure difference in the neck (and the air resistance, which should be equal and opposite when the system is reversible) and the distance is the travel of the balloon. The reversible limit corresponds to infinitesimally slow/small changes of the distance (i.e., ${\rm d}\vec{L}$), allowing all opposing forces time to remain in balance, resulting in the following general expression for the work, \begin{align*} {\rm d}W_{sys.} &= \sum \vec{F}_{sys.}\cdot {\rm d}\vec{L}_{sys.} & \text{(assuming reversibility)}. \end{align*}

For the balloon, there are three forms of work taking place. First, as the volume of the balloon is decreased, work must be performed to compress the volume against the pressure of the air within. The reversible pressure-volume work is then as follows, \begin{align}\label{eq:pressurevolumework} {\rm d} W_{{sys.},pV} = p_{sys.}\,{\rm d}V_{sys.}, \end{align} where the pressure $p$ is the generalised force and the volume $V$ is the generalised displacement. In addition, the balloon itself is shrinking, releasing the tension within its elastic surface. This is known as surface work, \begin{align}\label{eq:surfacework} {\rm d} W_{{sys.},surface} = \gamma_{sys.}\,{\rm d}\Sigma_{sys.}, \end{align} where $\gamma_{sys.}$ is the surface tension and $\Sigma_{sys.}$ is the surface area.
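As a sketch of the pressure-volume work term (assuming an ideal gas on an isothermal, reversible path; the numbers are illustrative), the integral $\int p\,{\rm d}V$ can be evaluated numerically and compared against the closed-form result $W = N\,R\,T\ln(V_2/V_1)$:

```python
import math

# Sketch: reversible pV work dW = p dV integrated along an isothermal
# ideal-gas path, checked against the analytic W = N*R*T*ln(V2/V1).
R, N, T = 8.314, 1.0, 300.0   # J/(mol K), mol, K (assumed values)
V1, V2 = 0.025, 0.0125        # m^3: compression to half the volume

def p(V):
    return N * R * T / V      # ideal-gas pressure along the path

# Simple trapezoidal quadrature of the work integral
steps = 20000
W = 0.0
for i in range(steps):
    Va = V1 + (V2 - V1) * i / steps
    Vb = V1 + (V2 - V1) * (i + 1) / steps
    W += 0.5 * (p(Va) + p(Vb)) * (Vb - Va)

W_analytic = N * R * T * math.log(V2 / V1)
print(f"numerical W = {W:.2f} J, analytic W = {W_analytic:.2f} J")
# W < 0: during compression the surroundings do work ON the gas.
assert abs(W - W_analytic) < 1e-2
```

The negative sign simply reflects the convention that $W$ here is work done by the system, which is negative during a compression.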

As air leaves the balloon through the neck it will carry away energy with it. This is known as chemical work, \begin{align}\label{eq:materialwork} {\rm d} W_{{sys.},mass} = -\sum_i^{N_C} \mu_{i,{sys.}}\,{\rm d} N_{i,{sys.}}, \end{align} where the chemical potential, $\mu_{i,{sys.}}$, is the energy added to the system if one mole of the component $i$ (from one of the $N_C$ components of the system) is added to or removed from the system by any process (e.g., flow through the boundaries or internal reactions). The definition of a component, $i$, in a thermodynamic system is flexible and may be used to represent a single type of atom, molecule, or elementary particle (i.e., electrons), or even a mixture of molecules (such as "air").

The term ${\rm d} N_{i,{sys.}}$ represents changes in the amounts of a species $i$. This change may be due to mass flowing in or out of a system, but it may also result from reactions within a system; however, for a closed system (a system which cannot exchange mass with any other system), chemical work is impossible and thus the conservation of energy requires that the following holds true (even if ${\rm d} N_{i,{sys.}}\neq 0$ due to internal processes such as reactions), \begin{align*} \sum_i^{N_C} \mu_{i,{sys.}}\,{\rm d} N_{i,{sys.}} &= 0 & \text{for closed systems}. \end{align*} Closed systems are typical during process/unit-operation calculations; however, as these closed systems are often composed of multiple open sub-systems (i.e. multiple interacting phases within a closed vessel), the chemical work term is always useful to retain.

### Summary of the fundamental equation

In summary, under the constraint of a reversible system, the expression for entropy (Eq. \eqref{eq:entropydefinition}) and any relevant work terms (Eq. \eqref{eq:pressurevolumework}-\eqref{eq:materialwork}) can be substituted into the energy balance of Eq. \eqref{eq:initialebalance}, to yield the fundamental thermodynamic equation, \begin{align}\label{eq:fundamentalThermoRelation} {\rm d} U &= T\,{\rm d}S -p\,{\rm d}V +\sum_i^{N_C} \mu_{i}\,{\rm d} N_{i}+\cdots, \end{align} where the subscripts have been dropped from every term for convenience. Other work terms, such as the surface or electrical work, can be added to this equation depending on the system studied; however, the pressure-volume and chemical work terms are the most important from a process engineering perspective.

### Solution of the fundamental equation

As Eq.\eqref{eq:fundamentalThermoRelation} is an exact differential, if the internal energy is taken as a function $U(S,\,V,\,\left\{N_i\right\})$ then the total derivative of the internal energy in these variables is as follows, \begin{align*} {\rm d}U= \left(\frac{\partial U}{\partial S}\right)_{V,\left\{N_j\right\},\cdots}{\rm d}S + \left(\frac{\partial U}{\partial V}\right)_{S,\left\{N_j\right\},\cdots}{\rm d}V + \sum_i^{N_C}\left(\frac{\partial U}{\partial N_i}\right)_{S,V,\left\{N_{j\neq i}\right\},\cdots}{\rm d} N_{i}+\cdots, \end{align*} where, for clarity, the variables which are held constant while a partial derivative is taken are written as subscripts on the parenthesis surrounding the derivative (this is needed for clarity as in thermodynamics we often change the set of independent and dependent variables).

Comparing the total derivative above to the fundamental thermodynamic relation of Eq.\eqref{eq:fundamentalThermoRelation} yields the following definitions of the partial derivatives, \begin{align*} \left(\frac{\partial U}{\partial S}\right)_{V,\left\{N_{j}\right\},\cdots}&= T & \left(\frac{\partial U}{\partial V}\right)_{S,\left\{N_j\right\},\cdots}&=-p & \left(\frac{\partial U}{\partial N_i}\right)_{S,V,\left\{N_{j\neq i}\right\},\cdots}&=\mu_i. \end{align*} This is the first indication that thermodynamics is a powerful tool as it has already found a differential relationship between the internal energy and the intensive properties. Also, as the variables of $U(S,\,V,\,\left\{N_i\right\})$, are all extensive, Euler's solution for homogeneous functions applies.

**Proof of solution for homogeneous functions**
Consider some function (in our case an extensive thermodynamic property), $Z$. Assume that the property is only a function of the extensive quantities $\left\{A_i\right\}$ (these may be the molar amounts $\left\{N_i\right\}$, volume $V$, and entropy $S$, which are all extensive). If all of the extensive properties $\left\{A_i\right\}$ are scaled equally by some factor, $k$, the extensive thermodynamic property must also scale. Thus, \begin{align*} Z\left(\left\{k\,A_{i}\right\}\right) = k\,Z\left(\left\{A_{i}\right\}\right), \end{align*} where $k$ is some arbitrary scaling factor. $Z$ is therefore a homogeneous function of first order in $\left\{A_i\right\}$. Taking the derivative of both sides with respect to $k$ (chain rule on the LHS): \begin{align*} \frac{\partial \left\{k\,A_{i}\right\}}{\partial k}\cdot\frac{\partial}{\partial \left\{k\,A_{i}\right\}}Z\left(\left\{k\,A_{i}\right\}\right) &= \frac{\partial }{\partial k} k\,Z\left(\left\{A_{i}\right\}\right)\\ \left\{A_{i}\right\}\cdot\frac{\partial}{\partial \left\{k\,A_{i}\right\}}Z\left(\left\{k\,A_{i}\right\}\right) &= Z\left(\left\{A_{i}\right\}\right). \end{align*} Setting $k=1$ and expanding the dot product as a sum, \begin{align*} Z\left(\left\{A_{i}\right\}\right) = \sum_{i}A_{i} \frac{\partial Z}{\partial A_{i}}. \end{align*} This allows us to solve for $Z$ if the partial derivatives in terms of each of its extensive parameters are known.

This allows the equation to be "solved" immediately as it is a first-order homogeneous function of the extensive properties. \begin{align}\label{eq:intEnergy} U= T\,S - p\,V + \sum_i^{N_C}\mu_i\,N_i+\cdots. \end{align} This is the remarkably simple solution for the internal energy which is the first thermodynamic potential we encounter.
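This result can be checked numerically with a toy model (an assumed monatomic ideal gas written in reduced units, not given in the text, but first-order homogeneous in $S$, $V$, $N$): finite-difference partial derivatives of $U(S,V,N)$ recover $T$, $-p$, and $\mu$, and these satisfy $U = T\,S - p\,V + \mu\,N$.

```python
import math

# Toy check of Euler's theorem. Assumed model: a monatomic ideal gas in
# reduced units, U(S, V, N) = N^(5/3) * V^(-2/3) * exp(2*S/(3*N)),
# which is homogeneous of first order in (S, V, N).

def U(S, V, N):
    return N**(5/3) * V**(-2/3) * math.exp(2*S/(3*N))

def partial(f, args, i, h=1e-6):
    """Central finite difference of f in its i-th argument."""
    a_plus = list(args); a_plus[i] += h
    a_minus = list(args); a_minus[i] -= h
    return (f(*a_plus) - f(*a_minus)) / (2*h)

S, V, N = 1.5, 2.0, 1.0           # arbitrary state (reduced units)
T = partial(U, (S, V, N), 0)      # T  =  (dU/dS)_{V,N}
p = -partial(U, (S, V, N), 1)     # p  = -(dU/dV)_{S,N}
mu = partial(U, (S, V, N), 2)     # mu =  (dU/dN)_{S,V}

# Euler's theorem for first-order homogeneous functions:
lhs = U(S, V, N)
rhs = T*S - p*V + mu*N
print(f"U = {lhs:.6f}, T*S - p*V + mu*N = {rhs:.6f}")
assert abs(lhs - rhs) < 1e-6
```

Doubling $S$, $V$, and $N$ together doubles $U$, which is exactly the homogeneity property the proof above relies on.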

### Natural variables

When performing calculations in thermodynamics, we are free to specify our system state using any of the state variables introduced so far $(U,\,T,\,S,\,p,\,V,\,\left\{\mu_i\right\}^{N_C},\,\left\{N_i\right\}^{N_C})$, but how many are required and which ones are independent? Each term of the fundamental thermodynamic equation consists of a so-called conjugate pairing of an intensive property such as $T$, $p$, or $\mu_i$ and a corresponding conjugate extensive property $S$, $V$, or $\left\{N_i\right\}$ respectively. Provided all the relevant work terms have been included, it has been observed that a thermodynamic state is fully specified if at least one variable is specified for each of the conjugate pairs considered.

The natural variables for a particular function are whichever choices result in an exact differential relationship for that function. For example, the internal energy has the natural variables $U(S,\,V,\,\left\{N_i\right\}, \ldots)$. This is apparent from Eq.\eqref{eq:fundamentalThermoRelation}, where these variables are all exact differentials related to ${\rm d}U$. Unfortunately, these variables are not particularly nice (and the internal energy is not particularly interesting) as we cannot directly measure the entropy or internal energy in experiments. There are other thermodynamic potentials which have more convenient natural variables and these can be derived by considering the consequences of the second law of thermodynamics. These are all Legendre transforms of the internal energy thus their natural variables will always correspond to one variable from each conjugate pair.

## Free-energies and thermodynamic potentials

The second law of thermodynamics has already been introduced via the Clausius inequality and is formally written as follows, \begin{align*} {\rm d} S_{total} \ge 0, \end{align*} i.e., the total entropy of the universe (our system and its surroundings) must always increase or remain constant. This statement implies that the only "stationary" thermodynamic state is where the entropy has reached its maximum, henceforth known as the equilibrium state. The equilibrium state is of particular interest as all thermodynamic systems approach it and, if left undisturbed, remain there indefinitely.

It is often the basis of process calculations that a particular thermodynamic system has reached equilibrium, thus determining the equilibrium state (via a maximization of the total entropy) is our primary goal. Starting from some initial non-equilibrium state, some unconstrained internal parameters (e.g., composition, reaction progression) are varied such that the total entropy is maximized.

Although the universe's entropy must be maximized at equilibrium, our interest is in a smaller thermodynamic system contained within it. The total entropy is the sum of the entropy of this system within the universe and the rest of the universe, i.e., \begin{align*} S_{total}&=S_{sys.}+S_{surr.}. \end{align*} It is clear that both $S_{sys.}$ and $S_{surr.}$ may increase or decrease, provided the overall change results in an increase of $S_{total}$.

It is henceforth assumed that the surroundings are at equilibrium, they remain at equilibrium, and any interaction with the surroundings is reversible. The author considers these the largest assumptions they have ever made, both physically and in terms of approximation; however, it is equivalent to a “worst case” estimate for the generation of entropy. In this case, there can be no “external” process driving changes within the thermodynamic system. Anything that the system does must happen “spontaneously”. Consequently, the only possible mechanism by which the universe's entropy may change is via heat transfer from the system (and the heat flux becomes an exact differential). \begin{align*} {\rm d} S_{total}&={\rm d} S_{sys.}+{\rm d} S_{surr.}\\ &={\rm d} S_{sys.}+\frac{{\rm d} Q_{sys.\to surr.}}{T_{surr.}}\\ &={\rm d} S_{sys.}-\frac{{\rm d} Q_{surr.\to sys.}}{T_{surr.}}. \end{align*} This makes it clear that the entropy change of the system must be balanced against the entropy it is generating in the surroundings through heat transfer (the surroundings are also so large that the other effects of the heat transfer are negligible). Inserting the fundamental thermodynamic equation (Eq.\eqref{eq:fundamentalThermoRelation}), \begin{align} {\rm d} S_{total}&={\rm d} S_{sys.}-\frac{{\rm d} U_{sys.} + p_{sys.}\,{\rm d}V_{sys.} - \sum_i^{N_C} \mu_{i,sys}\,{\rm d} N_{i,sys.}}{T_{surr.}}. \end{align} To simplify the remainder of this section, the thermodynamic system is now assumed to be closed, which allows the elimination of the chemical potential term, \begin{align}\label{eq:totalentropy} -T_{surr.}\,\left({\rm d} S_{total}\right)_{C}&={\rm d} U_{sys.} + p_{sys.}\,{\rm d}V_{sys.}-T_{surr.}\,{\rm d} S_{sys.}. \end{align} The subscript $C$ on the parenthesis is used to indicate that the system is closed.
This equation makes it clear that, in a closed system interacting reversibly with surroundings at local equilibrium, the overall equilibrium is not solely linked to the entropy of the system itself but is found by minimising the RHS of Eq.\eqref{eq:totalentropy} (due to the negative sign on the total entropy change). The RHS often corresponds to a thermodynamic potential which arises under particular constraints; these are now derived in the sections below.

### Isolated system (closed system at constant volume and internal energy)

Consider a system which is completely isolated from its surroundings. It cannot exchange heat $\left({\partial}Q=0\right)$ or work $\left({\partial}W=0\right)$. As we're in the zero-work reversible limit, the system must be at constant volume $\left({\rm d}V=0\right)$ and the molar amounts $\left\{N_i\right\}$ may individually vary but only such that $\left(\sum_i^{N_C} \mu_i\,{\rm d} N_{i}=0\right)$. Examining the original balance in Eq.\eqref{eq:initialebalance}, \begin{align*} {\rm d} U = \cancelto{0}{\partial Q} - \cancelto{0}{\partial W} &= 0, \end{align*} thus it is clear that the isolated constraint is also equivalent to ${\rm d}V=0$ and ${\rm d}U=0$. Examining the total entropy under these constraints, \begin{align*} \left({\rm d} S_{total}\right)_{U,V,C}&={\rm d} S_{sys.}+\frac{\cancelto{0}{\partial Q_{sys.\to surr.}}}{T_{surr.}}\\ &={\rm d} S_{sys.}, \end{align*} where the subscripts on the brackets indicate that $U$ and $V$ are held constant in the closed system. It is clear from this expression (and our own intuition) that, for an isolated system with surroundings already at equilibrium, all changes in the total entropy must arise from changes in the system entropy.

To put this in terms of minimising a thermodynamic potential we define the negative of the entropy (sometimes called negentropy), \begin{align*} \left(f\right)_{U,V,C}=f_{U,V,C}=-S_{sys.}. \end{align*} To determine the equilibrium state the potential, $f_{U,V,C}$, must be minimised and this action is equivalent to maximising the total entropy.

### Closed systems under constant temperature and pressure

Isolated systems are interesting in certain cases; however, in process engineering, we often have a closed system at a fixed temperature $T$ and pressure $p$. Under these conditions the system is free to transfer heat and change its volume. For the interaction of the system with its surroundings to be reversible, the surroundings must have the same temperature $T_{sys.}=T_{surr.}$ and pressure $p_{sys.}=p_{surr.}$.

If we now define a new thermodynamic potential called the Gibbs free energy, \begin{align*} G = U + p\,V - T\,S, \end{align*} and look at changes in $G$ while holding $T$ and $p$ constant, we have, \begin{align*} \left({\rm d} G\right)_{T,p} = {\rm d} U +p\,{\rm d} V - T\,{\rm d} S. \end{align*} Comparing this expression against Eq.\eqref{eq:totalentropy} it is immediately apparent that, \begin{align*} \left({\rm d} S_{total}\right)_{T,p,C} &= -\left(\frac{{\rm d} G_{sys.}}{T_{sys.}}\right)_{T,p,C}. \end{align*} Thus, maximisation of $S_{total}$ is equivalent to minimisation of $G$ when $T$ and $p$ are held constant. Writing this in the same notation as before we have, \begin{align*} f_{T,p,C} = G_{sys.}. \end{align*}
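To make this concrete, here is a sketch (with assumed, dimensionless standard-state chemical potentials, not values from the text) of finding equilibrium at constant $T$ and $p$ by minimising $G$ for an ideal $A \rightleftharpoons B$ isomerisation; the minimum reproduces the familiar equilibrium-constant result.

```python
import math

# Sketch: equilibrium at constant T, p by minimising G. For an ideal
# A <-> B isomerisation with mole fraction x of B, per mole of mixture:
#   G(x)/(R*T) = (1-x)*g_A + x*g_B + (1-x)*ln(1-x) + x*ln(x),
# where g_A, g_B are assumed dimensionless standard chemical potentials.
g_A, g_B = 0.0, -1.0  # made-up values; B is the more stable isomer

def G(x):
    return (1-x)*g_A + x*g_B + (1-x)*math.log(1-x) + x*math.log(x)

# Crude scan for the minimum; analytically x/(1-x) = exp(g_A - g_B) = K.
xs = [i/10000 for i in range(1, 10000)]
x_eq = min(xs, key=G)

K = math.exp(g_A - g_B)
x_analytic = K / (1 + K)
print(f"scan: x_eq = {x_eq:.4f}, analytic: {x_analytic:.4f}")
assert abs(x_eq - x_analytic) < 1e-3
```

Setting ${\rm d}G/{\rm d}x = 0$ by hand gives $\ln[x/(1-x)] = g_A - g_B$, i.e. the equilibrium constant, which the brute-force minimisation recovers.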

### Closed systems under constant temperature and volume

Again, reversibility requires that $T_{surr.}=T_{sys.}$. No pressure-volume work occurs as the volume is fixed, so the pressure of the surroundings is actually irrelevant. Chemical work is also absent as the system is closed. \begin{align} \left({\rm d} S_{total}\right)_{T,V,C}&=-\frac{{\rm d} U_{sys.} + p_{sys.}\,\cancelto{0}{{\rm d}V_{sys.}}-T_{surr.}\,{\rm d} S_{sys.}}{T_{surr.}} =-\left(\frac{{\rm d}A}{T_{sys.}}\right)_{T,V,C}. \end{align} In the final equality, another thermodynamic potential is introduced: the Helmholtz free energy, \begin{align*} A &= U - T\,S. \end{align*} It is now clear that under these constraints the maximum total entropy is reached at the minimum Helmholtz free energy. I.e., \begin{align*} f_{T,V,C} = A_{sys.}. \end{align*}

### Closed system under constant pressure and enthalpy

The enthalpy is defined as follows, \begin{align*} H&=U+p\,V\\ {\rm d} H &= {\rm d} U+ V\,{\rm d} p + p\,{\rm d} V. \end{align*} For constant pressure and enthalpy we have, \begin{align*} \left({\rm d} U + p\,{\rm d} V\right)_{H,p} = 0. \end{align*} Examining the total entropy under these constraints (Eq.\eqref{eq:totalentropy}), \begin{align} \left({\rm d} S_{total}\right)_{H,p,C}&=-\frac{\cancelto{0}{{\rm d} U_{sys.} + p_{sys.}\,{\rm d}V_{sys.}}-T_{surr.}\,{\rm d} S_{sys.}}{T_{surr.}} ={\rm d} S_{sys.}. \end{align} Even though the surroundings temperature is unknown, it is merely a scaling factor and again the maximisation of the total entropy is equivalent to the minimisation of the system's negentropy, \begin{align*}f_{H,p,C} = -S_{sys.}.\end{align*}

### Closed system under constant entropy and pressure

Again, starting with the total entropy of a closed system, Eq.\eqref{eq:totalentropy}, \begin{align} \left({\rm d} S_{total}\right)_{p,S,C}&=-\frac{{\rm d} U_{sys.} + p_{sys.}\,{\rm d}V_{sys.}-T_{surr.}\,\cancelto{0}{{\rm d} S_{sys.}}}{T_{surr.}}. \end{align} We note that, \begin{align*} \left({\rm d} H\right)_{p} &= {\rm d} U+ \cancelto{0}{V\,{\rm d} p} + p\,{\rm d} V \end{align*} Thus, we have \begin{align*} \left({\rm d} S_{total}\right)_{p,S,C} &= -\left(\frac{{\rm d}H}{T_{surr.}}\right)_{p,S,C}. \end{align*} The surroundings can be at some arbitrary constant temperature (they are at equilibrium and very large thus unaffected by heat transfer), thus $T_{surr.}$ is simply a scaling factor. The thermodynamic potential to minimise is then $f_{p,S,C} = H_{sys.}$.

### Closed system under constant entropy and volume

Finally, the simplest example, \begin{align} \left({\rm d} S_{total}\right)_{S,V,C}&=-\frac{{\rm d} U_{sys.} + p_{sys.}\,\cancelto{0}{{\rm d}V_{sys.}}-T_{surr.}\,\cancelto{0}{{\rm d} S_{sys.}}}{T_{surr.}} = -\frac{{\rm d} U_{sys.}}{T_{surr.}}. \end{align} Thus, the thermodynamic potential to minimise for maximum total entropy is $f_{S,V,C} = U_{sys.}$ and the surroundings temperature is unimportant.

### Summary

In summary, there are a number of relevant thermodynamic potentials for a closed system. These are defined below, \begin{align} U&= T\,S - p\,V + \sum_i^{N_C}\mu_i\,N_i\label{eq:Urule}\\ H&= U + p\,V = T\,S + \sum_i^{N_C}\mu_i\,N_i\label{eq:Hrule}\\ A&= U - T\,S = - p\,V + \sum_i^{N_C}\mu_i\,N_i\label{eq:Arule}\\ G&= H - T\,S = \sum_i^{N_C}\mu_i\,N_i\label{eq:Grule} \end{align}

For each set of constrained thermodynamic states in closed systems, a particular thermodynamic potential is minimised at equilibrium. These are summarised in the table below:

| Constants | Function to minimise, $f$ |
| --- | --- |
| $p,\,S$ | $H$ |
| $p,\,T$ | $G$ |
| $p,\,H$ | $-S$ |
| $V,\,S$ | $U$ |
| $V,\,T$ | $A$ |
| $V,\,U$ | $-S$ |

The variables held constant correspond to the "natural" variables of each potential. Expressing the change in each thermodynamic potential in terms of these natural variables yields the following differential equations, \begin{align}\label{eq:dU} {\rm d}U &= T\,{\rm d}S - p\,{\rm d}V + \sum_i^{N_C}\mu_{i}\,{\rm d} N_{i}\\ -{\rm d}S &= -\frac{1}{T}{\rm d}U - \frac{p}{T}{\rm d}V + \sum_i^{N_C}\frac{\mu_{i}}{T}\,{\rm d} N_{i}\\ {\rm d}A &= -S\,{\rm d}T - p\,{\rm d}V + \sum_i^{N_C}\mu_{i}\,{\rm d} N_{i}\label{eq:dA}\\ {\rm d}H &= T\,{\rm d}S + V\,{\rm d}p + \sum_i^{N_C}\mu_{i}\,{\rm d} N_{i}\\ {\rm d}G &= -S\,{\rm d}T + V\,{\rm d}p + \sum_i^{N_C}\mu_{i}\,{\rm d} N_{i}.\label{eq:dG} \end{align} The significance of the chemical potential cannot be overstated. It is the change of each thermodynamic potential per mole of each species exchanged when the other natural variables of the potential are held constant, \begin{align}\label{eq:ChemPotDefinition} \mu_i = -T\left(\frac{\partial S}{\partial N_i}\right)_{U,V,\left\{N_{j\neq i}\right\}}= \left(\frac{\partial A}{\partial N_i}\right)_{T,V,\left\{N_{j\neq i}\right\}} = \left(\frac{\partial H}{\partial N_i}\right)_{S,p,\left\{N_{j\neq i}\right\}} = \left(\frac{\partial G}{\partial N_i}\right)_{T,p,\left\{N_{j\neq i}\right\}} \end{align} The implication of this is that when dealing with systems exchanging mass, but constrained by two "natural" variables, the chemical potential for each species must be equal in all phases, regardless of which constrained variables are actually used (otherwise a change of mass between systems could change the value of the overall thermodynamic potential implying it is not at a minimum). It is also the partial molar Gibbs free energy ($G=\sum_i N_i\,\mu_i$) and thus calculation of the Gibbs free energy can be reduced to considering the chemical potential.
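The equality of chemical potentials between open sub-systems can be illustrated numerically (a sketch assuming an ideal-gas Helmholtz free energy in reduced units with $R\,T=1$; the volumes and amounts are made up): minimising the total free energy over the split of particles between two fixed-volume phases equalises $\mu$ in both.

```python
import math

# Sketch: two open sub-systems of fixed volumes V1, V2 at the same
# temperature exchange particles at fixed total N. Minimising the total
# Helmholtz free energy equalises the chemical potentials.
# Assumed ideal-gas model in reduced units (R*T = 1): A = N*(ln(N/V) - 1).
V1, V2, N_total = 1.0, 3.0, 4.0   # made-up values

def A(N, V):
    return N * (math.log(N / V) - 1.0)  # ideal-gas A/(R*T)

def mu(N, V):
    return math.log(N / V)              # mu/(R*T) = dA/dN

# Scan the split of particles between the two sub-systems
best_N1 = min((i/10000 * N_total for i in range(1, 10000)),
              key=lambda N1: A(N1, V1) + A(N_total - N1, V2))

mu1, mu2 = mu(best_N1, V1), mu(N_total - best_N1, V2)
print(f"N1 = {best_N1:.3f}, mu1 = {mu1:.4f}, mu2 = {mu2:.4f}")
assert abs(mu1 - mu2) < 1e-2
```

The minimiser puts the particles at equal density ($N_1/V_1 = N_2/V_2$), i.e. equal chemical potential, exactly as the argument above demands: any unequal split could lower the total potential by moving mass towards the phase with the lower $\mu$.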